Dressing the Dressing Chain

The dressing chain is derived by applying Darboux transformations to the spectral problem of the Korteweg-de Vries (KdV) equation. It is also an auto-B\"acklund transformation for the modified KdV equation. We show that by applying Darboux transformations to the spectral problem of the dressing chain one obtains the lattice KdV equation as the dressing chain of the dressing chain, and that the lattice KdV equation also arises as an auto-B\"acklund transformation for a modified dressing chain. In analogy to the results obtained for the dressing chain (Veselov and Shabat proved complete integrability for odd-dimensional periodic reductions), we study the $(0,n)$-periodic reduction of the lattice KdV equation, which is a two-valued correspondence. We provide explicit formulas for its branches and establish complete integrability for odd $n$.

Introduction

The dressing chain [23,25,30] appeared in the application of Darboux transformations to the Schrödinger (Sturm-Liouville) equation, which is the spectral problem for the Korteweg-de Vries (KdV) equation. A detailed study concerning the integrability properties as well as solutions of the model was presented in [30]. In particular, the authors proved that the dressing chain with a periodic constraint in odd dimensions is completely integrable in the sense of Liouville-Arnold. The dressing chain can also be obtained as an auto-Bäcklund transformation of the modified KdV (mKdV) equation via the celebrated Miura transformation between KdV and mKdV [19] and a symmetry of mKdV. The two ways of deriving the dressing chain are not unrelated, as the Miura transformation itself can be derived from factorisation of the Schrödinger equation [7]. Both the Darboux and the Bäcklund approach can be seen as a discretisation process, and the two methods have been applied to other equations. In particular, in the discrete setting, Spiridonov and Zhedanov [26] considered a tri-diagonal discrete Schrödinger equation, for which discrete Darboux transformations gave rise to two equivalent systems: the discrete time Toda lattice and a system they called the discrete dressing chain [26, equations (5.30) and (5.31)]. As the discrete Schrödinger equation considered in [26] is the spectral problem for the Toda lattice [17], one could refer to these systems as dressing chains of the Toda lattice. Starting from the Volterra equation and using two discrete Miura transformations, Levi and Yamilov obtained an integrable lattice equation which they regard as a direct analogue of the dressing chain [16, equation (31)], cf. [9]. In our context, we would refer to that equation as an auto-Bäcklund transformation for a modified Volterra equation.

In general, for a given integrable equation, one can ask the following questions, see Fig. 1:

1. Is there a Miura or, more generally, a Bäcklund transformation to a modified equation which has a symmetry? This then gives rise to an auto-Bäcklund transformation of the modified equation, which discretises the equation.
2. Is there an associated spectral problem whose factorisation yields a dressing chain?
3. Does the Bäcklund transformation in (1) arise in the factorisation of the spectral problem in (2)?

In this paper, our starting point is the dressing chain. We factorise its discrete spectral problem, which itself is an exact discretisation, cf. [32], of the (continuous) Schrödinger equation.
It turns out that the discrete dressing chain (of the dressing chain) coincides with the (non-autonomous) lattice Korteweg-de Vries (lKdV) equation. By studying a related Lax representation we identify a Bäcklund transformation to a modified dressing chain which admits a symmetry. The derived auto-Bäcklund transformation is again given by the lKdV equation. In analogy to the continuous case, cf. [30], we study the $(0,n)$-periodic reduction of the (discrete) dressing chain of the dressing chain (a.k.a. the lKdV equation), which is a two-valued correspondence (i.e., a multi-valued map). We provide explicit formulas for its two branches and establish linear growth of multi-valuedness. Moreover, we prove (in odd dimensions) that the map is Liouville integrable with respect to a quadratic Poisson structure of Lotka-Volterra type.

Background

We clarify Fig. 1 by succinctly providing some details for the KdV equation. We hope it also makes clear to the reader that the way the dressing chain is related to the KdV equation is completely analogous to the way the lattice KdV equation is related to the dressing chain. The KdV equation $u_t = u_{xxx} - 6uu_x$ arises as the compatibility condition, $L_t = [L, M]$, for the system of linear equations $L\phi = \lambda\phi$, $\phi_t = M\phi$, where $L$ is the Schrödinger operator $L = -D^2 + u$, $M = 4D^3 - 3(uD + Du)$, and $\lambda$ is a spectral parameter. One can check that if $v$ satisfies the mKdV equation $v_t = v_{xxx} - 6(v^2 + \alpha)v_x$, then $u$ given by the Miura transformation (2.1) satisfies the KdV equation. As the mKdV equation is invariant under $v \to -v$, another Miura transformation, (2.2), is obtained by replacing $v$ with $-v$. Combining the two equations (2.1) and (2.2) yields an auto-Bäcklund transformation for the mKdV equation, which coincides with the dressing chain [30]. A related chain, which is an auto-Bäcklund transformation for the potential KdV equation, was already written down by Wahlquist and Estabrook [31], who used Bianchi's permutability theorem to show that it generates hierarchies of solutions due to a nonlinear superposition principle. More general auto-Bäcklund transformations (and their interpretation as differential-difference equations) were given in [14,15]. Auto-Bäcklund transformations for differential-difference equations are lattice equations, and some examples were presented in [10,16]. Darboux transformations for differential and difference equations (also known as dressing transformations) are maps of the functions and the coefficients that preserve the form of the equations [5]. They can be obtained by factorisation of operators, cf. [3,7,12,21,22], and they provide an effective way to construct exact solutions of a wide range of integrable equations (see, e.g., the monograph [18]). Recall that the Schrödinger operator $L$ can be decomposed as in (2.3), subject to the constraint (2.1). Darboux [5] showed that interchanging the two factors of this decomposition yields a transformation which preserves the form of the Schrödinger equation; iterating the transformation produces the dressing chain (2.4). Denoting $v = v_i$ and its Darboux iterate by $v_{i+1}$, etc., and adding a periodic constraint, i.e., $v_{i+n} = v_i$ and $\alpha_{i+n} = \alpha_i$, one gets the finite-dimensional system of ordinary differential equations (2.5), which was shown to be completely integrable for odd $n$ [30].

Dressing the dressing chain

By eliminating the $x$-derivatives in the Schrödinger equation $L\phi = \lambda\phi$, using equation (2.3), one obtains the discrete Schrödinger equation (3.1), in which $T\colon z \mapsto \tilde{z}$ represents a shift operator. The discrete Schrödinger operator $K$ is the dual, with respect to (2.3), to the continuous operator $L$, cf. [24]. We note that the compatibility condition provides a Lax representation for the dressing chain (2.4).
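The displayed equations (2.1)-(2.5) of the Background section did not survive the extraction of this text. For orientation, a sketch of the standard construction they refer to, in the conventions of Veselov and Shabat [30] (sign conventions vary between references, so the actual displays may differ), is the following. Factorise
\[
L_i - \alpha_i = -(D - v_i)(D + v_i), \qquad u_i = v_i^2 - v_{i,x} + \alpha_i ,
\]
which is a Miura-type relation of the same form as (2.1)/(2.2). Interchanging the factors gives $L_{i+1} - \alpha_i = -(D + v_i)(D - v_i)$, i.e. $u_{i+1} = v_i^2 + v_{i,x} + \alpha_i$, and eliminating the potentials between consecutive steps yields the dressing chain
\[
(v_i + v_{i+1})_x = v_{i+1}^2 - v_i^2 + \alpha_{i+1} - \alpha_i ,
\]
whose closure under the periodicity $v_{i+n} = v_i$, $\alpha_{i+n} = \alpha_i$ is the finite-dimensional system denoted (2.5) above.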
The operator K can be decomposed as, cf. [32], Here β does not depend on the direction (as α does) but will depend on another discrete direction introduced below. In order that such a decomposition holds, one needs and Eliminating f leads to (h − g)g + α + β = 0. This can be solved by posing g = ψψ −1 , where ψ is a special solution of (3.1) with λ = −β. Now that f and g are well defined, we can apply the usual tactics (interchanging the two factors in the decomposition) to generate a Darboux transformation for (3.1). With and or, in terms of g, Shifts in the direction correspond to a second discrete direction, created by iterated Darboux transformations, and the parameter β varies in this direction. The linear system of equations (3.1) and (3.5) provides a Lax representation for the lKdV equation, KG = GK. In the light of Fig. 1, equation (3.7), or (3.8), is the dressing chain of the dressing chain. In analogy to the continuous case, we will consider a periodic reduction in the direction, i.e., f i+n = f i and β i+n = β i , and we take α = α to be a constant. The finite dimensional system of difference equations we will study is As we make explicitly in Section 5, it gives rise to a two-valued correspondence. It would also be justified to refer to the above system (3.7) as the discrete dressing chain, since its continuum limit coincides with (2.4). Using equations (3.3) and (3.6), one can express f = w − w, g = w − w. Substituting them into (3.4) gives the lattice potential KdV equation ( w − w)( w − w) = α + β, whose continuum limit with respect to the direction is [11,20] (3.10) From (3.2), (3.3) and the above expressions for f and g, one obtains v = w − w which relates the potential dressing chain (3.10) to the dressing chain (2.4). We will refer to the (0, n)-reduction of the lKdV equation (3.9) as the n-dimensional discrete dressing chain. The modified dressing chain One next wonders if the discrete dressing chain (3.7) (or (3.8)) is the auto-Bäcklund transformation of a modified dressing chain. This is indeed the case. The Lax equation (as well as v = 1 2 f − g + gx g ). The system (4.1) provides a Bäcklund transformation, cf. [11, Definition 2.1.1] between the dressing chain (2.4) and the following equation which we will refer to as the modified dressing chain. The modified dressing chain (4.2) admits the symmetry Applying this symmetry to the right hand sides of (4.1) and transforming the left hand sides by Combining the two Bäcklund transformations v + v = v + v we obtain f − g =f −¯ g, which shows that the lKdV equation is an auto-Bäcklund transformation for the modified dressing chain (4.2). 5 Explicit formulas for the n-dimensional discrete dressing chain, and linear growth of multivaluedness In this section we consider the (0, n)-reduction of the lattice KdV equation, which is a twovalued correspondence. We give explicit formulas for both branches (M, N ), and prove that M N M = N . The latter implies that the l-th iteration of the correspondence is 2l-valued, cf. In the finite reduction (3.9), without loss of generality, we set α = 0 since it can be absorbed into the parameters β j . Having fixed n ∈ N and taking i ∈ I = {1, 2, . . . , n} subject to the periodic boundary conditions f n+i = f i , β n+i = β i for all i ∈ I, the system of equations (3.9) reads . . . These equations define a two-valued correspondence on R n . One solution of the system (5.1) is given by (which is f i = g i , cf. (3.4)). This defines a map which is an involution. 
The other solution of the system (5.1) gives rise to a more intriguing map on R n , which will be denoted by M . We next provide explicit formulas for M and for its inverse. Remark 5.1. Consider the finite version of system (3.9) defined by choosing n ∈ N and restricting i ∈ I subject to the open boundary condition f n+1 = β n+1 = 0. The resulting system then takes the form whose unique solution is given by the involution (5.2). (5.4) For n = 3 the function G reads We will make use of the following cyclic permutation τ : I → I and of the involution σ : I → I defined by Simply stated, the permutation τ is a shift modulo n, and σ is to reverse the elements of I. By some abuse of notation we will write, for any function H depending on the variables f 1 , f 2 , . . . , f n and the parameters β 1 , β 2 , . . . , β n , = H(f n , f n−1 , . . . , f 1 , β n , β n−1 , . . . , β 1 ). For example, for n = 3 we have A useful property of the above-defined functions is that, for any n, the expression is invariant under τ and σ. The following formula, which can be easily proved, is also useful. For any function H depending on f 1 , f 2 , . . . , f n , β 1 , β 2 , . . . , β n , we have τ στ H = σH, which implies, for all i, We now define functions where the indices are considered modulo n and in the set I, in particular or equivalently that f 1 f n G − β 1 τ G is fixed under τ , which holds due to (5.5). The maps M and N satisfy the following relation. Applying the involution σ to all indices in system (5.1), and interchanging f i ↔ f i one observes that then we also have f σ j = M j f σ i , and applying σ to the j index (which just enumerates the functions), one has Thus the inverse map is which as a function of the f i has components The latter formula is obtained using (5.6). It follows immediately that Using the formulas (5.3) and (5.4), one has and similarly and Combining all these leads to (5.8). As a corollary of Lemma 5. [29,Section 6.2] it is proved that the l-th iteration of such a correspondence is 2l-valued. Complete integrability of the odd-dimensional discrete dressing chain In this section we show that the correspondence (M, N ) is Liouville integrable with respect to a quadratic Poisson structure which is of Lotka-Volterra type. Our main result is the following theorem. Theorem 6.1. For odd n, the correspondence defined by (5.1) is Liouville integrable. Before proving the theorem, we introduce the relevant Poisson structures and provide some of their basic properties. Lotka-Volterra Poisson structures Lotka-Volterra Poisson structures are homogeneous quadratic, and they are defined on R n by the formulas where A is a constant skew-symmetric matrix. The rank of this Poisson structure is equal, at a generic point, to the rank of the constant matrix A. Each null-vector of the matrix A is associated to a Casimir of the corresponding Poisson bracket. If v = (v 1 , v 2 , . . . , v n ) is such that vA = 0, then the function n i=1 x v i i is a Casimir of the Poisson bracket. Two linearly independent null-vectors correspond to two functionally independent Casimirs (for a proof see [13,Example 8.14]). In what follows we consider the Lotka-Volterra structures (6.1) where A is the n × n skewsymmetric matrix with its upper triangular part defined by The rank of the matrix A is n when n is even and n − 1 when n is odd with the null-vector v = (1, 1, . . . , 1); a Casimir of the corresponding Poisson structure is the function x 1 x 2 · · · x n . 
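The display (6.1) defining the bracket is likewise missing from the extracted text. The standard homogeneous quadratic Lotka-Volterra bracket it refers to, together with the elementary verification of the Casimir statement quoted above, reads as follows (the particular skew-symmetric matrix $A$ of (6.2) is not reproduced here):
\[
\{x_i, x_j\}_q = a_{ij}\, x_i x_j, \qquad A = (a_{ij}) = -A^{\mathsf T}.
\]
If $v = (v_1, \dots, v_n)$ satisfies $vA = 0$, then $C = \prod_{i=1}^{n} x_i^{v_i}$ obeys
\[
\{C, x_j\}_q = \sum_{i=1}^{n} \frac{v_i C}{x_i}\, a_{ij}\, x_i x_j = C\, x_j \sum_{i=1}^{n} v_i a_{ij} = C\, x_j\, (vA)_j = 0,
\]
so $C$ is a Casimir; in particular, the null-vector $v = (1, 1, \dots, 1)$ gives the Casimir $x_1 x_2 \cdots x_n$ mentioned above.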
With H = x 1 + x 2 + · · · + x n , the Hamiltonian vector field {·, H} q defines a system of differential equations which, up to a simple change of variables, is isomorphic to the Bogoyavlenskij lattice [1,2,4,28]. A simpler Poisson structure is the constant Poisson structure defined by the brackets where B is a constant n × n skew-symmetric matrix. After some tedious but straightforward calculations (see [ With H = x 1 +x 2 +· · ·+x n , the Hamiltonian vector field {·, H} b defines a system of differential equations which can be transformed to the dressing chain (2.5). The Poisson structure {·, ·} b is of rank n − 1 but with a more complicated Casimir than the product x 1 x 2 · · · x n (see [6,8] for an explicit construction of this Casimir). Remark 6.3. The involution N preserves any Poisson structure of Lotka-Volterra form. It follows that item (4), of the previous proposition, is (trivially) true for all n ∈ N. However, this is not the case for items (1) − (3) as for even n the equations in the previous proof do not hold for i = n. Note, for even n the bracket {·, ·} b is not Poisson, see condition (6.4). To derive sufficiently many independent invariants for the map (5.7), we employ a matrix version of the Lax representation. The compatibility condition is now written (modulo the periodic condition which implies that the (monodromy) matrix L = G n G n−1 · · · G 1 is isospectral. The eigenvalues of L (and therefore its trace) are invariants of our map (5.7). Using a decomposition of the Lax matrix G i similar to the one used in [6,30] (explained in a more general manner in [27]), we are able to calculate the trace of L and provide a succinct formula for the invariants. Futhermore, using the results of [30] we show that these invariants are in involution with respect to the Poisson bracket (6.1) with matrix (6.2) and therefore complete the proof of Theorem 6.1. Furthermore, these functions are in involution with respect to the Poisson bracket {·, ·} q . Proof . We decompose the matrix G i as follows This structure simplifies the computations. For example we can immediately verify the following properties The monodromy matrix is, with y i = −(λ + β i ), in the form L = A n n + y n K A n−1 n−1 + y n−1 K · · · A 1 1 + y 1 K = A n n A n−1 n−1 · · · A 1 1 + i∈I and its trace Substituting λ = 0 yields the expression for I 0 , i.e., while . . . Applying D to I 0 gives a sum of similar products where the j-th term (1 − β j D j ) is replaced by D j , which is equal to −I 1 . Similarly, applying this operator k times yields (−1) k I k k! times, namely once for each permutation of k indices. This proves the first part of the proposition. To prove the second part of the proposition, we first note that the invariants I i (as functions of x i ), as obtained from the trace of L, coincide with the invariants that Veselov and Shabat provided for the continuous dressing chain [30]. In that paper, using the Lenard-Magri scheme, they proved that the invariants I i are in involution with respect to the Poisson bracket {·, ·} b = {·, ·} q + {·, ·} c . Therefore, in order to prove that the invariants I i (considered as functions of f i ) are in involution with respect to {·, ·} q , it suffices to show that the map R n → R n , (f 1 , f 2 , . . . , f n ) → (x 1 , x 2 , . . . , x n ) is a Poisson map between (R n , {·, ·} q ) and (R n , {·, ·} b ). This is precisely item (1) of Lemma 6.2. Remark 6.5. 
The expression (6.5) for $I_0$ in terms of $f_i$, $g_i$, and $\beta_i$ is quite cumbersome (e.g., for $n = 3$). However, when imposing the relation $f_i g_i = \beta_i$, the expression simplifies drastically, and we have $I_0 = -(f_1 f_2 f_3 + g_1 g_2 g_3)$. The fact that a similar expression can be obtained for any $n$ can be seen from Remark 6.6. For even $n$ the map $M$ is anti-volume preserving, and for odd $n$ it is volume preserving. The map $N$ is measure preserving when $n$ is even and anti-measure preserving when $n$ is odd. The density of the measure is
Power Generation Performance Indicators of Wind Farms Including the Influence of Wind Energy Resource Differences

The accurate evaluation and fair comparison of wind farm power generation performance is of great significance to the technical transformation and the operation and maintenance management of wind farms. However, existing evaluation indicator systems suffer from problems such as confusion, coupling, and broadness, and the influence of wind energy resource differences cannot be effectively eliminated, which makes it difficult to achieve a fair comparison of power generation performance among different wind farms. Thus, an evaluation indicator system and a comprehensive evaluation method for wind farm power generation performance, including the influence of wind energy resource differences, are proposed in this paper to address the problems above. New concepts such as resource conditions, ideal performance, reachable performance, actual performance, and performance loss are introduced in the proposed indicator system; the combination of statistical and comparative indicators is adopted to realize the quantitative evaluation, indicator decoupling, fair comparison, and loss attribution of wind farm power generation performance. The proposed comprehensive evaluation method is based on an improved CRITIC (Criteria Importance Through Intercriteria Correlation) weighting method, which overcomes the unevenness of different evaluation indicators and realizes the comprehensive comparison of power generation performance among different wind farms. Several sets of data from Chinese wind farms in service are used to validate the effectiveness and applicability of the proposed method, taking the comprehensive evaluation models based on the CRITIC weighting method and the entropy weighting method as benchmarks. The results demonstrate that the proposed evaluation indicator system supports the quantitative evaluation and fair comparison of wind farm design, operation, and maintenance, and traces the source of power generation performance loss. In addition, the results of the proposed comprehensive evaluation model are more in line with the actual power generation performance of wind farms and can be applied to the comprehensive evaluation and comparison of the power generation performance of different wind farms.
Introduction Under the strategic goal of building a new power system with new energy as the main body, China's wind power industry will enter a new era of rapid development [1].The simultaneous development of incremental and stock wind power installation will be an important development feature in the future.The stock wind farms have the advantages of large installed scale and excellent wind energy resources.However, due to the lack of experience in early wind turbine manufacturing, wind turbine selection, and wind farm operation and maintenance management, some in-service wind farms do not fully exploit the advantages of wind energy resources with high quality and have a great space for performance improvement.Moreover, some wind turbines with a long service time have experienced the problems of declining equipment health and increased failure frequency, and it is urgent to improve power generation performance through technical transformation.Therefore, how to improve the quality and efficiency of huge stock wind power assets has become a new challenge to be solved by practitioners in the field of wind power.Wind farm power generation performance evaluation is used to quantitatively evaluate the actual power generation performance and its deviation from the ideal power generation performance of wind farms, tracing the source of power generation performance loss and determining the performance improvement space through technical transformation and operation and maintenance management, which can provide reliable data support for technical transformation and operation and maintenance management of wind farms.In addition, the comparison of the power generation performance among different wind farms can lead the production and operation activities of the wind power industry to the way of low cost and high efficiency and comprehensively improve the operation and management level of the wind power industry. Many scholars have conducted corresponding studies on wind farm power generation performance evaluation, which are mainly divided into two fields: indicators method optimization and indicators system establishment.For indicators method optimization, Guo et al. [2] proposed the concept of theoretical power generation completion rate and the corresponding calculation method for the evaluation of offshore wind farms performance.Dong et al. [3,4] constructed the efficiency cloud and performance cloud models of wind turbines and used cloud characteristic parameters to quantitatively evaluate the operation efficiency and performance of wind turbines.Aldersey et al. [5] used the capacity factor to measure the power generation performance of offshore wind farm and analyzed the inter-year and intra-year variability of capacity factor.Lo et al. [6] used capacity factor to measure the operation performance of wind farms and discussed the relationship between the operation performance of wind farms and key factors such as resource area, regional location, and scale.Niu et al. [7] made a comparative analysis of the correlation, consistency, and representation of three efficiency measurement indicators such as wind farm availability, power generation efficiency, and power coefficient from the aspects of probability distribution, pairwise difference, and linear correlation. 
The above improved indicators have different emphases and application scenarios, which can only reflect partial characteristics of wind farms and cannot comprehensively evaluate the performance of wind power generation.For indicators system establishment, wind farm operation indicator system stipulated in the industry standard "Guide for wind farm operation index evaluation" issued by the National Energy Administration of China involves four aspects: electricity indexes, equipment operation indexes, operation and maintenance indexes, and electric power consumption indexes [8].Yan et al. [9] constructed a wind turbine operation performance evaluation indicator system from three aspects of wind turbine power curve, availability, and reliability and constructed a wind farm operation performance evaluation indicator system from three aspects of wind farm optimization operation capacity, maintenance efficiency, and other evaluation indicators.Luo et al. [10] constructed a wind farm operation performance indicator system with inherent characteristics, network related characteristics, and operation characteristics as the core.Meng et al. [11] established a multi-level evaluation indicator system for wind farm operation from three aspects of wind energy resource, wind farm operation, and wind farm equipment operation.Pfaffel et al. [12] defined capacity factor, time-based availability, technical availability, energetic availability, failure rate, and mean down time to evaluate the performance and reliability of wind turbines.Zhang et al. [13] established four indicators: energy consumption intensity, energy payback time, energy payback ratio, and energy return intensity to measure the energy performance of inland, coastal, and offshore wind farms.Kulkarni et al. [14] proposed four kinds of evaluation indicators: technical performance, environmental performance, economic performance, and social performance to evaluate the performance of wind farms.In order to make the summary of the literature review clearer, the summary of the existing indicators is shown in Table 1. Table 1.The summary of the existing indicators. Article Category Indicators Guo et al. [2] Performance indicators Theoretical completion rate of power output Dong et al. [3,4] Performance and efficiency indicators Performance cloud parameters, efficiency cloud parameters Social acceptability, policy objectives, labor impact, political acceptance, compatibility with national energy In summary, many scholars have studied the improved indicators and indicator system related to wind farm power generation performance evaluation, but there are still some problems remaining unsolved: (1) The existing improved indicators have different emphases and application scenarios, which can only reflect partial characteristics of wind farms and fail in the comprehensive evaluation of power generation performance of wind farms.(2) The existing evaluation indicator systems have problems such as confusion, coupling and broadness, and the influence of wind energy resource differences could not be effectively eliminated, which makes it difficult to achieve the fair comparison of power generation performance among different wind farms.(3) The existing evaluation indicator systems cannot trace the causes of wind farm power generation performance loss, which is difficult to effectively guide the technical transformation and operation and maintenance management of wind farms. 
To solve the aforementioned problems, the evaluation indicator system and comprehensive evaluation method of wind farm power generation performance, including the influence of wind energy resource differences, are proposed in this paper.The major contributions of this paper are as follows: (1) The indicator system including new concepts such as resource conditions, ideal performance, reachable performance, actual performance, and performance loss is constructed, which are not cross coupled in functional characteristics and can maintain the independence, hence realize the quantitative evaluation of wind farm design, operation, and maintenance.(2) The comparative indicators based on the wind energy resource and design level of wind farms are proposed, which can effectively eliminate the influence of wind energy resource and design levels differences and achieve the fair comparison of power generation performance among different wind farms.(3) The refined performance loss indicators are proposed in a targeted manner to trace the source of power generation performance loss and guide the technical transformation and operation and maintenance management of wind farms.(4) The comprehensive evaluation method of wind farm power generation performance based on improved CRITIC weighting method is proposed to overcome the uneven situation of different evaluation indicators and realize the comprehensive comparison of power generation performance among different wind farms. The remainder of this paper is organized as follows.Section 2 elaborates wind farm power generation performance evaluation indicator system.Section 3 describes the comprehensive evaluation method of wind farm power generation performance based on improved CRITIC weighting method.Section 4 elaborates the case study.Section 5 concludes this paper. The Design Principle of Indicator System Wind farm power generation performance evaluation indicator system is the foundation for the quantitative evaluation and fair comparison of power generation performance among different wind farms.In order to objectively, truly, and effectively evaluate wind farm power generation performance, the constructed indicator system should meet the following basic design principles: (1) Purpose.The design purpose of indicator system is the prerequisite and foundation for the existence of indicator system.The construction of any indicator system must have a clear design purpose, and the selected evaluation indicators and the constructed indicator system must meet the purpose of wind farm power generation performance evaluation. (2) Science.The design of the indicator system must follow the actual situation of wind farms, which can not only objectively and effectively reflect the power generation performance level and development trend of wind farms but also conform to the scientific theory and objective facts that have been proved by practice. 
(3) Independence.The selected indicators should comprehensively reflect the power generation performance level of wind farms.The logical relationship among various indicators shall be ensured.Besides, they should be independent of each other to avoid cross coupling of information.(4) Operability.The definitions and calculation methods of indicators should be simple and easy to understand and quantify.The basic data used for indicators calculation should be easy to obtain in order to make a definite quantitative evaluation of wind farms power generation performance.(5) Comparability.The impact of external conditions shall be considered in the comparison of power generation performance among different wind farms due to the difference in wind energy resource, terrain, and wind turbine type of different wind farms.Therefore, the selected comparative indicators should be reasonable, fair, and comparable to facilitate the fair comparison of power generation performance among different wind farms. The Structure Design of Indicator System The whole life cycle of wind farms can be divided into four stages: planning and design, construction, operation and maintenance management, and decommissioning [15].The inherent characteristics of wind farms are determined by the planning and design stage, including the macro and micro site selection, wind turbine selection, and optimal layout, etc. Construction mainly includes wind farms road construction, wind turbines hoisting, substation installation, and collecting power lines laying, etc. Operation and maintenance management is the decisive factor affecting the evolution of actual power generation performance, operation life, and economic benefits of wind farms.Decommissioning refers to the dismantlement of wind turbines at one time after wind farms reach their service life, and the dismantlement of all wind farm facilities and the ecological restoration of the site, mainly including the dismantlement, transfer, and recycling of wind farms related equipment. 
For in-service wind farms, operation and maintenance management is the dominant factor to affecting wind farm power generation performance, which determines the degree to which the power generation performance is close to the optimal performance under the actual wind energy resource.Therefore, the operation and maintenance management stage is selected as the research stage of wind farm power generation performance evaluation in this paper where the actual power generation performance of wind farms and its deviation from the ideal performance are quantitatively evaluated and the source of power generation performance loss and potential for improvement are determined.Following the five design principles of indicator system and taking energy as the clue and adopting five categories of evaluation indicators, including resource conditions, ideal performance, reachable performance, actual performance, and performance loss, the evaluation indicator system of wind farm power generation performance based on statistical and comparative indicators is constructed, as shown in Figure 1.As shown in Figure 1, the constructed wind farm power generation performance evaluation indicator system is mainly divided into five categories of evaluation indicators: resource conditions, ideal performance, reachable performance, actual performance, and performance loss.The indicator of resource conditions is mainly used to measure the abundance of available wind energy resource in wind farms, which is the main source of energy conversion in wind farms.Ideal performance, reachable performance, and actual performance are mainly used to measure the power generation performance that all wind turbines of the wind farm can achieve under the design level, the current normal operation state, and the current actual operation state under the condition of actual wind energy resource.The three kinds of evaluation indicators are under the parallel relationship in functional characteristics and correspond to the power generation performance of wind farms equipment under different health conditions and operating conditions.The operation and maintenance ability of wind farms can be measured by the gap between the three categories of evaluation indicators.The gap between reachable performance and ideal performance mainly measures the maintenance ability of wind farms maintenance staffs, and the gap between actual performance and reachable performance mainly measures the operation and management ability of wind farms.The proposed indicators of performance loss are mainly used to refine the specific reasons for the gap between actual performance and reachable performance, which is complementary to the actual performance, and can be used to guide the technical transformation and operation and maintenance management of wind farms.It should be noted that the wind energy resources of different wind farms are different.It is necessary to eliminate the impact of wind energy resource difference to realize the fair comparison of power generation performance among different wind farms.Therefore, six comparative indicators are proposed to compare the power generation performance of different wind farms by the relative changes of statistical indicators. 
(Figure 1. Wind farm power generation performance evaluation indicator system: statistical and comparative indicators grouped into resource conditions, ideal performance, reachable performance, actual performance, and performance loss.)

To sum up, the constructed wind farm power generation performance evaluation indicator system has the following characteristics: (1) The categories of evaluation indicators are not cross-coupled in functional characteristics, and their independence can be maintained. (2) The proposed comparative indicators are based on the wind energy resource and design level of wind farms, which can eliminate the influence of wind energy resource and design-level differences and achieve a fair comparison of power generation performance among different wind farms. (3) The proposed indicators of performance loss can refine the specific reasons for performance loss and can effectively guide the technical transformation and operation and maintenance management of wind farms.

The Indicators of Resource Conditions

The indicators of resource conditions are mainly used to measure the abundance and exploitable utilization of the wind energy resource in the wind farm area. The available wind energy resource is the most fundamental factor affecting the power generation performance of wind farms. The MEWPD is used to measure the availability of the wind energy resource in wind farms. MEWPD refers to the power of the effective wind speed (between cut-in wind speed and cut-out wind speed, usually 3-25 m/s) passing through a unit area perpendicular to the wind direction within the statistical period. With the increase of rotor diameter and tower height, the spatial fluctuation of wind speed in the rotor sweeping plane becomes more and more obvious due to the influence of wind shear and the tower shadow effect [16]. Hence, the influence of wind shear and the tower shadow effect should be considered in the calculation of MEWPD; wind shear is the main cause of power loss, while the tower shadow effect is the main cause of power fluctuation, so the power loss caused by tower shadow can be ignored [17]. Therefore, the rotor equivalent wind speed considering the wind shear effect is used to replace the hub height wind speed to simplify the calculation. The specific calculation method is shown in Formulas (1)-(3), where W is the MEWPD of the wind farm within the statistical period; W_i is the MEWPD of the i-th wind turbine within the statistical period; N is the number of wind turbines; T is the number of wind speed data points within the statistical period (the statistical period refers to the evaluation period of wind farm power generation performance, which is usually a month or a year); ρ(t) is the air density at time t, which is calculated from the ambient temperature and atmospheric pressure [10]; the rotor equivalent wind speed and the hub height wind speed of the i-th wind turbine at time t also enter the formulas, where the nacelle transfer function is usually used to correct the nacelle wind speed to the hub height wind speed [18]; α(t) is the wind shear coefficient at time t; R_i and H_i are the rotor radius and hub height of the i-th wind turbine, respectively; and the cut-in wind speed and cut-out wind speed of the i-th wind turbine complete the list of symbols.
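Formulas (1)-(3) did not survive the extraction of this text. The Python sketch below shows one plausible reading of the verbal definition above: it assumes that MEWPD is the time average of the instantaneous wind power density 0.5*rho*v^3 computed with the rotor-equivalent wind speed and restricted to the effective wind-speed range, and that the farm-level value W is the mean of the per-turbine values W_i. Function and variable names are illustrative, not taken from the paper, and whether non-effective time steps enter the averaging denominator depends on the exact form of Formula (1).

```python
import numpy as np

def mewpd(rho, v_eq, v_cut_in=3.0, v_cut_out=25.0):
    """Mean effective wind power density (W/m^2) for one turbine.

    rho  : array of air density values (kg/m^3), one per time step
    v_eq : array of rotor-equivalent wind speeds (m/s), same length
    Only time steps with effective wind speed (cut-in..cut-out) contribute,
    following the verbal definition of MEWPD in the text.
    """
    rho, v_eq = np.asarray(rho, float), np.asarray(v_eq, float)
    effective = (v_eq >= v_cut_in) & (v_eq <= v_cut_out)
    if not effective.any():
        return 0.0
    # instantaneous wind power density 0.5 * rho * v^3, averaged over the period
    return float(np.mean(0.5 * rho[effective] * v_eq[effective] ** 3))

def farm_mewpd(per_turbine_rho, per_turbine_v_eq):
    """Average the per-turbine MEWPD over all turbines of the farm (assumed)."""
    values = [mewpd(r, v) for r, v in zip(per_turbine_rho, per_turbine_v_eq)]
    return float(np.mean(values))
```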
The Indicators of Ideal Performance The indicators of ideal performance are mainly used to measure the maximum power generation performance that all wind turbines of wind farm can achieve according to the design requirements under the actual wind energy resource conditions, which mainly reflects the design, manufacturing, and installation level of wind turbines and the adaptability of wind turbines selection.The designed energy production (DEP) and designed wind energy utilization coefficient (DWEUC) are used to measure the ideal performance of wind farms. (1) DEP The DEP refers to the power output of all wind turbines operating at the design level within the statistical period under the actual wind energy resources conditions, which mainly reflects the maximum power generation performance that wind farm can achieve under the actual wind energy resource conditions.The calculation method is shown in Formula (4). where 0 E is the DEP of wind farm within the statistical period; t  is the time resolu- tion of wind speed data, and the measurement unit needs to be converted to hour; is the modified dynamic power curve of wind turbine based on actual wind energy resource conditions and wind turbine manufacturer power curve [19,20], which mainly includes air density correction and turbulence intensity correction, the specific correction methods can be found in [9,21]. (2) DWEUC The DWEUC refers to the proportion of the DEP to the effective wind energy of wind farm within the statistical period, which mainly reflects the effective wind energy capture capacity of the wind farm and can indirectly measure the adaptability between wind turbines selection and actual wind resource conditions.The calculation method is shown in Formulas ( 5) and (6). where 0  is the DWEUC of wind farm within the statistical period; E is the effective wind energy of wind farm within the statistical period, that is, the sum of available wind energy in the sweeping plane of all wind turbines within the statistical period; i A is the rotor swept area of the i-th wind turbine. The Indicators of Reachable Performance The indicators of reachable performance are mainly used to measure the theoretical power generation performance of all wind turbines in the wind farm under actual wind energy resource conditions and current equipment health status.With the increase of wind turbines operation time, their health status and power generation performance will gradually deteriorate, and the power generation performance of wind turbines can achieve complete or partial recovery after overhaul and components replacement.The indicators of reachable performance mainly reflect the reachable power generation performance and maintenance level of wind farms.The reachable energy production (REP) and maintenance performance coefficient (MPC) are used to measure the reachable performance of wind farms. (1) REP The REP refers to the theoretical energy production that can be achieved by all wind turbines of the wind farm under the actual wind energy resource conditions and current equipment health status within the statistical period, which mainly reflect the theoretical reachable performance of the wind farm in actual operation.The calculation method is shown in Formula (7). where 1 E is the REP of wind farm within the statistical period;    1 f is the wind turbine power curve model based on nacelle wind speed and theoretical power [19]; is the nacelle wind speed of the i-th wind turbine at time t. 
(2) MPC The MPC refers to the proportion of the REP to the DEP of wind farm within the statistical period, which mainly measures the maintenance ability of wind farms maintenance staffs and can provide decision-making support for wind turbines renovation and maintenance plan.The calculation method is shown in Formula (8). where 1  is the MPC of wind farm within the statistical period. The Indicators of Actual Performance The indicators of actual performance are mainly used to measure the actual power generation performance of all wind turbines of wind farm, which mainly reflect the operation and management ability of wind farms.The actual energy production (AEP) and operation performance coefficient (OPC) are used to measure the actual performance of wind farms. (1) AEP The AEP refers to the sum of the actual power generation of all wind turbines of wind farm within the statistical period, which mainly reflect the actual power generation benefits of wind farm.The calculation method is shown in Formula (9). where 2 E is the AEP of wind farm within the statistical period;   t P i is the actual power of the i-th wind turbine at time t, which is generally obtained from wind turbine supervisory control and data acquisition (SCADA) system. (2) OPC The OPC refers to the proportion of the AEP to the REP of wind farm within the statistical period, which mainly measures the operation management ability of wind farms and the potential for power generation performance improvement by improving the operation management ability.This indicator measures the actual power generation level based on the wind energy resource and design level of wind farms, and it can eliminate the influence of wind energy resource and design levels differences.The calculation method is shown in Formula (10). is the OPC of wind farm within the statistical period. The Indicators of Performance Loss In order to realize the power loss attribution and effectively guide the technical transformation and operation and maintenance management of wind farms, the evaluation indicators of performance loss of wind farm power generation are proposed, which mainly include failure loss, regular inspection loss, power limitation loss, and other losses. (1) Failure loss The energy production loss due to failure (EPLF) refers to the energy production loss of wind farm due to the shutdown of the on-site equipment failure within the statistical period, mainly including wind turbines failure, substation equipment failure, and collecting power lines failure, etc.For wind turbine, only its own fault is considered.The failure loss coefficient (FLC) refers to the proportion of the EPLF to the REP of the wind farm within the statistical period, which mainly reflects the fault repair ability and spare parts management ability of wind farms.The calculation method of FLC is shown in Formula (11). where f  is the FLC of wind farm within the statistical period.f E is the EPLF of wind farm within the statistical period, which is equal to the sum of the REP in all fault shutdown periods within the statistical period.The start and end time of fault shutdown of each wind turbine can be determined according to the fault statistics reports of wind farms production and operation management system. 
(2) Regular inspection loss The energy production loss due to regular inspection (EPLRI) refers to the energy production loss of wind farm due to the shutdown of the on-site equipment regular inspection within the statistical period, mainly including wind turbines regular inspection, substation equipment regular inspection, and collecting power lines regular inspection, etc.For wind turbine, only its own regular inspection loss is considered.The regular inspection loss coefficient (RILC) refers to the proportion of the EPLRI to the REP of wind farm within the statistical period, which mainly reflects the rationality and execution efficiency of regular inspection arrangement of wind farms.The calculation method of RILC is shown in Formula ( 12). where i  is the RILC of wind farm within the statistical period.i E is the EPLRI of wind farm within the statistical period, which is equal to the sum of the REP in all regular inspection shutdown periods within the statistical period.The start and end time of regular inspection shutdown of each wind turbine can be determined according to the fault statistics reports of wind farms production and operation management system. (3) Power limitation loss The energy production loss due to power limitation (EPLPL) refers to the energy production loss of wind farm due to the dispatching power limitation of power grid within the statistical period.The power limitation loss coefficient (PLLC) refers to the proportion of the EPLPI to the AEP of wind farm within the statistical period, which mainly reflects wind power consumption capacity of the grid dispatching section where the wind farm is located.The calculation method of PLLC is shown in Formula (13). where l  is the PLLC of wind farm within the statistical period.l E is the EPLPL of wind farm within the statistical period, which is equal to the sum of the difference between the REP and the AEP in all power limit periods within the statistical period.In general, each wind farm will send corresponding signals by wind farm automatic generation control (AGC) system during power limiting operation, including specific power limiting time, duration, and power.The power limiting data can be marked with the record of power limiting instructions to directly identify the power limiting data [22].For wind farms with missing power limiting records, they need to be identified according to the characteristics of operation data after power limiting as an auxiliary method, such as clustering analysis [20,23] and time series analysis [22,24]. (4) Other losses The energy production loss due to other factors (EPLOF) refers to the energy production loss caused by other off-site factors within the statistical period, such as off-site transmission lines fault, power system fault, bad weather (thunderbolt, freezing, typhoon), and other complex conditions.The other losses coefficient (OLC) refers to the proportion of the EPLOF to the AEP of wind farm within the statistical period.The calculation methods of EPLOF and OLC are shown in Formulas ( 14) and (15). where other E and other  are the EPLOF and OLC of the wind farm within the statistical period respectively. 
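Taken together, the comparative indicators defined above are simple ratios of the energy quantities DEP, REP, AEP, the loss terms, and the effective wind energy. The following Python sketch collects these ratios exactly as given by the verbal definitions (DWEUC = DEP/E, MPC = REP/DEP, OPC = AEP/REP, FLC = EPLF/REP, RILC = EPLRI/REP, PLLC = EPLPL/AEP, OLC = EPLOF/AEP); the data structure and names are illustrative, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class FarmEnergy:
    """Energy quantities of one wind farm over the statistical period (MWh),
    plus the effective wind energy crossing the rotor planes."""
    E_eff: float   # effective wind energy of the farm
    DEP: float     # designed energy production (E0)
    REP: float     # reachable energy production (E1)
    AEP: float     # actual energy production (E2)
    EPLF: float    # energy production loss due to failure
    EPLRI: float   # energy production loss due to regular inspection
    EPLPL: float   # energy production loss due to power limitation
    EPLOF: float   # energy production loss due to other factors

def comparative_indicators(f: FarmEnergy) -> dict:
    """Comparative indicators as the ratios defined verbally in the text."""
    return {
        "DWEUC": f.DEP / f.E_eff,   # designed wind energy utilization coefficient
        "MPC":   f.REP / f.DEP,     # maintenance performance coefficient
        "OPC":   f.AEP / f.REP,     # operation performance coefficient
        "FLC":   f.EPLF / f.REP,    # failure loss coefficient
        "RILC":  f.EPLRI / f.REP,   # regular inspection loss coefficient
        "PLLC":  f.EPLPL / f.AEP,   # power limitation loss coefficient
        "OLC":   f.EPLOF / f.AEP,   # other losses coefficient
    }
```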
Comprehensive Evaluation Method of Wind Farm Power Generation Performance Based on Improved CRITIC Weighting Method Seven comparative indicators are proposed to compare the power generation performance among different wind farms.However, it is likely that different evaluation indicators are uneven in the practical evaluation process, and it is impossible to quantitatively compare the comprehensive power generation performance levels among different wind farms.Therefore, a comprehensive evaluation method of wind farm power generation performance based on the improved CRITIC weighting method is proposed.It is worth noting that there is a complementary relationship between the OPC and the four performance loss coefficients.In order to eliminate information redundancy, only six comparative indicators except the OPC are selected to evaluate wind farm power generation performance. Improved CRITIC Weighting Method The weight determination methods are mainly divided into subjective and objective weighting method and it is difficult to subjectively determine the importance of the six comparative indicators in the comprehensive evaluation process of power generation performance.Therefore, the objective weighting method is adopted to determine the weight of different comparative indicators.The objective weighting approaches mainly include the entropy weight method [25], principal component analysis method [26], and CRITIC weighting method [27], etc.The entropy weight method mainly takes the discreteness of indicators in data samples to determine the importance of the indicators, ignoring the correlation between different indicators.The principal component analysis method adopts dimension reduction to maintain the independence between variables, but it cannot measure the discreteness of indicators in data samples.However, the CRITIC weighting method takes both the discreteness of indicators and the conflict among different indicators into consideration, and it determines the weight of indicators based on the contrast strength and the conflict among indexes.The strength of contrast is reflected by standard deviation where the larger the standard deviation is, the more information the indicator reflects and the greater the weight.The conflict is reflected by the correlation between indicators where the larger the correlation coefficient, the lower the mutual exclusivity and the lower the weight of the indicator. 
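As a reference for the constant-weight step described in the paragraph above (the variable-weight improvement is introduced in the next subsection), a minimal Python sketch of the standard CRITIC computation follows. It assumes a normalized decision matrix with wind farms as rows and indicators as columns; the exact normalization used in Formulas (19)-(23) of the paper may differ.

```python
import numpy as np

def critic_weights(X):
    """Standard CRITIC weights for a normalized decision matrix X
    (rows: wind farms, columns: evaluation indicators).

    Contrast strength: standard deviation of each indicator column.
    Conflict: sum over the other indicators of (1 - correlation coefficient).
    """
    X = np.asarray(X, dtype=float)
    sigma = X.std(axis=0, ddof=1)        # contrast strength of each indicator
    r = np.corrcoef(X, rowvar=False)     # correlation between indicators
    conflict = (1.0 - r).sum(axis=0)     # conflict carried by each indicator
    c = sigma * conflict                  # information content of each indicator
    return c / c.sum()                    # normalized constant weights
```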
The CRITIC weighting method is adopted to calculate the weight of the evaluation indicators after being compared with other methods above in the aspect of applicability and shortcoming.However, the weight determined only by the CRITIC weighting method belongs to the static constant weight evaluation matrix, ignoring the influence of the internal differences of each indicator on the evaluation object, and the phenomenon of state imbalance is prone to occur.When a certain indicator turns into poor or extremely poor, the weight of the indicator is too weak to be reasonably presented, thus the evaluation results are inconsistent with the practical situation and the comprehensive power generation performance level of the wind farm cannot be evaluated effectively.The variable weight theory can maintain the balance of each indicator in the comprehensive evaluation by revising the constant weight on the basis of the deterioration degree of the indicators, which avoids the problems of one-sided and unreasonable evaluation results caused by the constant weight [28].Therefore, the variable weight theory is applied to improve the CRITIC weighting approach.The specific calculation steps are as follows. (1) Establishment of the initial evaluation matrix Assuming that the numbers of wind farms and indicator to be evaluated are m and n, respectively, the j-th index of the i-th wind farm to be evaluated shall be described as ij x , and the initial evaluation matrix is shown in Formula (16). (2) Normalization of the initial evaluation matrix by relative deterioration degree The initial evaluation matrix is normalized by relative deterioration degree to eliminate the influence of indicators dimension.The relative deterioration degree of an evaluation indicator of the power generation performance reflects the degree of deviation from its optimal state whose range is [0, 1].The larger the indicator value is, the smaller the relative deterioration degree value is. The calculation method of the relative deterioration degree is shown in Formula (17) for the high-superior evaluation indicators, such as the DWEUC and MPC. where x is the relative deterioration degree value of the i-th wind farm and the j-th indicator to be evaluated.The range of the corresponding indicator is [α, β], which is commonly taken as [0, 1] except for the DWEUC, where the range is [0, 0.593]. For the low-superior evaluation indicators, such as the FLC, RILC, PLLC, and OLC, the calculation method of the relative deterioration degree is shown in Formula (18). (3) Calculation of the contrast strength of each evaluation indicator The standard deviation of each evaluation indicator is applied to reflect the contrast strength in the CRITIC weighting method which can be illustrated by Formula (19).The conflict is reflected in CRITIC weighting method by the correlation between the evaluation indicators.The correlation coefficient among the n evaluation indexes shall be calculated firstly to obtain the value of the quantification conflict index.The correlation coefficient between the p-th and the j-th evaluation indicator is shown in Formula (20).The variable weight theory is adopted to revise the constant weight values, and the calculation method is shown in Formula (24). 
where ij w is the variable weight value of the i-th wind farm and the j-th indicator to be evaluated; T is the variable weight coefficient, which is taken as 0.5 in this paper; the constant weight value of the k-th evaluation indicator; x is the relative deteriora- tion degree value of the i-th wind farm and k-th indicator to be evaluated. Modeling Process The modeling process of the comprehensive evaluation model of wind farm power generation performance based on the improved CRITIC weighting approach is shown in Figure 2, and the specific steps are as follows. (1) Collecting the operating data of the wind farm to be evaluated, which mainly includes wind farm operation management system data, wind farm AGC system data (if any), wind turbine SCADA system data, anemometer tower data, wind turbine nacelle transfer function or lidar wind data, wind turbine design parameters (such as designed wind turbine power curve, hub height, rotor radius, etc.).(2) Calculating the power generation performance evaluation indicators matrix of the wind farm, which mainly includes six comparative indicators such as the DWEUC, MPC, FLC, RILC, PLLC, and OLC.(3) Determining the comprehensive fuzzy evaluation level of the wind farm power generation performance, which is mainly divided as . Among which, the larger the value n is, the more detailed the evaluation object is, and the more it can reflect the fuzziness and gradual change of the fuzzy evaluation.However, the greater the complexity and calculation of the model, it is necessary to consider the accuracy of the evaluation results and the complexity of the evaluation process.According to the practical needs of wind farm power generation performance evaluation, four evaluation levels are selected in this paper, namely (4) Calculating the relative degradation degree matrix g of the wind farm power gen- eration performance evaluation indicators.Each evaluation indicator is normalized to obtain the deterioration degree matrix based on the concept of relative deterioration degree and the differences of indexes.The calculation steps are the same as those of the improved CRITIC weighting method in step (2) and the relative deterioration degree matrix g is shown in Formula (25). where x is the relative deterioration degree value of the m-th wind farm and the n-th indicator to be evaluated. (5) Determining the membership function.The triangular and semi-trapezoidal functions are adopted as the membership functions [29] in this paper, and the fuzzy decomposition interval of membership function is determined according to relevant criteria and expertise.The membership degree matrix corresponding to each evaluation level is obtained according to the relative deterioration degree of each indicator, and the membership matrix of the i-th wind farm to be evaluated is shown in Formula (26). where k i n v , is the membership degree of the i-th wind farm and the n-th indicator be- longs to the rank k L , and k was taken as 1, 2, 3, 4 in this paper.(7) Calculating the fuzzy comprehensive evaluation grade matrix of the wind farm power generation performance and the fuzzy comprehensive evaluation grade matrix of the i-th wind farm to be evaluated is shown in Formula (28). 
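Formulas (17), (18), and (24) are not reproduced in the extracted text. The sketch below assumes the simplest linear reading of the relative deterioration degree described in step (2): on a range [α, β], a high-superior indicator deteriorates as its value decreases, and a low-superior indicator deteriorates as its value increases. The variable-weight revision of Formula (24) is not sketched because its balance function cannot be recovered from the text.

```python
import numpy as np

def deterioration_degree(x, low, high, high_is_better):
    """Relative deterioration degree in [0, 1] for one indicator value x
    with admissible range [low, high] (alpha and beta in the text).

    Assumes the linear mapping implied by the text: for high-superior
    indicators (e.g. DWEUC, MPC) a larger value means a smaller
    deterioration degree; for low-superior indicators (the loss
    coefficients) a larger value means a larger deterioration degree.
    """
    x = np.clip(x, low, high)
    if high_is_better:
        return (high - x) / (high - low)   # assumed linear form of Formula (17)
    return (x - low) / (high - low)        # assumed linear form of Formula (18)
```

For example, under this assumption a DWEUC of 0.45 on its stated range [0, 0.593] maps to a relative deterioration degree of roughly 0.24.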
Data

The effectiveness and applicability of the proposed approach are verified using measured data from 14 wind turbines of a wind farm in Northwest China and from six wind farms in Northeast China. The data mainly include the fault statistics reports of the wind farm production and operation management systems, wind turbine SCADA system data, anemometer tower data, lidar wind data, and wind turbine design parameters (such as the designed wind turbine power curve, hub height, rotor radius, etc.). The time span of the wind turbine SCADA system data, anemometer tower data, and lidar wind data is 1 year, with a time resolution of 10 min. Based on the nacelle wind speed and the hub-height wind speed measured by lidar on some wind turbines, nacelle transfer function models are constructed and applied to the other wind turbines of the same model.

Wind Turbine Power Generation Performance Evaluation Results

The power generation performance of different wind turbines is quantitatively evaluated based on the measured data from the 14 wind turbines of the wind farm in Northwest China. The calculation results of the MEWPD of the wind turbines are shown in Figure 3. As shown in Figure 3, the MEWPD of the 14 wind turbines is 75~150 W/m2, which corresponds to the third category of wind resource area. The wind farm terrain is complex, and the wind resources of different wind turbines differ significantly. Among them, WT #7 has the highest MEWPD (143.88 W/m2) and WT #10 the lowest (87.63 W/m2), a difference of 56.25 W/m2. Therefore, the influence of wind energy resource differences should not be ignored when evaluating the power generation performance of different wind turbines.

Figure 4 shows the calculation results of the DEP, REP, and AEP of the wind turbines. As shown in Figure 4, different wind turbines have different DEP, REP, and AEP. The DEP of WT #7 is the highest (3108 MWh) and that of WT #10 the lowest (2150 MWh), a difference of 958 MWh, which is related to the wind energy resource conditions at each turbine location. The gap between the REP and DEP is mainly related to the maintenance level of the wind turbines; it gradually increases as health status and power generation performance deteriorate, and it can be narrowed by improving the maintenance level. The gap between the AEP and REP is mainly related to the operation and management ability of the wind farm, which requires further refining the causes of power generation loss and tapping the power generation potential of the wind turbines through effective operation and management measures.
The energy production losses of the wind turbines are mainly divided into the EPLF, EPLRI, EPLPL, and EPLOF; the calculation results are shown in Table 2. It can be seen from Table 2 that there are considerable differences in the EPLF of different wind turbines: the EPLF of WT #14 is the highest (102.95 MWh), indicating that the reliability of WT #14 is relatively low, while the EPLF of WT #1 is the lowest (59.66 MWh), indicating that the reliability of WT #1 is relatively high. It is worth noting that WT #7 has the best wind energy resource conditions but not the highest AEP; its power generation loss is mainly caused by fault shutdowns, so its good wind resource conditions are not well utilized. Therefore, troubleshooting must be strengthened to improve the reliability of WT #7. There is little difference in the EPLRI of different wind turbines, which remains near 7 MWh; its small proportion of the total loss indicates that the regular inspection plan of the wind farm is relatively reasonable. There are some differences in the EPLPL of different wind turbines, which relate to the allocation of active power among wind turbines under power limiting to reduce fatigue loads. Because of the differences in wind energy resources at different wind turbine locations, it is unfair to compare turbines using the statistical evaluation indicators. Therefore, the comparative indicators are adopted to compare the power generation performance of different wind turbines, and the calculation results are shown in Table 3. It can be seen from Table 3 that the DWEUC of the wind turbines is basically above 0.45, indicating that the wind turbine selection is relatively reasonable and can take full advantage of the wind energy resources. It should be noted that the DWEUC of some wind turbines may be higher than the maximum design utilization coefficient; the main reason is the measurement error of the nacelle anemometer, which requires the wind farm maintenance staff to check the installation and measurement accuracy of the wind measuring devices. Except for WT #7, the MPC of the wind turbines is above 0.9, hence the maintenance level of WT #7 should be further improved. The OPC of the wind turbines is above 0.92, indicating that all wind turbines have good operating performance and relatively low power generation losses. Based on the six comparative indicators DWEUC, MPC, FLC, RILC, PLLC, and OLC, the comprehensive evaluation method based on the improved CRITIC weighting approach is adopted to comprehensively evaluate the power generation performance of the different wind turbines. The calculation results, compared with the comprehensive evaluation methods based on the CRITIC weighting method and the entropy weighting method, are shown in Tables 4-6.
As shown in Tables 4 and 5, there is no obvious difference between the results of the comprehensive evaluation methods based on the entropy weighting method and the CRITIC weighting method, possibly because there is little correlation difference between the indicators. The only difference is the evaluation result of WT #13, which is excellent and medium, respectively. From the analysis in Table 3, it can be seen that WT #13 has the second largest FLC, so the excellent result of the comprehensive evaluation method based on the entropy weighting method is clearly not in line with its actual power generation performance. As shown in Tables 5 and 6, the proportions of excellent, good, medium, and poor in the comprehensive evaluation results based on the CRITIC weighting method are 50%, 29%, 14%, and 7%, respectively, while the proportions in the results based on the improved CRITIC weighting method are 21%, 14%, 36%, and 29%, respectively, which is a considerable difference. Taking WT #8, WT #10, and WT #11, whose evaluation results differ most, as examples: as shown in Table 3, the OLC of WT #8, the PLLC of WT #10, and the FLC of WT #11 are the highest, indicating that the power generation losses of these three wind turbines are relatively large and their power generation performance is relatively low. However, the evaluation results based on the CRITIC weighting method are excellent, medium, and good, respectively, which is obviously inconsistent with the actual conditions. The evaluation results based on the improved CRITIC weighting method are poor for all three, which is more aligned with the actual power generation performance level of these wind turbines and can effectively measure the comprehensive power generation performance of different wind turbines.

Wind Farm Power Generation Performance Evaluation Results

In order to further verify the effectiveness and applicability of the proposed approach, the measured data from six wind farms in Northeast China are adopted to quantitatively evaluate the power generation performance of different wind farms. The calculation results of the MEWPD of the wind farms are shown in Figure 5. As shown in Figure 5, the MEWPD of the six wind farms is higher than 300 W/m2, which corresponds to the first category of wind resource area, so the wind energy resource is excellent. Among them, the MEWPD of WF #1 is the highest (554 W/m2) and that of WF #2 the lowest (321 W/m2), a difference of 233 W/m2. The specific reason for this discrepancy may be a difference in the wind energy resource endowments of the wind farms themselves or the influence of the wake effect of a large wind power base, which needs to be further verified. Therefore, the difference in wind energy resources should be eliminated to achieve a fair comparison of the power generation performance of different wind farms, so as to prevent some wind farms from appearing superior merely because of their inherent wind resource advantages.
Figure 6 shows the calculation results of the DEP, REP, and AEP of the wind farms. As shown in Figure 6, there is a great difference between the REP and DEP of WF #2, WF #5, and WF #6, indicating that the maintenance ability of these three wind farms is relatively poor and that their health status and power generation performance have degraded considerably. In order to improve the overall power generation performance of a wind farm, it is necessary to carry out the required technical transformation of wind turbines with low power generation performance while improving the skills of the wind farm maintenance staff. There is a large difference between the AEP and REP of WF #1, WF #4, and WF #6, indicating that the operation and management ability of these three wind farms is relatively poor. These wind farms must therefore explore the causes of power generation loss in depth and improve their operation and management, so as to maximize their power generation potential. The calculation results of the energy production losses of the wind farms are shown in Table 7. The EPLOF is the largest loss for every wind farm except WF #6, and it is necessary to further trace the root causes of the other energy production losses of the individual wind turbines with the help of more specific data and more advanced big data mining technology. It is worth noting that the EPLOF of WF #6 is relatively small, but its EPLF, EPLRI, and EPLPL are the highest, indicating that its fault repair ability and timeliness, the rationality of its regular inspection plan, and the consumption capacity of the power grid are all undesirable. Therefore, it is necessary to optimize the regular inspection plan, improve the skills of the operation and maintenance staff, and explore wind power consumption schemes, so as to reduce the power generation loss of WF #6 as much as possible. The comparative indicators are used to evaluate the power generation performance of the different wind farms. The comparative indicators are based on the wind energy resource and design level of the wind farms themselves, which eliminates the influence of differences in wind energy resources and design levels; the calculation results are shown in Table 8. The DWEUC of the different wind farms is between 0.23 and 0.33, which is relatively low compared with Table 3. The main reason is that the six wind farms are located in the first category of wind resource area with excellent wind resources, but the wind turbines are all early small-capacity units, which cannot make full use of the excellent wind energy resources. Therefore, the power generation performance of these wind farms could be improved by lengthening blades, applying power-boosting upgrades, or replacing small-capacity units with large-capacity ones, to avoid wasting the excellent wind energy resource. The MPC of WF #2 and WF #6 is low (below 0.85), so the maintenance level of these wind farms must be improved. The OPC of WF #4 and WF #6 is low (below 0.86), so the operation and management ability of these wind farms needs to be improved to reduce energy production losses. Significantly, the OLC of WF #4 is 14.03%, which is the most important factor causing its energy production losses, and more specific data are needed to locate the root causes. Based on the above six comparative indicators, the comprehensive evaluation method based on the improved CRITIC weighting method is adopted to comprehensively evaluate the power generation performance of the different wind farms. The calculation results, compared with the comprehensive evaluation methods
based on the CRITIC weighting method and the entropy weighting method, are shown in Tables 9-11. As shown in Tables 9 and 10, the results of the comprehensive evaluation models based on the CRITIC weighting method and the entropy weighting method are the same, and the possible reasons have already been explained in the evaluation of wind turbine power generation performance. As shown in Tables 10 and 11, the evaluation results of the two methods have both similarities and differences; the evaluation results of WF #1, WF #2, and WF #4 differ between them. The DWEUC of WF #1 is the lowest, the MPC of WF #2 is the second lowest, and the OLC of WF #4 is the highest, indicating that the effective wind energy capture capacity of WF #1 is relatively low, the maintenance ability of WF #2 is relatively low, and the energy production losses of WF #4 are relatively high. However, the evaluation results based on the CRITIC weighting method are good, excellent, and good, respectively, while the evaluation results based on the improved CRITIC weighting method are poor, medium, and poor, respectively. The evaluation results of the proposed model are more aligned with the actual power generation performance level of the wind farms, which further verifies the effectiveness and applicability of the proposed approach.

Conclusions

An evaluation indicator system and a comprehensive evaluation method for wind farm power generation performance, accounting for the influence of wind energy resource differences, are proposed in this paper. Firstly, taking energy as the main thread and covering five categories of evaluation aspects, namely resource conditions, ideal performance, achievable performance, actual performance, and performance loss, the evaluation indicator system of wind farm power generation performance based on statistical and comparative indicators is constructed. Then, aiming at the comprehensive comparison of power generation performance among different wind farms, the comprehensive evaluation method of wind farm power generation performance based on the improved CRITIC weighting method is proposed. Finally, actual data from Chinese wind farms are used to validate the effectiveness and applicability of the proposed method, taking the comprehensive evaluation models based on the CRITIC weighting method and the entropy weighting method as benchmarks. The conclusions are as follows:

(1) The proposed statistical indicators enable the quantitative evaluation of different aspects, such as resource conditions, ideal performance, achievable performance, actual performance, and performance loss of wind farms, allow the root causes of power generation performance loss to be traced, and can effectively guide the technical transformation and the operation and maintenance management of wind farms.

(2) The proposed comparative indicators are based on the wind energy resource and design level of the wind farms themselves, which effectively eliminates the influence of differences in wind energy resources and design levels and achieves a fair comparison of power generation performance among different wind farms.

(3) The proposed comprehensive evaluation model based on the improved CRITIC weighting method avoids the one-sided and unreasonable evaluation results produced by the models based on the CRITIC weighting method and the entropy weighting method, and its results are more aligned with the actual power generation performance of the wind farms, which enables an effective comprehensive evaluation and fair comparison of the power generation performance of different wind farms.
There are several possible directions in which to extend the present work. An evaluation indicator system for wind farm operation performance covering aspects such as power generation performance, energy consumption level, grid connection characteristics, and power market transactions can be studied further. In addition, wind farm operation performance improvement schemes can be studied to form a benign closed-loop management mode that maximizes the power generation potential and economic benefits of wind farms.

Figure 1. Wind farm power generation performance evaluation indicator system.
Figure 2. Modeling process of the wind farm power generation performance comprehensive evaluation model.
Figure 3. The calculation results of the MEWPD of wind turbines.
Figure 5. The calculation results of the MEWPD of wind farms.
Figure 6. The calculation results of the DEP/REP/AEP of wind farms.
Table 3. The calculation results of the comparative indicators of wind turbines.
Table 4. The comprehensive evaluation results of wind turbine power generation performance (entropy weighting method).
Table 5. The comprehensive evaluation results of wind turbine power generation performance (CRITIC weighting method).
Table 6. The comprehensive evaluation results of wind turbine power generation performance (improved CRITIC weighting method).
Table 7. The calculation results of the energy production losses of wind farms.
Table 8. The calculation results of the comparative indicators of wind farms.
Table 9. The comprehensive evaluation results of wind farm power generation performance (entropy weighting method).
Table 10. The comprehensive evaluation results of wind farm power generation performance (CRITIC weighting method).
Table 11. The comprehensive evaluation results of wind farm power generation performance (improved CRITIC weighting method).
12,341
2022-02-28T00:00:00.000
[ "Engineering" ]
Examining the Interaction between Exercise, Gut Microbiota, and Neurodegeneration: Future Research Directions

Physical activity has been demonstrated to have a significant impact on gut microbial diversity and function. Emerging research has revealed certain aspects of the complex interactions between the gut, exercise, microbiota, and neurodegenerative diseases, suggesting that changes in gut microbial diversity and metabolic function may have an impact on the onset and progression of neurological conditions. This study aimed to review the current literature from several databases until 1 June 2023 (PubMed/MEDLINE, Web of Science, and Google Scholar) on the interplay between the gut, physical exercise, microbiota, and neurodegeneration. We summarized the roles of exercise and gut microbiota on neurodegeneration and identified the ways in which these are all connected. The gut–brain axis is a complex and multifaceted network that has gained considerable attention in recent years. Research indicates that gut microbiota plays vital roles in metabolic shifts during physiological or pathophysiological conditions in neurodegenerative diseases; therefore, they are closely related to maintaining overall health and well-being. Similarly, exercise has shown positive effects on brain health and cognitive function, which may reduce/delay the onset of severe neurological disorders. Exercise has been associated with various neurochemical changes, including alterations in cortisol levels, increased production of endorphins, endocannabinoids like anandamide, as well as higher levels of serotonin and dopamine. These changes have been linked to mood improvements, enhanced sleep quality, better motor control, and cognitive enhancements resulting from exercise-induced effects. However, further clinical research is necessary to evaluate changes in bacterial taxa along with age- and sex-based differences.

Introduction

Exercise has long been recognized as an important strategy for maintaining overall health and improving well-being. In recent years, scientists have begun to understand the complex relationship between exercise and microbiota [1,2]. The gut microbiota, popularly known as gut flora, refer to the trillions of microorganisms that reside in the gastrointestinal tract, especially bacteria, which are the most abundant and most studied. These microorganisms play a critical role in maintaining optimal function of the gut as well as overall body health.
Experimental evidence has shown that the gut microbiota can be modified by a variety of factors, including diet, physiological stress, and antibiotic use [3,4]. Importantly, physical exercise has been recognized as an important modulator of the gut microbiota. Indeed, studies have shown that regular exercise is associated with a more diverse and stable gut microbiota, which is associated with better gut health [5][6][7]. For example, cardiovascular exercise (e.g., running or cycling) has been found to increase the abundance of certain bacterial species, such as Akkermansia muciniphila, Faecalibacterium prausnitzii, Prevotella, Methanobrevibacter, and Veillonella atypica [8,9], as part of the exercise-induced physiological adaptation processes. Besides increasing the abundance of beneficial bacteria, exercise has also been found to have anti-inflammatory effects on the gut by decreasing the levels of pro-inflammatory cytokines in the body while promoting immunosurveillance [10]. It seems that exercise has a positive effect on gut permeability, avoiding the "leaky gut" [11]. A leaky gut is characterized by a porous gut lining, which allows harmful substances and bacteria to leak into the bloodstream, leading to inflammation [12]. Regular exercise has been found to help strengthen the gut barrier, reducing the risk of developing a leaky gut [13]. It is worth noting that inflammation in the gut has been linked to a variety of conditions, including irritable bowel syndrome, inflammatory bowel disease, and even mental health disorders [14].

The relationship between exercise and gut health is complex, and more research is needed to fully understand the effects and mechanisms involved. Some studies suggest that high-intensity exercise may have a negative impact on the gut, while others have shown no significant difference between high- and low-intensity exercise interventions [15]. Certainly, the magnitude of the exercise-induced stress is key to evaluating its effects on human physiology [16], including changes in gut microbiota. For instance, it has been discussed that excessive exercise and inadequate recovery not only strongly affect the gastrointestinal system negatively [17] but also impair gut microbiota composition and function [18]. This negative effect normally leads to a dysbiosis that may contribute, at least in part, to worsened immune responses that are seen during overtraining [18]. Moreover, it is also important to consider other factors such as diet (e.g., fluid restrictions), sleep, environmental conditions (e.g., altitude, temperature), trainability, age, and stress levels, as they also impact gut health [19]. Thus, psychological stress and exercise-induced stress (i.e., intensity and/or duration of the exercise stimuli) affect the microbiota [20].
Scientific evidence has highlighted the intricate interactions between gut health, gut microbiota, and neurodegenerative diseases, suggesting that changes in gut microbial diversity and function might have an important role in the onset and progression of these neurological conditions [21,22]. Additionally, recent studies have revealed a dynamic interplay between gut microbiota, neurodegeneration, and the role of physical activity [23,24]. Regular physical exercise has been shown to have a positive effect on gut health, specifically on gut microbiota, by increasing the abundance of beneficial bacteria, reducing gut inflammation, and improving gut barrier function. However, the relationship between these factors is complex and multifactorial; therefore, it is not fully understood. How do these interactions vary due to different factors such as population, type of exercise, and others? What are the specific mechanisms by which gut microbiota, neurodegeneration, and physical activity are linked? This article aims to review the current literature on the interplay between exercise, gut microbiota, and neurodegeneration. We will emphasize the convergence of the physiological pathways by which physical exercise impacts the gut microbiome and the brain.

Methods

This study follows previous guidelines on the development of a narrative review outlined by Dixon-Woods et al. [25] and Popay et al. [26]. It encompasses the identification, selection, evaluation, and synthesis of the published articles. The first author organized and recruited experts in different areas relevant to the aim of the narrative review. The authors collaborated remotely to establish the goals and objectives of the review through a series of online meetings and email correspondence. Each author then contributed a section that aligned with their individual expertise (e.g., nutrition, sport science, aging), resulting in the creation of a first draft of the manuscript. This draft was subsequently reviewed and discussed among all authors following a previously established methodology [27], with several rounds of revisions and refinements made before the final approval. All communications and coordination throughout the process were completed electronically and were led by the first author.

Eligibility Criteria

All relevant types of articles were considered, including meta-analyses, systematic reviews, randomized controlled trials (RCTs), exploratory studies, confirmatory studies, and case reports. Preference was given to high-quality research, such as meta-analyses and RCTs. There were no date restrictions.

Information Sources

The primary sources for the articles included the following online databases: PubMed/MEDLINE, Web of Science, and Google Scholar. The studies were published between 2013 and 2023.

Search Strategy

The search string included free terms such as "neuromodulation", "gut health", "exercise", "neurodegeneration", "neurodegenerative diseases", and "microbiota". Each term was combined with keywords such as long-term, chronic, acute, psychiatry, pathophysiology, injury, illness, and disease. The reference lists of the selected articles were also manually searched for additional literature (snowballing).
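For illustration only, one way such a combined query could be assembled is sketched below; the grouping and boolean operators are an assumed reading of the strategy, not the exact strings submitted to the databases.

```python
# Hypothetical reconstruction of one combined search string (illustrative only;
# the exact queries used for PubMed/MEDLINE, Web of Science, and Google Scholar
# are not reported verbatim in the text).
free_terms = ["neuromodulation", "gut health", "exercise",
              "neurodegeneration", "neurodegenerative diseases", "microbiota"]
keywords = ["long-term", "chronic", "acute", "psychiatry",
            "pathophysiology", "injury", "illness", "disease"]

query = "({}) AND ({})".format(
    " OR ".join(f'"{t}"' for t in free_terms),
    " OR ".join(f'"{k}"' for k in keywords),
)
print(query)
```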
Findings Presentation

The narrative discussion by each author was aligned with the author's individual expertise (e.g., nutrition, sport science, aging) and interpretation of the relevant articles. The text provides details on the nature of each study, organized by sections including: (i) physical exercise, microbiota, and health; (ii) the neuromodulatory effects of physical exercise; (iii) exercise and neurodegeneration; and (iv) a discussion of potential convergent hallmarks in the complex interaction between physical exercise, microbiota, and neurodegeneration. Finally, future directions are presented to guide upcoming research in the field.

Physical Exercise-Gut Health Relationship

Stress can be defined as the perturbation of any biological system by modifying its components after external (e.g., exercise or diet intervention) or internal (e.g., genetics, prior knowledge, and current adaptations) stimuli. According to the allostasis-interoception model [28], to evoke a healthy biological adaptation in the individual, stress should be maintained in a chronic and periodized manner, while the system must pay the cost for it (i.e., allostatic load). If the magnitude of stress overcomes the system's capacity, an allostatic overload arises, and a pathological state might take place [29]. This has been demonstrated to occur at the physiological and cellular level (for detailed information see the following reviews and meta-analysis: [30][31][32]). Along this line, stress and allostatic load are believed to be significant factors in the relationship between sex/gender and cardiovascular diseases. Longpré-Poirier et al. [33] posit that chronic stress and psychosocial factors may better account for the patterns of increased allostatic load observed in women. On the other hand, biological risk factors and unhealthy behaviors may play a more crucial role in driving increased allostatic load in men. Notably, men exhibit allostatic load patterns that are closely linked to impaired anthropometric, metabolic, and cardiovascular functioning, while women tend to have greater dysregulation in neuroendocrine and immune functioning. Additionally, Wang et al. [34] utilized an integrated micromechanical tool capable of applying controlled mechanical stress to individual cells and simultaneously monitored dynamic subcellular mechanics, observing a biphasic process in individual cell allostasis. This process involves cellular mechanics attempting to return to a stable state through a mechanoadaptive phase with heightened biophysical activity, followed by a decaying adaptive phase. The observations suggest that cellular allostasis is achieved through a complex balance of subcellular energy and cellular mechanics. When subjected to a transient and localized physical stimulation, cells trigger an allostatic state that maximizes energy and surmounts a mechanical "energy barrier", followed by a relaxation state that achieves mechanobiological stabilization and minimizes energy expenditure.
Exercise-induced stress has long been recognized for its numerous benefits to physical and mental health [35]. It has been shown that an effective exercise dose might increase the production of anti-inflammatory cytokines (e.g., interleukin-10) while at the same time decreasing pro-inflammatory molecules (e.g., interleukin-6) [36]. This can help to reduce the overall level of inflammation in the body, which might result in an enhanced immune response (i.e., immunosurveillance) [10]. However, recent research has highlighted the interplay between exercise, inflammation, and gut health as a plausible mechanism for the immunomodulatory effects that are connected to exerkine production [35]. Exerkines are molecules that are characterized as signaling agents and released in response to both acute and chronic exercise. These molecules exert their effects through various pathways, including endocrine, paracrine, and autocrine mechanisms. Numerous organs, cells, and tissues release these factors, with examples including skeletal muscle releasing myokines, the heart releasing cardiokines, the liver releasing hepatokines, white adipose tissue releasing adipokines, brown adipose tissue releasing batokines, and neurons releasing neurokines. The potential roles of exerkines are vast and encompass improvements in cardiovascular health, metabolic function, immune response, and neurological well-being [37][38][39][40]. This suggests that regular physical activity might have a positive impact on the gut microbiota (diversity and function) [1,2], which may also facilitate healthy metabolic shifting [41].

The gut microbiota refers to communities of microorganisms that are made up mainly of Bacteria, Archaea, and Eukarya (fungi, protozoans, and metazoan parasites), as well as eukaryotic and prokaryotic viruses (bacteriophages), that reside in the gastrointestinal tract [42]. As with any other human biological component that contributes to physiological regulation, dysbiosis, an imbalance in the gut microbiota, has been linked to a variety of health issues such as inflammatory disorders [43]. Since inflammation is a key aspect of many chronic diseases (e.g., obesity, diabetes), the gut microbiota has been described to play a crucial role in disease prevention and management through the production of short-chain fatty acids (SCFAs), anti-inflammatory molecules, and subsequent modulation of the immune response [44].

Notably, physically active individuals have a higher abundance of beneficial bacteria and a lower abundance of pro-inflammatory bacterial species [18,44]. In this regard, exercise favors the production of SCFAs by gut microbiota, which can also improve gut-barrier function [45,46]. These exercise-mediated effects on gut health are not limited to healthy individuals, given that regular physical activity can also improve the gut status in individuals with chronic diseases [25]. In the obese population, regular physical activity has been shown to improve gut microbial diversity, which subsequently contributes to a reduction in systemic inflammation [47]. In general, regular exercise has been shown to improve disease symptoms (e.g., abdominal pain) and reduce the need for medication.
Notwithstanding, how does physical exercise regulate the gut microbiota status? Based on the current research, we might establish that it is mediated by exerkines, especially lactate (La−). It is worth noting that La− is a stress-related signaling molecule that plays a key role in allodynamic responses in health and disease [48]. Therefore, it is a biomarker that is frequently used in exercise and sport physiology, as it positively correlates with intensity (stress level). La− is an intermediate product of energy metabolism that is considered one of the key stress-related molecules of human physiology in health and disease rather than a waste or fatigue substance [49]. Some of the pleiotropic effects of La− metabolism include: (i) regulation of energy production (e.g., the Cori cycle, the transition between glycolysis and oxidative metabolism, changes in substrate utilization); (ii) organelle signaling and interoception processes (i.e., cross-talk between organelles and tissues via monocarboxylate transporter isoforms (MCTs)); and (iii) epigenetic control of gene expression (lactylation) [50]. Indeed, exercise training has been shown to enhance the expression of MCT1 and MCT4, which contributes to the higher transport and removal rate of La− [51].

Interestingly, La− disposal, production, and transportation are not only regulated by extrinsic factors such as exercise dose and energy intake (i.e., distal physiology) but also by intrinsic factors like genetic variations in La−-related genes (MCTs) and the microbiota status [52,53]. In recent years, this direct interaction between La− levels and the microbiota status has been reported in different phenotypes, including the obese population and highly trained athletes [54][55][56]. Veillonella atypica, the Eubacterium hallii group, Anaerobutyricum hallii, Anaerostipes, and many other bacterial species can metabolize La− to produce SCFAs and other intermediates that contribute to the microbial diversity and to the enrichment of specific bacterial populations after an exercise period [45]. It should be noted that MCT1 is present as a myocyte membrane transporter and is also expressed in the gut epithelium to facilitate the absorption of SCFAs produced by the gut microbiota [57]. Alternatively, it is plausible that other La− sources beyond the muscle (e.g., bacterial species such as Lactobacillus spp.) impact exercise-induced adaptations by increasing the La− availability to allow La−-utilizing bacteria to produce butyrate and other SCFAs [53]. It seems that this bidirectional interaction mediated by changes in La− levels may be responsible, at least in part, for the exercise-induced changes in the microbiota and the bacteria-related contribution to energy metabolism (SCFAs) and exercise adaptations at the physiological level. Nevertheless, further research is warranted to examine the minimal exercise intensity and the duration of an exercise training program required to positively alter the microbiota status.
Physical Exercise as a Neuromodulator

Physical exercise, regardless of the intensity level, has been proven to be an effective treatment for a wide range of medical conditions. These include cardiovascular [58,59], respiratory [60,61], metabolic [62-64], musculoskeletal [65,66], and neurological [67-69] conditions. Research suggests that exercise and physical activity can lead to changes in brain function and improve the ability to adapt to new challenges and behavior changes [70][71][72]. Additionally, several studies have highlighted the significance of cortisol in certain neurological conditions. The conversion of cortisol to cortisone, in fact, has been shown to increase proportionately with exercise as a response to training. This is essential, as it protects individuals who have undergone training from the negative effects of prolonged elevated cortisol levels [73], including depressive issues and anorexia [74]. The exercise, sport science, and medicine community should delve deeper into the connection between exercise and neural function to further understand the neurobiological mechanisms active during various types of physical activity.

Endurance training in various forms and intensities has been shown to increase endorphins and endocannabinoids, resulting in reduced symptoms of anxiety, sleep disorders, and depression [75,76]. Anandamide, a type of endocannabinoid that is increased during exercise, has been linked to the regulation of physical and psychological stress [69]. In this regard, anandamide might play a role in various brain activities through physiological regulation of stress, anxiety, and post-stress recovery [77]. This can lead to a reduction in overactivity in the amygdala [78]. It should be noted that regulation of the stress response after physical exercise is dependent on the glucocorticoid hormone. Since anandamide is a fatty acid-like molecule, it can readily pass through the blood-brain barrier and contribute to mood regulation via the glucocorticoid pathway [79]. Moreover, there is strong evidence to suggest that anandamide might have a significant role in the increase in brain-derived neurotrophic factor (BDNF) during and after exercise. In fact, anandamide levels remain elevated during recovery, delaying the return to normal levels of BDNF [80]. BDNF is considered the primary molecule responsible for exercise-induced neurogenesis and brain plasticity, in addition to its beneficial effects on learning through its enhancement of synaptic plasticity and long-term potentiation [81].
Furthermore, exercise increases the likelihood of tryptophan crossing the blood-brain barrier, increasing serotonin levels in the brain. This is due to an increase in the uptake of branched-chain amino acids in muscles during exercise [82]. Serotonin is a neurotransmitter that affects thermoregulation, mood, emotional behavior, food intake, and sleep-wake cycles [83]. However, excessive serotonin levels can lead to neurological issues, including mental and autonomic disorders [84]. Dopamine, another neurotransmitter that is increased during and after exercise, plays a role in the early stages of motor control, memory, and cognitive flexibility [85]. Dysfunction in dopamine levels can lead to various conditions, such as schizophrenia, attention deficit hyperactivity disorder, bipolar depression, addiction, and Parkinson's disease [86]. Dopamine also regulates immune functions related to T-cell activation and inflammation [87]. Its receptors play an important role in synaptic plasticity and motor behavior by reinforcing the selection of movements. Considering the aforementioned points, both short- and long-term exercise programs have been shown to enhance cognitive performance and delay neurodegenerative responses. Exercise appears to modulate levels of neurotrophins (e.g., BDNF), hormones (e.g., cortisol), and neurotransmitters (including anandamide, dopamine, and serotonin); however, these effects vary based on factors such as sex, age, and genetics [79]. This neuroregulation caused by exercise appears to be dependent on the intensity of the exercise [88,89] (see Figure 1).

The Role of Exercise in Neurodegeneration

There is robust evidence showing that exercise can enhance neurological function in both healthy adults and those with cognitive impairments [90]. Research suggests that cardiovascular exercise, in particular, can enhance cognitive abilities such as processing speed, attention, and cognitive flexibility [68]. Similarly, strength exercise can improve physical capabilities as well as mental and behavioral conditions [90]. Physical exercise provides certain benefits and affects gut health, brain function, and cognitive function through different pathways. In the following paragraphs, we briefly describe common neurodegenerative diseases and the potential for exercise to alleviate symptoms as part of a non-pharmacological strategy.
Parkinson's disease is a prevalent neurodegenerative disorder that causes progressive and unpredictable damage to the brain [91]. Resulting from the death of dopamine-producing neurons in the brain, Parkinson's disease is characterized by motor dysfunctions, such as difficulty initiating and performing voluntary movements, issues with posture, stiffness, slow movement, muscle rigidity, and problems with coordinating movement sequences. It also often results in behavioral and cognitive impairments [92][93][94]. Exercise is often recommended as a strategy to manage the symptoms and disability caused by Parkinson's disease. Exercise-based programs such as hydrotherapy have been shown to be effective in treating some symptoms of Parkinson's disease, leading to improved motor function, balance, and quality of life [94][95][96]. Alternative therapies like Tai Chi [97], yoga [98], and dance [99] may also help treat Parkinson's disease and improve outcomes like gait, balance, and functional mobility. Other programs like Nordic walking [100], resistance training, and flexibility training have also been effective in improving motor symptoms and functional performance in Parkinson's disease patients. Strength programs, usually accompanied by stretching, balance, and breathing exercises, also suggest improvements in physical and cognitive capabilities [101].

Alzheimer's disease is a progressive and degenerative disorder that affects memory and cognitive function. It is the primary cause of dementia among adults, with age being the main risk factor [102]. Exercise has been shown to be an effective alternative and complementary approach to medication in Alzheimer's due to it having fewer side effects and better adherence compared to drugs [103]. Cardiovascular exercise can reduce the prevalence, morbidity, and mortality caused by Alzheimer's and slow down the rate of decline [102]. Exercise has a neuroprotective role, promoting greater angiogenesis and neurogenesis, reducing inflammation, and decreasing cerebrovascular risk factors [104][105][106]. Long-term exercise programs can prevent the risk factors of Alzheimer's disease, improve blood flow, increase hippocampal volume, and improve neurogenesis [103]. A variety of activities, such as swimming, walking, cycling, yoga, and bowling, have been shown to improve cognitive performance, memory, and executive function. Moreover, a resistance exercise-based program of one hour per week can help reduce the progression of Alzheimer's by improving strength, flexibility, and balance in the long term [102,107]. Studies also suggest that exercise can preserve the volume and integrity of the hippocampus, temporal lobe, basal ganglia, and thalamus [108].
Multiple sclerosis (MS) is a chronic disorder of the central nervous system in which the patient's immune system attacks the myelin sheath surrounding the axons of neurons in the brain and spinal cord [109]. This leads to demyelination, which causes symptoms such as a loss of function and feeling in the limbs, chronic pain, fatigue, balance loss, and cognitive impairment [110]. There is currently no cure for MS, and evidence suggests that MS patients are less active than the general population [109]. Various exercise modalities, such as cardiovascular, strength, and interval training, have been used to treat MS. These interventions, such as cycling and walking-jogging, can help mitigate declines in walking mobility and reduce disease progression [111]. A systematic review found that cardiovascular and mixed exercise can reduce self-reported fatigue in MS patients [112].

Finally, amyotrophic lateral sclerosis (ALS) is a progressive, fatal neurodegenerative disease characterized by symptoms such as fatigue, muscle stiffness, and cognitive impairment [113]. The role of exercise in the treatment of ALS is controversial, but when implemented early in the disease, it can help improve motor function and enhance independence [114,115]. Rehabilitation programs usually focus on avoiding muscle fatigue and damage, and the exercises used include swimming, walking, and cycling at submaximal levels [116].

Complex Interactions between Exercise, Neurodegeneration, and Gut Health

The diversity and composition of the gut microbiota are crucial for several vital functions, including regulation of basic processes such as digestion, as well as facilitating the extraction, synthesis, and absorption of nutrients and metabolites [117]. Furthermore, the gut microbiota status determines the abundance of metabolites, neurotransmitters, and SCFAs produced by the bacteria [118].

In recent years, interest in the connection between the gut microbiota and the gut-brain axis has risen significantly, particularly in relation to neurodegenerative disorders. This is due to evidence suggesting that gut microbiota imbalances might play a role in pathological processes associated with psychiatric and neurological conditions [119]. It has been previously described that the gut plays a crucial role in releasing various hormones, peptides, and microbial metabolites, such as SCFAs, secondary bile acids, and products derived from tryptophan and polyphenols. These substances have significant effects on neuronal function and survival. Notably, many of these compounds can cross the blood-brain barrier (BBB), including SCFAs, which exploit active membrane transporters on the endothelium to reach the central nervous system (CNS) [120]. Conversely, the CNS also sends efferent responses to the gut, thereby regulating important aspects like motility, mucus secretion, barrier integrity, and visceral sensitivity [121]. The communication between the gut and the CNS is bidirectional, and this is why any dysbiosis in the microbiota would impact brain function through this gut-brain axis.
Dysregulation of the gut microbiota has been linked to various neurodegenerative disorders such as Parkinson's, Huntington's, multiple sclerosis, and Alzheimer's [122][123][124]. These diseases have been associated with a decline in the integrity and function of the gut, potentially resulting in increased gut permeability and inflammation. This can create an abnormal environment in the gut [119], which disrupts communication between the gut and the brain. Communication between the gut microbiota and the nervous system may be driven by gut-brain axis pathways that include the enteric nervous system, vagus nerve neuronal connections, the immune system, and metabolism [125]. The enteric nervous system is composed of enteroendocrine cells which receive signals directly from the gut microbiota. These cells can induce the secretion of hormones that cross the BBB and impact the function of brain cells. Furthermore, the vagus nerve is intricately connected to enteroendocrine cells, and it serves as a potentially crucial link between the gut microbiota and the brain. This direct connection allows for bidirectional communication between the gut and the brain, enabling the exchange of signals and information that can influence various physiological and neurological processes. The vagus nerve's involvement in this communication pathway highlights its importance in mediating the gut-brain axis, which could be modulated by exercise and could facilitate interactions between the gut microbiota and brain function. Additionally, immune-signaling mediators such as cytokines, chemokines, and microbial-associated molecular patterns (MAMPs) play a crucial role in facilitating communication between the gut microbiota and the brain. These mediators can interact through both direct and indirect pathways, enabling bidirectional signaling between the gut and the brain. Through these signaling pathways, the gut microbiota can influence immune responses and neuroinflammation in the brain, while the brain can also modulate immune functions in the gut. This intricate immune communication network contributes to the complex interactions of the gut-brain axis and plays a significant role in shaping overall health and well-being. Finally, it is also important to note that products of microbial metabolism, such as short-chain fatty acids (SCFAs) and other microbial-derived metabolites such as tryptophan, act as chemical signals in the host's cells, influencing various aspects of cellular function [126].

Moreover, clinical research has associated gut microbiota imbalances with neurodegenerative disorders [22]. Exercise, therefore, can improve gut health by increasing the diversity of the microbiota and promoting a balance between the beneficial and harmful bacterial communities [127,128]. This suggests that a positive impact on the gut microbiota might influence neurological health [129,130]. Exercise can decrease the transit time of food through the gastrointestinal tract, reducing the exposure of pathogens to the mucus layer in the gut and having a secondary effect on the circulatory system, which in turn reduces the population of harmful pathogens [119,131].
The communication between the gut and the central nervous system (CNS) is very complex, with microbial metabolites such as SCFAs, bile acids (BAs), and tryptophan playing a key role. These compounds bind to receptors in the CNS and affect various functions, including intestinal transport, secretion, and permeability. Additionally, signals from the gut are sent to the CNS through the vagus nerve and other channels, influencing feeding behavior and energy homeostasis. Skeletal muscle also plays a role in this communication, with receptors for SCFAs and BAs found on muscle fibers. This allows the gut microbiota to participate in muscle energy metabolism and fiber conversion. Furthermore, during exercise, myokines secreted by skeletal muscle stimulate the secretion of intestinal hormones, which can further influence food intake and energy balance. The concept of the brain-gut-muscle axis is becoming increasingly recognized as important for regulating energy homeostasis and overall health [132].

The mechanisms by which exercise affects the gut microbiome and alters its components have been studied. A strong connection between exercise, stress-related factors, and the immune response is thought to be the key mediating pathway [119,133]. Animal studies (e.g., mice and rats) have demonstrated that exercise leads to an increase in antioxidant enzymes, anti-inflammatory cytokines, and proteins that prevent cell death in intestinal lymphocytes while also decreasing proinflammatory cytokines and proteins that promote cell death. This leads to a reduction in intestinal inflammation [134,135] and to enhanced immunosurveillance [10], which has been reported in clinical research (see Table 1).

Table 1. Description of the positive effects of exercise on gut microbiota and brain functions.
Gut microbiota: increases Firmicutes and Actinobacteria; increases butyrate-producing bacteria, such as Roseburia hominis, Faecalibacterium prausnitzii, and Ruminococcaceae; increases butyrate concentration; reduces stool transit time in the gastrointestinal tract; increases key antioxidant enzymes (catalase and glutathione peroxidase), anti-inflammatory cytokines (including IL-10), and antiapoptotic proteins (including Bcl-2) in intestinal lymphocytes; decreases proinflammatory cytokines (TNF-α and IL-17) and proapoptotic proteins (caspases 3 and 7), leading to an overall reduction in gut inflammation; increases SCFAs, followed by a decrease in Bacteroides and an increase in Faecalibacterium and Lachnospira; modulates gastrointestinal motility.
Brain functions: decreases anxiety and depression; improves mood; improves motor control; decreases inflammation through T-cell activation; improves memory, long-term potentiation, and cognitive flexibility; improves sleep; improves neurogenesis and brain plasticity; improves brain metabolism through mitochondria.

From a molecular point of view, it is necessary to highlight that La− metabolism is at the convergence between exercise, microbiota, and neurobiology. We have already discussed the influence of exercise-induced stress on microbiota status via higher La− levels and increased MCT content, as well as the enrichment of La−-utilizing bacterial species in the gut and the subsequent higher production of SCFAs to mediate exercise adaptations. However, abnormally elevated and sustained La− concentrations have been linked to the progression of major cellular pathologies that are associated with neurodegenerative diseases [136]. While physical exercise enhances the flux of SCFAs and La− through an increased expression of MCTs, in the progression of neuropathological diseases the tissues are not able to participate in the sequestration and utilization of La−, resulting in an allostatic overload [137]. A recent meta-analysis on post-mortem and in vivo imaging data concluded that increased La− levels and reduced pH are common features of the schizophrenic brain [138]. In addition, there is a marked association between La− concentrations, hyperphosphorylation of Tau (τ) proteins, and cognitive decline in Alzheimer's disease [139]. In general, La− is a stress-related signaling molecule, and the La− production/removal ratio can be positively modified by physical exercise (i.e., higher MCT expression, changes in lactate threshold, and metabolic shifting) [140]. Thus, it is plausible to state that exercise and microbiota regulate the systemic and brain levels of La− through a feedforward positive motif that might result in energy optimization and control of oxidative stress and hydrogen ion (H+) concentrations. These key features are the core for
controlling inflammation and possibly contributing to the prevention and treatment of neurological disorders. This relationship between exercise, gut microbiota health, and cognitive function (the gut-brain-muscle axis) is shown in Figure 2.

Figure 2. The brain-gut-muscle axis. Increases in lactate, characteristic of neurodegenerative diseases, are controlled by exercise, which regulates systemic and brain lactate levels. Additionally, changes in microbiota diversity and the intestinal profile affect the production of SCFAs, which can cross the blood-brain barrier. Under neurodegenerative conditions, changes in mood, behavior, and cognition, together with alterations in the blood-brain barrier and the inflammatory state, have been reported, leading to neuronal death. The release of brain-derived neurotrophic factor (BDNF) during and after exercise contributes to neuroplasticity, improving the neurodegenerative condition.
Gut Microbiota Changes Given the link between gut dysfunction and the gut microbiome in neurodegenerative diseases and the effects of physical exercise on the gut microbiome, further research is needed to confirm whether exercise can partially modulate neurodegeneration through the gut microbiome [119,141].One proposed mechanism is linked to the improvement in mitochondrial dysfunction found in neurodegenerative disorders.It has been shown that both acute and chronic exercise can initiate dynamic processes in mitochondria, including biogenesis, fusion, fission, and mitophagy [142].One study found that exercise training can enhance the metabolic and genetic capabilities associated with the tricarboxylic acid (TCA) cycle.In contrast, non-exercised mice with obesity induced by a high-fat diet (HFD) exhibited a reduced metabolic capacity in their fecal microbiota [143].These findings suggest that exercise has a positive impact on mitochondrial function and the gut microbiota, potentially contributing to improved neurodegeneration.The implementation of multimodal strategies that include exercise, diet, sleep hygiene, and psychological therapy has been shown to be highly relevant in the treatment and management of patients with degenerative diseases.By addressing both physical and mental health needs, multimodal treatments have the potential to slow the progression of degenerative diseases, improving the quality of life and overall well-being of patients.It is important for healthcare professionals to consider a holistic approach for treating these conditions to achieve optimal results. Future Research Directions Research on the interplay between gut microbiota, neurodegeneration, and physical activity is an emerging field that is rapidly gaining attention in the scientific community.Future research in this area will likely focus on several key areas. First, more research is needed to understand the specific mechanisms by which gut microbiota, neurodegeneration, and physical activity are linked.For example, studies are needed to confirm the convergence of La − metabolism and also to identify the specific gut microbial species and alternative metabolic pathways that may play a role in neurodegeneration, as well as the precise ways in which exercise impacts the gut microbiome. Second, further clinical research is warranted to understand the acute and chronic responses among different populations.It is important to understand how these interactions vary among different age groups, sexes, ethnicities, and lifestyles, as these factors may play a role in the susceptibility to neurodegeneration. Third, the field will likely move towards an integrated perspective to study gut microbiota, neurodegeneration, and physical activity by considering the impact of both diet and stress on the gut-brain axis.Diet is known to play a crucial role in shaping the gut microbial ecosystem.Since stress can exacerbate inflammation in the gut, it is necessary to consider effective dietary interventions (e.g., probiotic and prebiotic intake) in conjunction with an exercise program, which may improve gut microbial health and prevent or slow down the progression of neurodegeneration [144]. Lastly, it will be important to consider the ethical implications of manipulating the gut microbiome to treat or prevent neurodegenerative diseases.Researchers will need to consider the potential risks and benefits of such interventions as well as the potential long-term effects on gut microbial health and overall well-being. 
Conclusions

Studies have shown that the gut microbiota plays a crucial role in maintaining overall health and well-being, and that there is a dynamic interplay between physical exercise, the gut microbiota, and neurodegeneration. Regular and effective exercise has been shown to modulate gut microbial diversity and function, with positive implications for gut health and overall well-being.

Although more research is needed, it seems that La − metabolism is the convergent mechanism by which physical activity, the gut microbiota, and the progression of neurodegeneration are linked. Several studies have shown the effects of exercise on the La − production/removal ratio and La − flux regulation, the La − -consuming function of certain bacterial species (e.g., Veillonella atypica, the Eubacterium hallii group, Anaerobutyricum hallii, and Anaerostipes, among others), and the pathophysiological concentrations of La − associated with neurodegenerative disease progression. More research is needed to uncover the time course and features of these complex interactions. It is also important to consider the ethical implications of manipulating the gut microbiome to treat or prevent neurodegenerative diseases. Researchers should consider the potential risks and benefits of such interventions, as well as the potential long-term effects on gut microbial health and overall well-being. Future research should focus on developing and testing interventions to improve gut microbial health and prevent or slow down the progression of neurodegenerative diseases.

Overall, the interplay between physical exercise, the gut microbiota, and neurodegeneration is a complex and multifaceted topic that requires further research to fully understand the underlying mechanisms and interactions.

Figure 1. The benefits of exercise and its influence on brain functions such as anxiety, sleep, mood regulation, cognition, and inflammation. Created using Biorender.com (accessed on 4 August 2023).

Table 1 lists the reported effects of exercise on the gut microbiota and on brain function:
- Increases Firmicutes and Actinobacteria
- Decreases anxiety and depression
- Increases butyrate-producing bacteria, such as Roseburia hominis, Faecalibacterium prausnitzii, and Ruminococcaceae
- Improves mood
- Increases butyrate concentration
- Improves motor control
- Reduces stool transit time in the gastrointestinal tract
- Decreases inflammation through T-cell activation
- Increases key antioxidant enzymes (catalase and glutathione peroxidase), anti-inflammatory cytokines (including IL-10), and antiapoptotic proteins (including Bcl-2) in intestinal lymphocytes
- Improves memory, long-term potentiation, and cognitive flexibility
- Decreases proinflammatory cytokines (TNF-α and IL-17) and proapoptotic proteins (caspases 3 and 7), leading to an overall reduction in gut inflammation
- Improves sleep
- Increases SCFAs, followed by a decrease in Bacteroides and an increase in Faecalibacterium and Lachnospira
- Improves neurogenesis and brain plasticity
- Modulates gastrointestinal motility
- Improves brain metabolism through mitochondria
PNPLA3-I148M: a problem of plenty in non-alcoholic fatty liver disease ABSTRACT Fatty liver disease (FLD) affects more than one-third of the population in the western world and an increasing number of children in the United States. It is a leading cause of obesity and liver transplantation. Mechanistic insights into the causes of FLD are urgently needed since no therapeutic intervention has proven to be effective. A sequence variation in patatin like phospholipase domain-containing protein 3 (PNPLA3), rs 738409, is strongly associated with the progression of fatty liver disease. The resulting mutant causes a substitution of isoleucine to methionine at position 148. The underlying mechanism of this disease remains unsolved although several studies have illuminated key insights into its pathogenesis. This review highlights the progress in our understanding of PNPLA3 function in lipid droplet dynamics and explores possible therapeutic interventions to ameliorate this human health hazard. Introduction Fatty liver disease is a burgeoning health hazard that affects more than one-third of the population in the western world [1]. It is divided into two types: nonalcoholic and alcoholic. Non-alcoholic fatty liver disease (NAFLD) affects~25% population worldwide and is most prevalent in the Middle East and South America with the lowest incidence in Africa [2]. NAFLD is typically associated with obesity and insulin resistance [3,4]. Its progression is characterized by four stages: Steatosis (first) leading to non-alcoholic steatohepatitis (NASH), a condition characterized by inflammation and ballooning (second). This condition may develop into organ impairment or cirrhosis (third) leading to the end stage of hepatocellular carcinoma (fourth) necessitating liver transplantation [5]. NAFLD will soon overtake Hepatitis C as the leading indication of liver transplantation [6]. NAFLD has multifactorial pathogenesis involving a close interplay of environmental factors and genetic determinants. There are numerous studies, suggesting ethnic difference as the major cause of hepatic fat [7,8]. The first clinical evidence of association of a variant of the PNPLA3 (aliases Adiponutrin, calcium-independent phospholipase A2 epsilon) gene, rs738409 C > G with NAFLD development was provided by Romeo et.al who demonstrated that the frequency of the PNPLA3-I148M variant was significantly higher in the Hispanics (49%) compared to the European Americans (23%) and African Americans (17%). The prevalence of hepatic steatosis as measured by proton magnetic spectroscopy was higher in the Hispanics (45%) than European Americans (33%) and African Americans (24%) [4]. This variant was also associated with alcoholic liver disease [9] and accumulation of hepatic fat in studies across different ethnicities and geographical region [10]. Pnpla3 was first cloned from a cDNA library of 3T3 preadipocytes during differentiation into mature adipocytes and hence was named 'adiponutrin' [11]. In mice, Pnpla3 is highly expressed in white and brown adipocytes and modestly expressed in the liver (Ct = 25).In mice, this gene is nutritionally regulated in response to carbohydrate and insulin treatment [11][12][13]. In humans, however, its expression is 10-fold higher in the liver than adipose tissue [14]. Nutritional regulation of PNPLA3 is robust in humans similar to mice. Low-calorie diet reduces PNPLA3 expression in the adipose tissue that gets upregulated on refeeding [15] by both insulin and glucose [16]. 
In liver, PNPLA3 is positively associated with body mass index(BMI) [17]. Thus,PNPLA3 demonstrates nutritional regulation. [18]. Patatin is a major protein of potato tuber with nonspecific lipid acyl hydrolase activity [19,20]. It is a storage protein but cleaves fatty acids from membrane lipids by its lipase activity [21]. PNPLA3 is variedly expressed in human and mice. In humans, PNPLA3 is highly expressed in the liver and retina. In the liver, it is expressed in hepatocytes, stellate cells and sinusoidal cells [22][23][24][25][26]. In mice, Pnpla3 is expressed in brown and white adipose tissues, adrenal gland, skeletal muscle, heart and liver. Unlike humans, its expression is higher in the adipose tissue compared to liver [11,27]. In the liver, Pnpla3 levels are higher in hepatocytes than stellate cells [23]. The PNPLA3-I148M do not show an association of high liver fat content with insulin resistance [4,28]. Elevated diacyl glycerol (DAG) is associated with insulin resistance in rodents and humans with steatosis yet PNPLA3-I148M is not associated with changes in insulin sensitivity [29]. This disconnect is due to the unaltered proportion of DAG (FA 18:1) in PNPLA3-I148M carriers [30]. Irrespective of differential tissue expression, murine models have provided significant insights into the pathogenesis of NAFLD caused by PNPLA3-I148M mutant. In mice, overexpression of the human PNPLA3 and PNPLA3-I148M in adipose tissue did not reveal any change in morphology or function of either white or brown fat or cold tolerance. The increased liver fat was associated with only liver specific overexpression of PNPLA3-I148M suggesting that the fatty liver phenotype is associated with the disease mutant in liver rather than the adipose tissue [31]. However, this model has limitations. The sequence homology of human and mice PNPLA3 is 68%. The human PNPLA3 is 481 amino acid in length while the mouse PNPLA3 is 384. The extended human PNPLA3 has two vesicle targeting motifs [32]. The transgenic mouse constitutively overexpresses human PNPLA3 and is not nutritionally regulated. To overcome these shortcomings a knock-in (KI) model was developed by Smagris et al., that developed steatosis on high sucrose challenge with pronounced accumulation of the mutant at hepatic lipid droplets [33].Pnpla3 is nutritionally regulated. It is reduced on fasting and robustly expressed on high carbohydrate refeeding by insulin [11,23,27,34]. Previous studies have shown that PNPLA3 is located at the lipid droplets and the PNPLA3-I148M is associated with droplets of larger size with reduced triglyceride(TG) hydrolysis [35][36][37]. Biochemical fractionation studies have indicated that more than 90% of the cellular PNPLA3 pool resides at the lipid droplets [35]. Thus, PNPLA3 is a predominantly lipid droplet resident protein. In mice, the wild type PNPLA3 is undetectable 6h post fasting whereas this effect this blunted and the protein persists till 12h fasting in PNPLA3-I148M KI mice due to reduced ubiquitylation and impaired proteasome targeting [38]. It is possible that the disease variant alters the conformation of PNPLA3 and reduces the access of E3 ligase. The mutant might trap the substrate in a conformation unrecognizable by the E3 ligase or it may be rapidly deubiquitylated thereby reducing turnover. This makes it imperative to identify the ubiquitylation sites of PNPLA3 to explore therapeutic opportunities. 
In mice, hepatic Pnpla3 is regulated under the transcriptional control of SREBP1c (Sterol regulatory binding protein 1c) and ChREBP (Carbohydrate response element binding protein) [39]. In humans, two independent studies have shown that it is regulated exclusively by SREBP1c [23,39] while another group reported its regulation by glucose through ChREBP [40]. SREBP1c is ubiquitylated and targeted to the proteasome by E3 ligase Ring Finger Protein 20 (RNF20) to regulate hepatic lipid metabolism [41]. Transcriptional program helps to maintain long-term controls, but deactivation or degradation of excess protein is an added layer of lipid homeostasis. PNPLA3 is under acute nutritional control possibly by post-translational modification that fine-tunes its regulation. PNPLA3 function PNPLA3 has been shown to have different enzymatic activities. PNPLA3 expressed in Sf9 insect cells demonstrated TG lipase and acylglycerol transacylase activities [42]. Similar activities were reported in a cell-free system [27]. In 2011, Huang et al. reported that PNPLA3-I148M mutant showed an impaired TG lipase activity suggesting a loss of function in the development of steatosis. In the same report, they did not observe acyltransferase activity of either the PNPLA3 wildtype or PNPLA3-I148M mutant [43]. However, in an independent study trigger factor fused soluble PNPLA3 was shown to have a lysophosphatidic acid acyl transferase (LPAAT) activity which increased when PNPLA3-I148M was overexpressed, suggestive of a gain of function mutation [44]. However, a subsequent study failed to detect any increase in TG biosynthesis in the PNPLA3-I148M KI mice model of hepatic steatosis arguing against such a possibility [38]. A possible explanation for this discrepancy is since the proteins were purified from E.coli it likely co-purified an endogenous LPAAT as reported in a subsequent study [45]. Purified PNPLA3 and PNPLA3-I148M from Pichia pastoris (yeast) showed robust TG hydrolase and modest acyltransferase activities [46]. PNPLA3 is well expressed in human liver and retina necessitating further studies on its activity towards retinyl esters in human stellate cells. Purified PNPLA3 showed retinyl palmitate hydrolase activity which was impaired in the PNPLA3-I148M mutant leading to massive accumulation of retinyl esters in these cells [24]. Humans harbouring the PNPLA3-I148M mutation were found to have lower levels of circulating retinol and retinol binding protein 4 and had a higher content of hepatic retinyl esters suggesting a role of PNPLA3 in regulating the release of retinol from stellate cells [25,47]. PNPLA3 has also been suggested to play a role in lipid remodelling in hepatic TGs. Human hepatocytes expressing PNPLA3 and PNPLA3-I148M had higher levels of very long-chain polyunsaturated fatty acids (PUFA) in the phospholipids. The PNPLA3-I148M mutant was associated with lower levels of arachidonic acid in hepatic TGs [37,48,49]. Consistent with these reports, Mitsche et al., reported that PNPLA3 and PNPLA3-I148M transfer PUFA from TGs to phospholipids. Arachidonic acid was the major lipid transferred to phospholipids in hepatic lipid droplets of PNPLA3-I148M KI mice. This function was not observed in the catalytically dead PNPLA3 -S47A mutant and PNPLA3 knockout mice [50]. Arachidonic acid is also the major substrate of membrane-boundo acyl transferase domain containing protein7 (MBOAT7) [51], a mutant of which is implicated in alcoholic cirrhosis [52]. 
In sum, phospholipid remodelling and NAFLD susceptibility seem to go hand in hand. Studies in mice models have shed considerable light on PNPLA3 function. These reports indicate that the PNPLA3-I148M is neither a gain nor loss of function mutation but a neomorph. Animal models of Pnpla3 knockout do not exhibit hepatic TG accumulation hence do not develop steatosis [53,54]. Experiments in transgenic mice models argue against a gain of function of PNPLA3-I148M mutant. Mice overexpressing human wild type PNPLA3 have TG levels similar to the non-transgenic mice, although PNPLA3-I148M overexpression recapitulates the human steatosis phenotype [31]. PNPLA3 shares most sequence homology (~46%) with PNPLA2 or ATGL that plays a crucial role in the rate-limiting step of TG hydrolysis [55]. Unlike other lipases that have catalytic triad (Ser-His-Asp), patatins use a catalytic dyad (Ser-Asp). Ser47 is a conserved residue of the hydrolase motif (Gly-X-SerX-Gly) that lies between a beta strand and an alpha helix. The crystal structure of the heartleaf horse nettle patatin is similar to the homology modelling of PNPLA3 [19,35]. Two contrasting views are in the field regarding the PNPLA3 structure. There are reports suggesting that PNPLA3 has membrane-spanning domains on the basis of secondary structure prediction [11] with a strong association with endoplasmic reticulum and lipid droplets [27,42] yet other reports of homology modelling of PNPLA3 suggest the alpha helixes to be part of the globular structure precluding the membrane span [35]. The crystal structure of PNPLA3 is yet to be determined. Degradation pathways of PNPLA3 Murine models of fatty liver have unravelled significant insights into the mechanistic basis of PNPLA3-I148M induced steatosis. PNPLA3 is ubiquitylated and targeted to the proteasome [23]. The mutant PNPLA3-I148M as well as the catalytically dead PNPLA3-S47A accumulate at the hepatic lipid droplets on high sucrose feeding in KI mice [33]. PNPLA3-I148M continues to sustain at lipid droplets on prolonged fasting due to impaired ubiquitylation and proteasome targeting [38]. It is possible that the mutant undergoes a conformational change that restricts access of the E3 ligase. Both PNPLA3-I148M and PNPLA3 -S47A impair the catalytic activity of the enzyme by reducing its V max and neither impairs substrate binding [43]. It is possible that the uncleaved substrate traps the enzyme in a conformation inaccessible to the E3 ligase. An alternative possibility is that both these mutants are rapidly deubiquitylated and stabilized on the droplets. Binding to lipid droplets inhibit proteasomal degradation of several lipid droplet proteins [56,57]. In vivo inhibition of proteasome by Bortezomib (8 h) showed that the levels of PNPLA3 could not match PNPLA3-I148M in the transgenic mice suggesting the contribution of other degradation pathways in its turnover. Although inhibition of macroautophagy by 3-methyladenine failed to elicit an increase in the wild type protein, it may be due to partial blockade of this pathway as reported earlier [58]. Involvement of chaperone-mediated autophagy as observed for other lipid droplet proteins cannot be ruled out [59]. Human genetic studies indicate that accumulation of PNPLA3-I148M mutant is required for the development of NAFLD [25]. A naturally occurring PNPLA3 polymorphism, rs2294918, E434K variant was linked to reduced hepatic PNPLA3 protein abundance [25]. 
Carriers of the PNPLA3 I148M-K434 variant did not predispose to liver damage in contrast to the risk variant I148M-K434E [25]. Although it is not known if K434 gets ubiquitylated, predictive algorithms suggest this residue to be a good candidate [60]. Human PNPLA3 has 19 lysine residues while the truncated mouse protein has 12. The K434 residue is absent in mouse PNPLA3. It is possible that PNPLA3 gets ubiquitylated at multiple lysine residues at the patatin domain and downstream at the C-terminus. A recent report showed K100 residue of PNPLA2/ATGL to be the main site of ubiquitylation with COPI as its ubiquitin ligase [61]. This residue is conserved in human PNPLA3. Three other residues in the patatin domain: K92, K135 and K179 are conserved in human and mouse PNPLA3 as well as human and mouse ATGL. All the above lysine residues are potential ubiquitylation sites. A recent report suggests a lysine independent cysteine ubiquitylation of ACAT2 regulated by cholesterol and fatty acids [62]. PNPLA3 can have a similar mechanism of ubiquitylation. Human PNPLA3 has 18 cysteine residues while mouse PNPLA3 has 17 that could serve as potential ubiquitylation sites. In sum, identification of specific ubiquitylation sites is needed to determine the mechanism of evasion of ubiquitylation by the disease mutant. In addition to the proteasome, PNPLA3 is also targeted to the autophagy pathway [63]. This has been demonstrated in vitro and in vivo by pharmacological intervention and knockdown of a key component (ATG7) of macroautophagy [64]. This is congruent with findings on the interaction of ATGL with LC3 through its conserved LC3 interacting region (LIR) motif to modulate lipophagy [65]. PNPLA3 has also been suggested to play a role in lipophagy [63] although no change in hepatic TG levels were observed in PNPLA3 knockout mice [64]. The role of PNPLA3 in autophagy and specifically hepatic lipid metabolism remains obscure and warrant further studies. Therapeutic strategies NAFLD is a complex disorder and therapeutic options are limited. To date, no drugs have been approved by the Food and Drug Administration (FDA) for its treatment [66]. PNPLA3-I148M causes fatty liver. If the mechanistic basis for steatosis is similar in mice and humans then therapeutic interventions that lower PNPLA3-I148M protein levels are likely to be beneficial in ameliorating the condition in carriers with this variant. PNPLA3-I148M accumulation is a prerequisite for steatosis [64]. PNPLA3 function and its altered activity in the disease variant have thrown up many attractive drug targets (Figure 1). Downregulation of Pnpla3 with antisense oligo nucleotides in rats resulted in 20% hepatic fat reduction and enhanced insulin sensitivity [67]. Genetic suppression of Pnpla3 is an attractive strategy for regulating the protein expression in PNPLA3-I148M carriers. This strategy has been demonstrated to be effective in ameliorating fibrosis and steatosis in PNPLA3-I148M KI mice [64,68]. PROTAC3 mediated degradation of Halo tagged PNPLA3-I148M significantly reduces hepatic TG in mice challenged with high sucrose diet [64]. This therapeutic approach could be potentially useful in NAFLD patients harboring the PNPLA3-I148M variant. Under fasting, insulin levels are low. As a result, the SREBP-1c gene is not actively transcribed, nuclear SREBP-1c levels are low and Insig-1 mRNA and protein levels are low. In contrast, Insig-2 levels are high, owing to the Insig-2a transcript [69]. 
Pnpla3 is robustly expressed by SREBP1c on insulin mediation [70]. Insulin also induces SREBP-1c transcription, nuclear SREBP-1c activates the Insig-1 gene, and Insig-1 mRNA and protein levels rise to higher levels than in the basal nonfasting state. Insig-2 gets replaced by Insig-1 upon refeeding [69]. Developing SREBP1c inhibitor can specifically modulate PNPLA3 expression at the transcriptional level. SREBP1c cleavage is activated by SREBP cleavage-activating protein (SCAP) [71]. Inhibition of SCAP activity could provide a broad spectrum effect in ameliorating steatosis [64]. Determination of E3 ligase(s) of PNPLA3 will help design small molecule activators to specifically target the accumulated protein for proteasomal degradation [64]. It remains to be tested if competition for CGI-58 between ATGL and PNPLA3-I148M reduces the fraction of ATGL bound CGI-58 in PNPLA3-I148M KI mice. Data from transgenic mice argue against this hypothesis [72,73]. Interaction of CGI-58 and ATGL is well documented, and knockout of liver CGI-58 results in steatohepatitis and fibrosis in mice [74,75]. PNPLA3 was shown to be colocalized with CGI-58 [36]. A recent report suggests that the pro-steatotic effect of PNPLA3 requires the presence of CGI-58 consistent with a model where PNPLA3-I148M promotes sequestration of CGI-58 from ATGL to impair its lipase activity at the lipid droplets [76]. It is possible that during fasting, CGI-58 is sequestered to the PNPLA3-I148M enriched lipid droplets to reduce the available pool for ATGL. The ubiquitin defective PNPLA3-I148M mutant accumulates at hepatic lipid droplets even after 12 h of fasting concomitant with CGI-58 [38]. PNPLA3-I148M interacts with CGI-58 possibly to impair ATGL activity as the basis of steatosis. If PNPLA3-I148M sequesters CGI-58 away from ATGL then perturbing this interaction with small molecules can prove to be effective. In sum, small molecule intervention to modulate enzymatic activities of these proteins has potential therapeutic benefit in NAFLD. Concluding remarks PNPLA3 is an attractive target for treating NAFLD. The variant PNPLA3-I148M increases the risk of NAFLD progression. Mechanistic insights from animal models and mammalian cell lines indicate conspicuous changes in lipid droplet dynamics. Future studies will focus on identifying targets that can modulate PNPLA3 expression or alter its activity to ameliorate NAFLD in patients harboring the disease variant. PNPLA3 is robustly expressed on feeding by upregulation of SREBP1c which gets escorted by SCAP from the endoplasmic reticulum to Golgi complex to be cleaved by S1 proteases. Insig-1 that prevents binding of SCAP to SREBPs gets degraded at the proteasome in the process. The mature form of SREBP1c translocates to the nucleus to upregulate Pnpla3 expression. SREBP1c is degraded at the proteasome by its E3 ligase RNF20. PNPLA3 is a known substrate for proteasomal degradation although its specific E3 ligase is unknown. It is also known to be targeted to autophagy. Activation of proteins marked by and inhibition marked by are attractive drug targets for PNPLA3-I148M induced fatty liver.
Coherence indicators of generators for express assessment of electric power system transient stability . A brief overview of the generator coherence indicators is presented, the areas and limitations of their applicability are indicated. The application of the area method for the rapid assessment of the transient stability of complex multi-machine electric power system based on the features of the dynamics of the system behavior, determined by the heterogeneous structure of the electrical network, is considered. A distinctive feature of the proposed approach is the use of the characteristics of the heterogeneity of the power system structure, which determine the presence of weak links (or cut-sets) of the electrical network, which are critical from the point of view of a possible violation of the system stability. Introduction Due to the instability of the electricity market and the growing demand for electricity, modern EPSs (electric power systems) are forced to work closer and closer to the limits of their stability.This makes the system more vulnerable to internal failures and external disturbances and increases the risks of stability loss. Conventionally, the assessment of transient stability comes down to detailed simulation of EPS responses to disturbances -that is, to calculations of transients (this is the numerical integration of a system of nonlinear differential equations of large dimensions).However, a disturbance is unpredictable, and the calculations should be carried out at least with a minimum, but in advance -the calculation itself takes a lot of time, and it still takes time to develop control actions. The key points of the proposed approach to the analysis of the transient stability of EPS are the following: 1. Study of heterogeneities in electric power systems.2. Revealing the coherence of the movement of generators under disturbances.3. Reducing the models of dynamics of electric power systems.4. Study of stability using reduced models. In view of the foregoing, an express assessment of the transient stability of a complex EPS using the area method for a given studied conditions, including a given network topology, electric mode and disturbance, is performed as follows: 1) Using structural analysis algorithms, weak (dangerous from the point of view of a possible violation of the stability of the system under a specific disturbance) cut-sets of the studied network are determined and ranked according to the degree of weakness; 2) For each weak section, as the degree of weakness decreases, a two-machine equivalent is formed; 3) An assessment of the transient stability by the area method is carried out for each of the obtained twomachine equivalents as the weakness of the sections decreases.The evaluation ends at the section for which the applied area method no longer violates the stability of the system; 4) If for any of the network cut-set, according to the above algorithm, a violation of the stability of twomachine equivalents is not recorded, we can assume that the stability of the EPS in a given studied conditions is not violated.Otherwise, the estimates show a violation of the stability of the system for a given network topology, electric mode and disturbance. 
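As a rough illustration, the sketch below strings these four steps together; the ranked cut-sets, the equivalencing routine, and the area-method check are represented by hypothetical callables supplied by the caller, not by routines from the paper.

```python
def express_assessment(weak_cutsets, build_equivalent, is_stable):
    """Express transient-stability check over ranked weak cut-sets.

    weak_cutsets     : cut-sets ordered from weakest to strongest
    build_equivalent : callable forming a two-machine equivalent for a cut-set
    is_stable        : callable applying the area criterion to an equivalent
    (both callables are placeholders for the steps described in the text)
    """
    violated = []
    for cutset in weak_cutsets:
        equivalent = build_equivalent(cutset)
        if is_stable(equivalent):
            # Stronger sections need no further checking.
            break
        violated.append(cutset)
    # The studied conditions are judged stable only if nothing was violated.
    return len(violated) == 0, violated

# Toy usage: pretend the two weakest cut-sets violate the criterion.
verdict, bad = express_assessment(
    weak_cutsets=["cut A", "cut B", "cut C"],
    build_equivalent=lambda c: c,
    is_stable=lambda eq: eq == "cut C",
)
print(verdict, bad)   # False ['cut A', 'cut B']
```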
Heterogeneity of EPS and coherence of movement of generators

Structural heterogeneity is a fundamental property of any system with a complex structure. The structure of the electrical network of a complex EPS is always heterogeneous and includes strongly connected subsystems and weak connections between them. It is important to identify this heterogeneity, quantify it, and use it in modeling the EPS, in its study, and in the control of electric modes [1].

During operation, the EPS is subject to disturbances and reacts to them by changing the parameters of the system electric mode. This reaction is determined both by the magnitude and location of the disturbance and by the internal properties of the system itself. The nodes and connections (sections) of the system whose electric mode parameters are the first to reach unacceptable values are called weak points.

When simulating electromechanical transients and evaluating the stability of an EPS under disturbances, the presence of strong connections between the generators of a strongly coupled subsystem determines the coherence (identity, consistency in time) of the movement of the generators in transients, as do the large reserves of transfer capability on the connections between generators, which guarantee their mutual stability within the subsystem. On the contrary, weak connections between strongly connected subsystems create threats of stability violation. Because of the limited transmission capacity of weak links, it is along them that violations of system stability usually occur during disturbances. Therefore, the identification and quantitative assessment of weak links in the structure of the electrical network are important tasks in the study of stability and in determining control actions to ensure it [2].

The purpose of the study of heterogeneities is to identify weak links in the structure of the electrical network and thereby determine the strongly connected subsystems in this structure. Violations of EPS stability and the cascade development of emergency processes during regime changes will occur primarily through weak links (between subsystems) and less likely through stronger links (within subsystems). Therefore, the stability of the EPS must be analyzed primarily with respect to weak links.

Structural inhomogeneity of the EPS determines the specifics of the movement of the system generators in the transient electromechanical process, namely, their coherent movement. The coherence of the movement of generators is an objective basis for reducing the mathematical model of EPS dynamics by aggregating (combining) the generators included in the same subsystem.

The motion of generators $i$ and $j$ is coherent on an interval $[0,T]$ if

$$\bigl|[\delta_i(t) - \delta_j(t)] - [\delta_i(0) - \delta_j(0)]\bigr| \le \varepsilon, \qquad t \in [0,T], \qquad (1)$$

where $\delta_i(t)$ and $\delta_j(t)$ are the angles of the rotors of generators $i$ and $j$ as functions of time in a single coordinate system and $\varepsilon$ is a small tolerance. Identification of the coherence of the movement of generators during disturbances consists in identifying groups of generators whose mutual (relative to each other) movement in the transient is close to coherent.

Metrics and similarity/difference matrices for identifying coherent groups of generators

Coherence can be determined by directly comparing the motion curves of the generators (which requires numerical integration of the transient). Thus, in Fig. 1, in studied conditions a) and b), generators 1 and 2 are more coherent with each other than either of them is with generator 3. In case c), generators 2 and 3 have the greatest mutual coherence.
In addition, coherence can be estimated by formal The equivalent network of the investigated EPS for calculating the indicators of similarity (coherence) or difference (non-coherence) of generators, in structural analysis is obtained from the conventional network used to calculate steady-state electric modes, by: • representation of loads by constant admittances, generators by a two-node equivalent (bus -transient impedance -transient electromotive force (EMF)), power of the primary engine (in the equations of motion of the generator rotors) -by a constant, • exclusion of all nodes that do not contain EMF. Any pair of equivalent network nodes turns out to be interconnected by an equivalent connection.From the point of view of interaction and mutual influence of generators, EPS can be represented as a complete graph, at the vertices of which generators are connected, and the edges represent the relationship between generators.Edges can be assigned some numerical characteristics that determine the degree of interaction and mutual influence of generators.Then the response of the system as a whole to the disturbance, instead of the curves of the movements of the generators, can be described by a matrix of similarity or difference indicators of the movements of the generators in pairs.On the basis of such matrices, a classification (identification of coherent groups) of generators can be made.The result of it is a set of nested subsystems (groups of generators of greater or lesser coherence) for a given studied conditions. The simplest indicators of coherence are determined through the admittance of the electrical network, in other cases they take into account the electric mode parameters and dynamic parameters of the EPS.In this case, the force of mutual influence of two generators can be interpreted as the strength of the connection between them and as the similarity (coherence) of their movement in the transient. Of the formal indicators of the similarity of generators, two have been conventionally and most widely used (see, for example, [3,4]): • "Distance-admittance", defined as the admittance of connections of the equivalent network obtained from the initial network (for generator models "EMF behind transient impedance") with the exception of all nodes that do not contain EMF.These measures describe how closely two generators are electrically connected.• "Distance-reflection" as the magnitude of the acceleration components (or active unbalance on the shaft) of one machine due to a change in the angle of the other.Unlike distance-admittance, this measure reflects the dynamic influence (synchronizing power) of one generator on others as a result of disturbance. Note that here the historically accepted term "distance" is inaccurate, since both of the indicators are essentially measures of similarity (proximity). In addition, the problem of determining coherence was solved using such features as: • symmetry of generators relative to the system, initial mutual accelerations of the rotors [5,6], • differences in the angular deviations of the EMF of generators in a pair from their initial value at a given time interval, determined by a linearized model, [7,8], • equality of synchronizing powers, observance of stability conditions within the group, and a number of others [3,4,[9][10][11]. 
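Whatever indicator is chosen, once a symmetric pairwise similarity or difference matrix is available, the grouping itself can be carried out with standard agglomerative (hierarchical) clustering. The sketch below is a generic illustration using SciPy, not the authors' procedure; it assumes the non-coherence matrix has already been computed with one of the indicators above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def coherent_groups(noncoherence, threshold):
    """Group generators from a symmetric pairwise non-coherence matrix.

    noncoherence : (n, n) array, zero diagonal, larger value = less coherent
    threshold    : cut level below which generators are merged into one group
    """
    # SciPy's linkage expects a condensed distance vector.
    condensed = squareform(np.asarray(noncoherence, dtype=float), checks=False)
    tree = linkage(condensed, method="average")    # dendrogram of nested groups
    labels = fcluster(tree, t=threshold, criterion="distance")
    groups = {}
    for gen, lab in enumerate(labels):
        groups.setdefault(lab, []).append(gen)
    return list(groups.values())

# Example: three generators, 0 and 1 nearly coherent, 2 apart.
d = np.array([[0.0, 0.1, 1.0],
              [0.1, 0.0, 0.9],
              [1.0, 0.9, 0.0]])
print(coherent_groups(d, threshold=0.5))   # -> [[0, 1], [2]]
```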
In various sources, the following were considered as indicators of the similarity of generators: • electrical connectivity (structural maximum as the capacity of an equivalent connection) [12]; • dynamic (taking into account inertia) connectivity [11] or dynamic connectivity, determined using pairs of eigenvalues of the linearized model of EPS dynamics [13]; • the severity of the disturbance for a pair of generators based on the values of the Lyapunov function, written as an integral of energy for the mathematical model of the EPS dynamics in positional idealization [14]. For a more accurate assessment of coherence, it is often recommended to numerically calculate the initial stage of the transient using a linearized or nonlinear model [4,15].An analytical algorithm for estimating motion coherence based on the area method for pairs of generators was developed in [10,13] using the assumption of invariance in the transient of the mutual acceleration component of a pair of machines, determined by the change in the angles of the machines of the remaining part of the system.For the same purposes, indicators of the influence of disturbances on the behavior of EPS elements can be used, additionally taking into account the parameters of the disturbance and the dynamic characteristics of generators [9,10]. Let's assume that the reference (accepted obviously correct) process of classification of generators is known, obtained on the basis of numerical integration of the electromechanical transient process and visual analysis of integral curves.Then, evaluating the similarity with it of classification processes based on matrices of formal indicators of similarity or difference of generators, it is possible to determine to what extent this or that indicator is adequate to the task of identifying groups of coherent generators -that is, whether and in what cases this indicator is a measure of coherence generator movements. Energies of acceleration and deceleration The indicators based on the values of the energies of the mutual acceleration and actually implemented in the transient braking of generators are determined by integrating the mutual acceleration over the mutual angle (Fig. 2).The integration result contains three components: 1) Constant component reflects the difference in the design and operating parameters of a pair of generators (turbine power, moments of inertia, EMF modules and modules of self-admittances of driving points), 2) Sinusoidal component reflects the influence of direct connection between generators, 3) Non-sinusoidal component reflects the influence of the remaining part of the system (asymmetry in the location of two generators relative to all other generators, manifested through the difference in power flows related to the moments of inertia). Reducing the Power System Dynamics Models -Two-Machine Equivalent Identification of dangerous sections from the point of view of a possible violation of the stability of the system under a specific disturbance is carried out by means of a cluster analysis of indicators that estimate the degree of coherence of the movement of generators in the transient [1,2]. Classification of generators can also be made on the base of analysis and comparison of transient curves for various studied conditions.However, this way eliminates the very need for classification (since the detailed calculations necessary to obtain the curves in themselves provide an answer to most of the questions that arise in the study of a given conditions). 
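For the energy (area) indicators introduced above, the integration of the mutual acceleration over the mutual angle can be approximated numerically once the trajectories of a pair of generators have been sampled from a transient simulation. The following minimal sketch uses the trapezoidal rule and is an illustration rather than the authors' implementation.

```python
import numpy as np

def acceleration_deceleration_areas(delta_ij, accel_ij):
    """Approximate the integral of mutual acceleration over the mutual angle,
    split into an accelerating (positive) and a decelerating (negative) part.

    delta_ij : sampled mutual rotor angle of a pair of generators (rad)
    accel_ij : sampled mutual acceleration at the same instants (rad/s^2)
    """
    delta_ij = np.asarray(delta_ij, dtype=float)
    accel_ij = np.asarray(accel_ij, dtype=float)

    # Trapezoidal contribution of each step along the trajectory.
    increments = 0.5 * (accel_ij[:-1] + accel_ij[1:]) * np.diff(delta_ij)

    area_acc = increments[increments > 0.0].sum()    # acceleration energy
    area_dec = -increments[increments < 0.0].sum()   # deceleration energy
    return area_acc, area_dec

# Toy trajectory: acceleration changes sign halfway through the angle swing.
delta = np.linspace(0.0, 1.0, 101)
accel = np.where(delta < 0.5, 2.0, -1.0)
print(acceleration_deceleration_areas(delta, accel))  # roughly (1.0, 0.5)
```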
Hence, it is necessary either to use the most simplified models of the movement of the generators in the transient, or to look for ways of classifying without resorting to numerical integration at all. To organize and visualize the hierarchical structure of the electrical network, dendrograms (classification trees) are used, i.e., a graphical method for presenting the results of hierarchical clustering, which shows the degree of proximity of individual energy objects and clusters and also graphically demonstrates the sequence of their combination or separation. Dendrograms store information about further divisions (or associations) of the electrical network into smaller (larger) islands.

The transient stability of an EPS is conventionally estimated by numerically integrating its model offline for different studied conditions (network topologies, electric modes, and accident scenarios). The simplest classical positional model of EPS dynamics is written for the $i$-th generator in the form

$$\frac{T_{Ji}\,P_{\mathrm{nom},i}}{\omega_{\mathrm{nom}}}\,\frac{d^{2}\delta_{i}}{dt^{2}} = P_{\mathrm{T}i} - P_{i}, \qquad i = 1,\dots,n. \qquad (2)$$

Here $t$ is time, $\delta_i$ is the angle of the generator rotor relative to the synchronous axis, $P_{\mathrm{T}i}$ and $P_i$ are electrical active powers (the power created by the accelerating torque of the turbine and the generator power output to the network), $P_{\mathrm{nom},i}$ is the rated power of the generator, $T_{Ji}$ is the rotor inertia constant, $\omega_{\mathrm{nom}}$ is the rated angular speed of the rotor, and $n$ is the number of generators in the EPS. The power supplied to the network is defined as

$$P_i = E_i^{2}\, y_{ii}\sin\alpha_{ii} + \sum_{\substack{j=1 \\ j\neq i}}^{n} E_i E_j\, y_{ij}\sin(\delta_{ij} - \alpha_{ij}), \qquad (3)$$

where $\delta_{ij} = \delta_i - \delta_j$ is the mutual angle of the rotors of generators $i$ and $j$, $E_i$ and $E_j$ are the transient EMFs of generators $i$ and $j$, and $y$ and $\alpha$ are the modules and the angles (complementary to 90°) of the complex self- and mutual admittances of the electrical network.

Different elements of the EPS react to disturbances in different ways. However, the behaviors (reactions) of some elements are more similar to each other than to others. This creates the possibility of aggregating information in order to reduce computational costs. Reducing the dynamics models of electric power systems means representing each of the coherent groups of generators in aggregated form by one equivalent generator. The conventional task of aggregation is to reduce the dimension of the analyzed network (electromechanical equivalencing). Reducing the EPS dynamics model during the express assessment of transient stability consists in constructing a two-machine equivalent, that is, a two-node network, each node of which is an equivalent generator (it aggregates a group of coherently moving generators).

The most common models of equivalent generators are:
• The center of inertia, characterized by aggregated (summed over the group) parameters: the power, the inertia constant, and the admittances of connections with the nodes of the external network are taken equal to the sums of the corresponding parameters of the generators of the aggregated group, and the equivalent EMF equals the sum of the EMFs of the equivalent branches weighted by their admittances [10].
• A representative generator that reflects the dynamic characteristics of a coherent group to the greatest extent (compared to the other generators). The degree of representativeness is determined by the value of the "participation factor" (the contribution of the movement of the generator to the movement of the entire group). Such approaches are common when using selective modal stability analysis (see, for example, [16]).
• The dynamic equivalent of REI (Radial Equivalent Independent), similar in implementation to the center-of-inertia model but providing equality of the steady-state power losses of the initial and equivalent networks. This is achieved by introducing a temporary fictitious "network of zero power balance" electrically connecting the nodes of these networks with each other [17,18].

By analogy with (2) and (3), the dynamics model of the two-machine (machine 1 and machine 2) EPS equivalent can be written as

$$\frac{T_{J1}\,P_{\mathrm{nom},1}}{\omega_{\mathrm{nom}}}\,\frac{d^{2}\delta_{1}}{dt^{2}} = P_{\mathrm{T}1} - P_{1}, \qquad \frac{T_{J2}\,P_{\mathrm{nom},2}}{\omega_{\mathrm{nom}}}\,\frac{d^{2}\delta_{2}}{dt^{2}} = P_{\mathrm{T}2} - P_{2}, \qquad (4)$$

where

$$P_{1} = E_{1}^{2} y_{11}\sin\alpha_{11} + E_{1}E_{2}y_{12}\sin(\delta_{12} - \alpha_{12}), \qquad P_{2} = E_{2}^{2} y_{22}\sin\alpha_{22} - E_{1}E_{2}y_{12}\sin(\delta_{12} + \alpha_{12}). \qquad (5)$$

Summary of the Results

The highest accuracy of coherence recognition is achieved when using the acceleration and deceleration energies, which require numerical integration of the transient. However, the suitability of these indicators for rapid recognition is limited by the complexity of the calculation. At the same time, when determined from a simplified model of generator motion, these indicators are suitable for identifying groups of coherent generators in cases of complex disturbances. For simple disturbances, the most accurate identification of groups of coherent generators is made by the matrix of absolute values of the initial mutual accelerations of their rotors. If it is necessary to use indicators of mutual similarity or difference not only for generators but for any nodes of the studied network (which is required in a number of tasks), indicators such as mutual admittance modules, synchronizing powers, and structural maxima are acceptable.

To identify groups of coherent generators under complex disturbances, the use of indicators based on numerically determined acceleration and deceleration energies is preferable to indicators of the quality of the transient (since energy-based indicators, firstly, provide a quantitative measure of stability and, secondly, do not require expert assignment of the integration interval).

The identification of groups of coherent generators based on energy indicators determined through analytical integration of the mutual acceleration over the mutual angle of the generators is less reliable than identification based on numerically determined energy indicators. The reason is the assumption of constancy of the influence of the system on the mutual motion of the generators, introduced to make analytical integration possible. In addition, these indicators (as well as the rest determined without numerical integration) do not allow taking into account complex disturbances (sequences of several commutations that do not coincide in time).

Energy-based indicators can be used to identify groups of coherent generators, provided that among the mutual movements of the generators there are no movements that are unstable on oscillations beyond the first. With this limitation, the most universal indicators are the absolute and relative values of the acceleration areas. The main advantage of energy indicators in comparison with visual analysis of curves and with indicators of the quality of the transient is their completely formal (not requiring expert decisions) calculation. The time interval of integration and the complexity of obtaining indicators of the quality of the transient and indicators based on numerically determined energies are comparable.
The study was carried out within the framework of the state assignment project "Theoretical foundations, models and methods for managing the development and operation of intelligent electric power systems," topic FWEU-2021-0001, reg. no. AAAA-A21-121012190027-4, using the resources of the High-Temperature Circuit Multi-Access Research Center (Ministry of Science and Higher Education of the Russian Federation, project no. 13.ЦКП.21.0038).

Fig. 1. An example of the change in time of the angles of the generators in three studied conditions.

Fig. 2. Determination of the energies (areas) of mutual acceleration and deceleration of a pair of generators for a simple disturbance by integrating over the mutual angle. Fig. 2a illustrates the result of analytical integration with the assumption of the constancy of the non-sinusoidal component, and Fig. 2b the result of numerical integration without that assumption.
A Visual Analytic in Deep Learning Approach to Eye Movement for Human-Machine Interaction Based on Inertia Measurement This paper proposes a hand free human-machine interaction (HMI) system to establish a novel way for communication between humans and computers. A regular interaction system based on the computer mouse puts the user’s hand for too long in a pronation posture that increases inflammation in the wrist and hand. Additionally, the need for hand obstructs the use of computers for handicap people. In this paper, we develop a new pointing device for differently able people based on open and closed human eyes with inertia measurement that restrict to deal with carpal tunnel syndrome (CTS) for regular people and enables a novel way to interact with computers for the handicap people. The proposed system carries the human head gesture and eyes to perform the movement and clicking event of the mouse cursor. A combined three-axis accelerometer and gyroscope is used to detect the head gesture and translate it into the position of the mouse cursor on the computer monitor. To perform the left and right-clicking event, the user needs to shut down the left and right eye for a moment while opening another eye. This paper is also carried out the design of a deep learning approach to classify the individual openness and closeness of both human eyes with quite a high accuracy of 95.36% that ensures the comprehensive control over the clicking performance. The use of complementary filter removes the noise and drift from the obtained performance and confirms the smooth and accurate operation of the proposed device. An experimental validation is added to show the effectiveness of the proposed HMI system. The experimental details along with the performance evaluation prove that the proposed HMI system has extensive control over its performance for differently able people. I. INTRODUCTION Human-machine interaction (HMI) develops an assistive communication technology for differently able people with more flexibilities. The HMI creates a phenomenon to design, evaluate and implement interactive computing devices for the interaction. The development of this technology enables a medium of communication between machines and humans by using graphical user interfaces (GUIs). It includes the efficient design of the operating systems, computer graphics, and different programming languages to control The associate editor coordinating the review of this manuscript and approving it for publication was Francesco Mercaldo . the machine for the interaction. On the contrary, other forms of communication include joysticks or tactile screens, graphic design skills, and social and cognitive psychology that control the human factor during the interaction with machines/computers [1]- [4]. A computer interaction based on regular mouse requires a human hand and plane surface for its operation which is not adequate for fixed pointing on the computer screen. To address this problem, the idea of air mouse have been emerged in [5]. These modern computer mice are wireless and use the light sensor to detect its movement. The operation of these mice also depend on the plain and unobstructed surface to effectively poll the user movement. When the user spends too much time on the computer with air mouse, they feel numbness and pain in their wrist which leads to a heightened risk of the CTS [6]. An ergonomic mouse is designed to solve the CTS problem by reducing the ulnar deviation [7]. 
The increased pressure in the carpal tunnel area limits the application of ergonomic mouse. Additionally, the need for hand limits the use of above mouses for the handicap people [8]. However, a hand-free operation of the pointing device or mouse is needed to make the computer-access for the disable people as well as to remove the strain in finger and cuts down the joint pain. Recently, people started to use emerging technology for human-computer interaction (HCI) based on visual information, human voice recognition, brain-computer interfaces (BCI) and head-operated joysticks [9]. A large number of people with hand disabilities pay much attention to this kind of technology for computer interaction because no hand or GUI is required to control the computer mouse. Acoustic based pointing system is one widely used system where the human voice is required for the interaction [10]- [12]. The control unit of this system receives the sound through the microphone and converts it into an electrical signal to perform the desired tasks. Although the use of this technique can relief the use of hands for computer operation, the effect of the external noise complex the proper selection of voice that suffers the accurate pointing. Bionic technology is proposed to overcome the effect of external noise [13]. It is the combination of biology, robotics and computer science, and plays an important role in the pointing system by using the biological signal from the human organs. Brain-computer interfacing (BCI) is one of the widely used bionic techniques that takes the signal from the neuron of the human brain through a number of electrodes. The amplitude of these signals is quite low that requires an external amplification system. Electrooculography (EOG) is another interaction way to detect and track the eye movement using the computer camera where no amplification system is required to make the interaction with computer. The peoples who are unable to use above interfacing devices for HCI, they can use this technique [14], [15]. This technique requires an additional eye-tracking algorithm to perform the interaction. Number of eye movement tracking algorithm has already been developed in the state-of-art. One of widely used technique based on the pattern of eye movement have been proposed in [16]. This method is designed based on the minimum redundancy with maximum relevance feature selection of eye. In this method, the pattern of eye movement is achieved by introducing saccades, blinks and fixations characteristics. This algorithm is able to increase the tracking precision up to 76.1% by using the support vector machine. Web browsing and detecting the single identifiable activity is the limitation of this algorithm. Experts research on the field of eye tracking is divided into signal tracking-based and vision-based method. A signal based eye tracking method is proposed in [17] where neural network with wavelet transform is used to find the potential differences between the cornea and retina to identify the eye movement. This system is able to reduce the tracking error less than 2 • . The need of electrodes and fixation problems are the main drawbacks of this method. A feature-based approach to detect the eye-blink artifact has been proposed for HCI based on EOG [18], [19]. The features of the eye-blink artifact may have maximum absolute value [20], entropy [21], second-order transform [18] and teager-kaiser energy [22]. 
A distance-based artifact detection method for HCI is presented in [23]. It is developed by reconstructing a template of the artifacts, and the distance between templates is measured using dynamic time warping [24] and a support vector machine [25]. The constant threshold value used in these techniques produces detection errors. A vision-based gesture recognition technique has been applied to obtain high accuracy for mouse pointing [26]. This system requires the engagement of the human hand to control a pointing device through 2D and 3D hand gestures. The advantage of this technique is its capability to obtain information from the captured image at long distance; it has high precision but requires a lot of computing time. Training-based methods using active appearance models (AAM) [27] or a histogram of oriented gradients based SVM (HOG-SVM) [28] require less data processing time. However, it is complicated to find an ideal feature dimension, which results in lower accuracy. An electroencephalogram (EEG) technique has been developed to overcome the limitations of the feature- and distance-based approaches [29]. This algorithm is combined with an automatic thresholding algorithm, where the extracted features are processed with digital filters. The individual threshold value is able to minimize the detection error, and the method offers rapid data acquisition. However, noise due to user movement affects the signal, which lowers the signal-to-noise ratio and decreases the accuracy level.

To address the above problems, in this paper we propose a novel hands-free human-machine interaction system for communication between the computer and people, with a high level of accuracy. The main aim of this paper is to develop a new pointing device that enables differently able people to interact with computers. The main contributions of this paper are as follows: (i) Development of a novel hands-free human-machine interaction system that combines left and right eye state classification for mouse clicking with head gestures for mouse movement on the computer screen. (ii) Design of a deep learning technique for enhancing the classification performance on the open and closed eye images obtained from a general-purpose webcam. (iii) Integration of a complementary filter (a combination of high- and low-pass filters) with an inertial measurement unit (IMU) to remove noise and drift from the sensor readings, which results in accurate pointing on the computer screen. The remainder of the paper is organized as follows: Section II describes the filter implementation and mouse pointer control. Section III presents the proposed deep learning approach for eye state detection. Section IV describes the evaluation of usability performance on the computer. Section V concludes the paper.

II. MOUSE POINTER CONTROL

The basic block diagram of the proposed system is illustrated in Fig. 1. The system performs the fundamental mouse events, i.e. left click, right click and mouse pointer movement on the screen, without the intervention of the human hand. A radio frequency (RF) transmitter transfers the head motion activity to the computer to locate the cursor on the screen. The movement of the cursor follows the user's head motion, which is detected by the IMU (a combination of accelerometer and gyroscope sensors).
The information obtained from the IMU is processed with a sensor fusion technique to minimize the uncertainty that may affect system performance. Hence, a complementary-filter-based sensor fusion technique is used to remove the problems that appear in the IMU readings. The advantages of the proposed system are its capability to reduce the burden of holding an air mouse for its operation, to decrease wrist pain and to ensure reliable communication between the computer and differently able people.

In this paper, the position of the mouse pointer is controlled by the movement of the human head. When the user tilts the head in the left-right or forward-backward direction, the system moves the mouse pointer. Accordingly, to move the pointer in the horizontal direction, the user tilts the head to the right (for moving right) or to the left (for moving left). Similarly, to move the pointer in the vertical direction, the user tilts the head forward (for moving up) or backward (for moving down). An embedded controller is used in the proposed design, which takes the signal from the IMU and performs the required mouse pointer action. A precise reading from the IMU is essential for better placement of the mouse cursor on the computer screen. The performance of the IMU may be affected by external noise, which motivates the design of a proper filtering system. The detailed design of the complementary filter is discussed in the following section.

A. FILTER IMPLEMENTATION

The proposed system determines the user's current head tilt angle using the combination of a gyro sensor and an accelerometer. The signals generated by the gyro sensor and accelerometer have to be translated into pixel information on the computer monitor. The gyroscope sensor measures the angular velocity of the head tilt; its output is linearly proportional to the rate of change of rotation in degrees per second along the corresponding axis. The angular velocity obtained from the gyroscope reading along the x- and y-axis can be represented as

x_gyro = (x_gyro_ADC − x_axis_offset) / x_axis_scale,  y_gyro = (y_gyro_ADC − y_axis_offset) / y_axis_scale,   (1)

where x_gyro and y_gyro are the angular velocities along the x- and y-axis, x_gyro_ADC and y_gyro_ADC are the raw gyroscope data along the x- and y-axis, x_axis_offset and y_axis_offset are the gyroscope readings when lying stationary along the x- and y-axis, and x_axis_scale and y_axis_scale are the conversion factors along the x- and y-axis. The desired head tilt angle is measured by integrating eq. (1), which represents the rotation of the head along the x- and y-axis, respectively. The resulting integration of the gyroscope reading can be represented as

x_gyro_angle = x_gyro_angle + x_gyro Δt,  y_gyro_angle = y_gyro_angle + y_gyro Δt,   (2)

where x_gyro_angle and y_gyro_angle are initialized with the initial angle of the gyroscope. To obtain the change in angle, the gyroscope records the previous angle and adds the change in angle to that starting point. The angle measured from the gyroscope data is shown in Fig. 2(a). From Fig. 2(a), it is seen that the angle measured with the gyroscope has a tendency to drift due to noise in the device and its inherent imperfection. An accelerometer is used in the proposed design to determine small mouse movements, where the user tilts the head by less than 30 degrees in the left-right and forward-backward directions. The small-angle approximation method is adopted to obtain the desired angle from the accelerometer output data.
Under the small-angle approximation, the angle along the x- and y-axis is obtained from the offset-corrected accelerometer reading, where x_axis and y_axis denote the measured x- and y-axis angles from the accelerometer, x_axis_ADC and y_axis_ADC represent the raw accelerometer readings along the x- and y-axis, and x_axis_offset and y_axis_offset are the accelerometer readings in the lying-flat condition. The accelerometer measures all the forces acting on it; however, it is prone to disturbances in the measurement caused by small forces. The unfiltered x-axis angle estimated from the accelerometer is shown in Fig. 2(b).

Both the accelerometer and the gyro sensor are sensitive to noise at certain frequencies. The accelerometer needs a low-pass filter because it is sensitive to the high-frequency noise originating from system vibration. The gyroscope is also sensitive to high-frequency noise, which turns into a low-frequency drift after integration and causes the data to grow linearly over time; thus, a high-pass filter is required. The gyroscope branch of the filter produces the filtered gyro angles X̄_gyro-angle and Ȳ_gyro-angle along the x- and y-axis, and the accelerometer branch produces the filtered accelerometer angles X̄_acc-angle and Ȳ_acc-angle, with α and β as the smoothing coefficients. The values of α and β are selected as 0.98 and 0.02, respectively, to remove the drift from the signal obtained from the IMU. The resulting complementary filter for both the x- and y-axis can be represented as

X̄_angle = α (X̄_angle + x_gyro Δt) + β X̄_acc-angle,  Ȳ_angle = α (Ȳ_angle + y_gyro Δt) + β Ȳ_acc-angle.

The resulting filtered angle for the x-axis is shown in Fig. 2(c). It is observed that the angle measured with the complementary filter is more accurate and yields a lower error than the gyroscope-only or accelerometer-only angle. It is also seen that the final output has no signal coupling. The filtered angle for the y-axis shows similar behaviour to that of the x-axis.

B. MOUSE CURSOR MOVEMENT ON THE SCREEN

The proposed HMI system adopts a relative coordinate system in which the current locus of the mouse cursor is always at (0,0). For a display of 1280 × 1024 resolution, the mouse cursor position ranges from −1280 to 1280 along the x-axis and from −1024 to 1024 along the y-axis. A general-purpose mouse uses dots per inch (DPI) as a measure of sensitivity, while the head-movement-controlled mouse uses dots per degree (DPD), which expresses how far the mouse pointer moves on the screen for a change of one degree. As the human head has a limited bending angle, the selection of the DPD strongly affects the performance; the DPD for the system is chosen as 45 to obtain promising performance. The head gesture involves both left-right and up-down movements of the user's head. The up-down head movement moves the mouse cursor along the y-axis, while the left-right movement moves it along the x-axis. The head tilt angles along both axes are mapped appropriately to the cursor coordinates on the computer screen. When the user's head is in the rest position, noise from tiny movements makes the mouse pointer unstable. For this reason, a neutral zone of 5° on each side of the neutral head position is selected along both axes, within which head movement is neglected.
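To make these steps concrete, the following is a minimal Python sketch (not the authors' code) of the complementary filter and the head-tilt-to-cursor mapping. The sampling period and the helper structure are illustrative assumptions; only the smoothing coefficients (0.98/0.02), the 45 dots-per-degree sensitivity, the 5° neutral zone and the 1280 × 1024 display range come from the description above.

```python
# Minimal sketch: complementary filter fusing gyro and accelerometer angles,
# followed by the head-tilt-to-cursor mapping (assumed helper structure).
ALPHA, BETA = 0.98, 0.02       # smoothing coefficients from the paper
DPD = 45                       # dots per degree chosen in the paper
NEUTRAL_ZONE_DEG = 5           # head tilt below this is ignored
X_RANGE, Y_RANGE = 1280, 1024  # display resolution assumed in the paper

def complementary_filter(angle_prev, gyro_rate_dps, accel_angle_deg, dt):
    """Fuse the integrated gyro rate (high-pass branch) with the
    accelerometer angle (low-pass branch)."""
    return ALPHA * (angle_prev + gyro_rate_dps * dt) + BETA * accel_angle_deg

def tilt_to_cursor_delta(angle_deg):
    """Map a filtered head-tilt angle to a relative cursor displacement."""
    if abs(angle_deg) < NEUTRAL_ZONE_DEG:  # neutral zone: ignore tiny movements
        return 0
    return int(angle_deg * DPD)

def update_cursor(state, gyro_xy_dps, accel_xy_deg, dt=0.01):
    """state = (filtered_x_angle, filtered_y_angle); returns the new state and
    the relative cursor displacement (dx, dy) in pixels."""
    fx = complementary_filter(state[0], gyro_xy_dps[0], accel_xy_deg[0], dt)
    fy = complementary_filter(state[1], gyro_xy_dps[1], accel_xy_deg[1], dt)
    dx = max(-X_RANGE, min(X_RANGE, tilt_to_cursor_delta(fx)))
    dy = max(-Y_RANGE, min(Y_RANGE, tilt_to_cursor_delta(fy)))
    return (fx, fy), (dx, dy)
```

In a real implementation, a function such as the hypothetical update_cursor would be called in the embedded control loop for each new IMU sample, and the returned (dx, dy) would be issued to the operating system as a relative mouse move.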
Fig. 3 shows the algorithm for moving the mouse pointer on the screen. At the first stage, the system reads the raw gyroscope data and integrates it over time; it then reads the accelerometer data. As both the accelerometer and the gyroscope are prone to noise, the final head tilt angle is calculated by applying the complementary filtering technique, as illustrated in steps 4 to 6. The rotational information is mapped to the display range from the minimum horizontal and vertical pixels (P_h−min, P_v−min) to the maximum horizontal and vertical pixels (P_h−max, P_v−max). If a difference Δx between the previous and current pixel value is detected, the pointer moves |Δx| pixels in the direction of Δx (the positive axis for +Δx and the negative axis for −Δx).

III. PROPOSED DEEP LEARNING ARCHITECTURE FOR MOUSE CLICKING EVENT

This section presents the details of the proposed approach for the left- and right-clicking events of the proposed pointing device. The flow chart of the mouse clicking event with the proposed eye state classification approach is shown in Fig. 4. The process begins with capturing the user's image with a general-purpose webcam installed on the computer. The user's eye image of size 64 × 64 is then extracted on the basis of facial landmark detection, as shown in Fig. 4. The extracted grayscale eye image is fed into the proposed eye state classification model and, based on its prediction, the system performs the left- or right-clicking event. Fig. 5 shows the algorithm describing the details of the eye-state-controlled mouse clicking event. In this algorithm, steps 5 to 9 outline the procedure of the proposed eye state classification model. Once the eye state has been classified by performing steps 1 to 10, the system waits for a time T_threshold, which can be adjusted by the user. This time delay separates a regular eye blink from a command blink: if the user keeps the left or right eye closed for longer than the threshold, the corresponding control action is performed.

The proposed eye state classification approach comprises repeated sets of neurons that are applied over the space of the user's eye image. These sets of neurons are applied over all patches of the image and are referred to as 2-D convolutional kernels; in machine learning, a kernel is a matrix used to extract the most significant features from every subregion of an image. The working process of the proposed approach is similar to that of a traditional convolutional neural network (CNN). The architecture classifies the image through the sequential operation of convolution and pooling layers and transfers the activation values from one volume to another by means of a differentiable function. The first layer consists of convolutional kernels whose outputs pass through the rectified linear unit (ReLU), while the pooling layer reduces the amount of computation time and optimizes the parameters to acquire better accuracy. The proposed structure uses spatial invariance to avoid overfitting of the result, which differentiates the traditional CNN from the proposed architecture. The model used in the system has three convolution layers accompanied by three max-pooling layers, as summarized in Table 1.

A. STRUCTURE OF PROPOSED DEEP LEARNING ARCHITECTURE

The proposed structure of the deep learning algorithm for the eye state classification scheme is presented in Fig. 6.
The architectural elements, with regard to the convolutional and max-pooling layers, filters, filter sizes, and nodes, are similar to those of a CNN model such as AlexNet [30], except for the fully connected (FC) layers. To keep the delay during HCI low, fine tuning of the neural network output is not performed. The proposed deep learning model takes an image of 64 × 64 pixels as input and classifies the left or right eye as open or closed. The resized image is used as the final input for this model. In Table 1, conv2D-1 to conv2D-3 are the convolutional layers. The max-pooling layers, or sub-sampling layers, are the layers that select the maximum value in one of the feature abstraction stages. The information is passed through the three convolutional layers and the three max-pooling layers. The three FC layers after the convolutional and max-pooling stack perform computations similar to the inner products in a neural network. By progressing through each of the aforementioned layers, the proposed architecture extracts the features of left/right and open/closed eyes. The generated features are passed through the softmax layer to classify the left and right eye as open or closed.

B. FEATURE EXTRACTION VIA CONVOLUTIONAL LAYER

The process of feature extraction using the convolutional layers is discussed in this section. In the convolutional layers, feature extraction is performed by applying a 2D convolution operation to the input eye image. The application of the convolution to the input image is governed by the stride in the horizontal and vertical directions, the range over which the filter moves and the dimensions of the resulting image. Therefore, the filter size, number of filters, padding operation and stride values are the main factors to be considered in the convolutional layer. From Table 1, the conv2D-1 layer has 32 filters of size 64 × 64 and strides by 1 pixel unit in the horizontal and vertical directions; one pixel unit of padding is applied in the horizontal and vertical directions. A max-pooling filter of size 32 × 32 with a stride of 1 pixel unit explores the horizontal and vertical directions. The remaining conv2D-2 and conv2D-3 layers perform the operation with 64 and 128 filters of size 32 × 32 and 16 × 16, respectively, with a padding of 1 pixel and a stride of 1 pixel unit. The pool sizes of the second and third max-pooling layers are selected as 16 × 16 and 8 × 8 with a padding of 1 pixel unit. The ReLU and softmax activation functions are used in the convolutional layers and the output layer, respectively. The role of the ReLU activation function is to transform the non-linearly separable data into a representation that can be separated before being fed to the output layer. The proposed architecture successively applies the pooling operations and convolution filters to the input data and creates a hierarchy of layers. The outputs of those layers are increasingly complex feature vectors compared with the input data, which simplifies the data and helps to obtain high accuracy. In the proposed methodology, every pixel is given as an n1 × 1 input to the input layer, where n1 defines the number of bands in the eye image. The hidden convolution layer filters the n1 × 1 input vector through t kernels of size k1 × 1. The number of nodes in the convolution layer is t × n2 × 1, where n2 = n1 − k1. The activation map of the convolutional layer can be represented as

y_j(r) = Σ_i x_i(r) ∗ k_ij(r) + b_j(r),

where x_i(r) is the i-th input activation map, y_j(r) is the j-th output activation map, b_j(r) is the bias of the j-th output map, and k_ij(r) is the convolution kernel between the i-th input map and the j-th output map.
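As a reference point, the following is a minimal Keras-style sketch of a network with the overall shape described above: three convolution/max-pooling stages with 32, 64 and 128 filters, three fully connected layers and a softmax output over a 64 × 64 grayscale input. The 3 × 3 kernels, 2 × 2 pool windows, FC widths and the four-way output (left/right, open/closed) are assumptions made for illustration; this is not the authors' exact configuration.

```python
# Minimal sketch (assumed hyperparameters, not the authors' exact model).
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # assumed: left-open, left-closed, right-open, right-closed

model = models.Sequential([
    layers.Conv2D(32, (3, 3), padding="same", activation="relu",
                  input_shape=(64, 64, 1)),           # 64x64 grayscale eye image
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),              # three FC layers, sizes assumed
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),   # softmax classifier
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",         # cost function used in the paper
              metrics=["accuracy"])
model.summary()
```

The model is compiled with the Adam optimizer and the categorical cross-entropy loss, matching the training setup reported later in the paper.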
C. MAX POOLING LAYER

The pooling layer decreases the size of the convolutional layers' output. When passing through multiple pooling layers, a large image is scaled down while keeping the features required for recognition. In the max-pooling layer, the maximum value is stored. The max-pooling layer can be defined as

ỹ_ijk = max over the (s × s) region R_jk of y_i(p, q),

where ỹ_ijk denotes a neuron in the i-th output activation map, computed over a non-overlapping local region of size (s × s) in the i-th input map.

D. FULLY CONNECTED LAYER

In the fully connected layers, every neuron in layer l is fully connected to the outputs of all neurons in layer l − 1, and each connection contributes to the calculation of a weighted sum. The output y_j^(l) of neuron j in a fully connected layer l is

y_j^(l) = φ( Σ_{i=1..N^(l−1)} w^(l)(i, j) y_i^(l−1) + b_j^(l) ),

where N^(l−1) defines the number of neurons in the previous layer (l − 1), w^(l)(i, j) is the weight of the connection from neuron i in layer (l − 1) to neuron j in layer l, b_j^(l) is the bias of neuron j in layer l, and φ(x) is the activation function.

E. SOFTMAX CLASSIFIER

The softmax classifier handles the multi-class classification at the fully connected stage of the system. Assume that there are K classes and n labeled training samples. For each test input, the softmax classifier creates a K-dimensional vector whose elements sum to 1. Each element of the output vector represents the estimated probability of the corresponding class label as follows:

P(y = k | x; W) = exp(w_k^T x) / Σ_{j=1..K} exp(w_j^T x),   (9)

where W = w_1, w_2, w_3, . . . , w_K are the parameters, which are learned by the back-propagation algorithm. The cross-entropy loss function is used as the cost function for the softmax classifier and can be calculated as

J(W) = −(1/N) Σ_{n=1..N} Σ_{k=1..K} 1{y_n = k} log P(y_n = k | x_n; W),

with N the number of data points in the training set. Then, the gradient descent method is applied to find the minimum of J(W) by computing its gradient ∇_W J(W), and finally the parameters are updated as

W ← W − η ∇_W J(W),

where η is the learning rate.
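The softmax probabilities, cross-entropy cost and gradient-descent update above can be illustrated with a short, self-contained NumPy sketch; the toy data, dimensions and learning rate are invented for illustration and are not the paper's values.

```python
# Toy illustration of the softmax classifier, cross-entropy loss and
# gradient-descent update described above (illustrative values only).
import numpy as np

def softmax(scores):
    """Row-wise softmax: K-dimensional probabilities that sum to 1."""
    e = np.exp(scores - scores.max(axis=1, keepdims=True))  # numerical stability
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels_onehot):
    """J(W): mean negative log-likelihood over N training points."""
    n = probs.shape[0]
    return -np.sum(labels_onehot * np.log(probs + 1e-12)) / n

# Tiny example: N = 4 feature vectors of dimension 5, K = 2 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))
Y = np.eye(2)[[0, 1, 1, 0]]             # one-hot labels
W = np.zeros((5, 2))
eta = 0.1                               # learning rate

for _ in range(100):
    P = softmax(X @ W)                  # estimated class probabilities
    grad = X.T @ (P - Y) / X.shape[0]   # gradient of J(W) with respect to W
    W -= eta * grad                     # update: W <- W - eta * grad
print(cross_entropy(softmax(X @ W), Y))
```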
F. TRAINING OF THE PROPOSED DEEP LEARNING ARCHITECTURE

The neural network imitates the operation of the neurons in the human brain. The term weight denotes the strength of the electrical signal between connected neurons: the weight of a specific connection is multiplied by the value given as input to the neuron, and the sum of all weighted values associated with the next layer serves as the input to the activation function. In this kind of network, the interaction between the neurons affects the output neurons when new data are inserted into the network, and the weights need to be re-optimized to remain consistent with the previous inputs. The back-propagation algorithm is applied to optimize the neuron weights in the proposed architecture. The popular adaptive moment estimation (Adam) [31] optimizer, an extension of the conventional stochastic gradient descent (SGD) method [32], is adopted to train the proposed architecture. Conventional SGD computes updates on a random selection of data examples, which is inefficient, whereas the Adam optimizer computes individual adaptive learning rates for the different parameters. At the beginning of the training procedure, the input data are labeled with the correct output class in advance. Each labeled data sample passes through the neural network, and the generated output can either match or differ from the actual output. If there is a difference between the outputs, the learning rate is multiplied by a weighting term that reflects the difference, and the result is applied when the new weight values are updated. A Gaussian distribution with a standard deviation of 0.001 and a mean of 0 randomly initializes the weights used in the FC layers. The optimal parameters for the Adam optimizer are determined by conducting several experiments to obtain the lowest loss value and the highest model accuracy (Fig. 7).

To check the robustness of the proposed approach, the system uses a combined database for training and testing the proposed model. Because of the small number of open- and closed-eye images in DB1, abundant data are needed to ascertain the optimal values of the numerous coefficients; training and testing the model with such a diminutive amount of data produces overfitting on the training set in a traditional CNN structure. The proposed architecture therefore uses a data augmentation technique to build the purpose-built database DB2. Using image rotation, horizontal flipping, image scaling and image translation, DB2 is created with 542,752 images of 64 × 64 pixels. Each image is rotated by 10 degrees three times in both the clockwise and anti-clockwise directions, giving a data multiplication factor of 7, and the image translation and scaling techniques are applied 16 times. All the translated and rotated images are flipped horizontally, which creates a total of 224 images from a single image. The data augmentation techniques applied to an original image are presented in Fig. 8; their use avoids the need for a large amount of original data. The proposed model is trained for 100 epochs with the Adam optimization algorithm, where the categorical cross-entropy cost function is used to obtain the optimum result. After the training phase, the proposed model shows promising performance on the augmented dataset. The overall classification accuracy and loss curves in Fig. 9(a) and Fig. 9(b) show that the accuracy of the proposed approach is 95.36% and the loss is 0.2.
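The augmentation scheme described above (seven rotations at 0°, ±10°, ±20° and ±30°, sixteen translation/scaling variants and a horizontal flip, giving 224 images per original) can be sketched as follows; the specific shift offsets are illustrative assumptions, and SciPy is used here simply as one convenient way to perform the geometric operations.

```python
# Sketch of the augmentation pipeline described above: 7 rotations x
# 16 translation/scaling variants x 2 flips = 224 images per original.
# The shift values are illustrative assumptions.
import numpy as np
from scipy import ndimage

def augment(eye_img):
    """eye_img: 2-D numpy array (64x64 grayscale). Returns a list of variants."""
    rotations = [ndimage.rotate(eye_img, angle, reshape=False, mode="nearest")
                 for angle in (-30, -20, -10, 0, 10, 20, 30)]              # factor 7

    shifts = [(dx, dy) for dx in (-3, -1, 1, 3) for dy in (-3, -1, 1, 3)]  # 16 variants
    variants = []
    for rot in rotations:
        for dx, dy in shifts:
            shifted = ndimage.shift(rot, (dy, dx), mode="nearest")         # translation
            # (a zoom/scaling step could also be applied at this point)
            variants.append(shifted)
            variants.append(np.fliplr(shifted))                            # horizontal flip
    return variants

img = np.random.rand(64, 64)   # stand-in for an extracted eye image
print(len(augment(img)))       # -> 224
```

Running the sketch on a 64 × 64 array reproduces the multiplication factor of 224 quoted above.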
H. COMPARATIVE ANALYSIS WITH THE PROPOSED METHOD AND OTHERS

In this study, the testing accuracy of the proposed classification model is compared with that of other models. For the comparison, the combined source separation (SS) and pattern recognition algorithm (PRA), vertical projection, and the HOG-SVM technique are considered. In the case of HOG-SVM-based eye state classification, the HOG extracts the features from the input image and the SVM classifies open and closed eyes using a radial basis function (RBF). The SS- and PRA-based technique, on the other hand, uses EEG signal data extracted from the eye region; the overall accuracy of that system is noted as 89%. The quantitative measurements regarding the overall accuracy and the input data type are presented in Table 2. It is observed that the proposed architecture provides higher accuracy than the other methods. The comparative advantages of the proposed method over other existing methods are reported in Table 3. In the case of classification, the proposed architecture provides more reliable performance than other existing networks.

IV. EXPERIMENTAL RESULTS

This section presents the experimental results of applying the proposed architecture in the proposed pointing device and compares its usability with that of the traditional computer mouse. The experimental setup for the performance measurement of the proposed HMI system is shown in Fig. 10. The system consists of a user, a general-purpose camera, the IMU and a multi-directional position selection channel. The camera takes the user's image, which is passed through the proposed architecture; the output of the proposed structure defines the state of the user's eyes and performs the desired task based on the multi-directional position channel. The detailed performance of the proposed system is discussed in the following section.

A. METHOD

A total of 10 participants with different abilities, 8 male and 2 female, took part in this experiment. All of the participants are from the computer science and engineering department, and their ages range from 22 to 24. To evaluate the performance, the most widely used ISO 9241-9 [37] standard multi-directional position selection task is adopted, as in Fig. 11. In the task, the user selects targets arranged in a circular pattern, and the sequence of selection is shown in Fig. 11. No time limit is given for completing the task, and there are no penalties for mis-selection. The evaluation process is divided into three sets to appraise whether fatigue is present, and the sets are further broken into rounds to detect increasing or decreasing performance. For each round, the target selection time (MT), target distance (D), target width (W) and the total number of target hits are recorded for calculating the index of difficulty (ID) and the index of performance (IP) as measures of HCI performance, using the standard relations

ID = log2(D/W + 1),  IP = ID / MT.
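A small sketch of this computation (the standard Fitts'-law style metrics used in ISO 9241-9 evaluations) is given below; the trial values are invented for illustration.

```python
# Fitts'-law style metrics for ISO 9241-9 multi-directional tasks:
# ID = log2(D/W + 1) in bits, IP = ID / MT in bits per second.
import math

def index_of_difficulty(distance, width):
    return math.log2(distance / width + 1.0)

def index_of_performance(distance, width, movement_time_s):
    return index_of_difficulty(distance, width) / movement_time_s

# Illustrative trial: 400-pixel target distance, 40-pixel target width,
# selected in 1.8 seconds.
ID = index_of_difficulty(400, 40)          # about 3.46 bits
IP = index_of_performance(400, 40, 1.8)    # about 1.92 bits/s
print(round(ID, 2), round(IP, 2))
```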
The result evaluation is performed by comparing the usability of the proposed pointing device with an A4 Tech OP-620D optical mouse with a USB cable. The IP achieved by each participant for both pointing devices is shown in Fig. 12(a). From the results, it is observed that the participants perform better with the traditional computer mouse than with the gesture- and eye-controlled mouse. Since all the participants are regular mouse users, the gap in performance is probably due to familiarity with a pointing device, which influences a person's ability to use it efficiently. In the case of successful target hitting, the number of successful attempts is lower for the proposed pointing device than for the regular mouse. The number of unsuccessful selections per 100 targets is illustrated in Fig. 12(b) with a box-and-whisker chart. The unsuccessful attempts with the regular computer mouse have a mean value of 7.77, and the highest number of unsuccessful attempts for that mouse is 15. The mean number of unsuccessful attempts for the proposed pointing device is 15.55, while the lowest number of unsuccessful attempts is 10, which is below the upper quartile of the regular mouse. Another result, in Fig. 13, shows that the participants' performance improves over the successive rounds, while the performance with the regular computer mouse remains almost constant. This clearly indicates that the participants are becoming familiar with the new pointing device.

The improvement in user performance suggests that, with practice, participants using the proposed pointing device can approach the performance achieved with the traditional mouse. The adaptability of the pointing device depends upon the user's familiarity and convenience, and the user's performance can be improved by training. Importantly, this system provides computer access for differently able people. A general user who is familiar with other pointing devices may find it difficult at the beginning, but continuous use of the device makes it comfortable to use.

V. CONCLUSION

This paper presents a novel human-machine interaction system for controlling a pointer on the computer screen based on open and closed eyes. An accelerometer and a gyroscope sensor track the user's head gesture and relocate the cursor on the screen. The clicking events are performed by detecting the user's eye states. The detection of both eyes' states is accomplished with a deep learning classifier, which provides 95.36% accuracy for the clicking event on the computer screen. The results obtained in this paper are compared with other methods, which confirms comprehensive control over the classification performance. The classifier performance is validated through HCI, where the output of the classifier provides the input that controls the clicking task of the proposed device. The paper also reports an experimental validation of the usability of the pointing device in comparison with a regular computer mouse, following the ISO standard. The comparison results show that the proposed system is promising and can perform better with a trained user and over time of use. Future work will enhance the proposed system with more functionality, such as scrolling or double clicking.

SANJAY DEY is currently pursuing a degree with the Computer Science and Engineering Department, Rajshahi University of Engineering & Technology. His research interests are mainly in AI systems, machine learning, deep learning, data mining, and computer vision. YEAHIA SARKER is currently pursuing a degree with the Department of Mechatronics Engineering, Rajshahi University of Engineering & Technology (RUET), Rajshahi, Bangladesh. He is passionate about innovative research in the fields of data science, human-computer interaction, and data processing to solve challenging problems. His current research focuses on hyperspectral image classification with deep learning methods. SUBRATA K. SARKER was born in Bangladesh in 1996. He received the B.Sc. degree in mechatronics engineering from the Rajshahi University of Engineering & Technology (RUET), Rajshahi, Bangladesh. He is currently working as a Lecturer with the Electrical and Electronic Engineering Department, Varendra University, Rajshahi. His research interests include control theory and applications, robust control of electro-mechanical systems, robotics, mechatronics systems, and power system control. FAISAL R. BADAL was born in Bangladesh. He received the B.Sc. degree in mechatronics engineering from the Rajshahi University of Engineering & Technology (RUET), Rajshahi, Bangladesh. He is currently working as a Lecturer at RUET. His research interests include control theory and applications, and power system control. SAJAL K. DAS received the Doctor of Philosophy (Ph.D.) degree in electrical engineering from the University of New South Wales, Australia, in 2014.
In May 2014, he was appointed as a Research Engineer with the National University of Singapore (NUS), Singapore. In January 2015, he joined the Department of Electrical and Electronic Engineering, AIUB as an Assistant Professor. He continued his work at AIUB until he joined the Department of Mechatronics Engineering, Rajshahi University of Engineering & Technology (RUET), in September 2015 as a Lecturer. He is currently working as an Assistant Professor with RUET. His research interests include control theory and applications, mechatronics system control, robotics, and power system control.
Investigating the Relationship Among English Language Learning Strategies, Language Achievement, and Attitude

The main objective of the study was to ascertain whether English as a Foreign Language learners with various levels of English language achievement differ significantly in applying foreign language learning strategies. We also aimed to explore strategy use frequency in connection with attitude toward English language learning. Data were collected from 1,653 lower secondary students in Hungary through a revised version of the previously developed online Self-Regulated Foreign Language Learning Strategy Questionnaire (SRFLLSQ) based on Oxford's Strategic Self-Regulation (S2R) Model. The findings point to statistically significant differences in the frequency of English language strategy use between more and less proficient learners. Quantitative analyses also showed that, in spite of the students' stated low or moderate levels of strategy use, strategy use turned out to be a statistically significant predictor of foreign language attitude and language achievement. The results draw attention to the relevance of strategy research in foreign language teaching and encourage strategy teaching within language instruction.

INTRODUCTION

Foreign language learning requires many underlying skills and techniques. Learners have to master a number of complex linguistic, personal, cultural and social skills and competences and be aware of effective techniques and strategies to help them cope with various challenges during the learning process. The frequent use of learning strategies can help learners to become more competent and effective language users in the classroom and inspire them to achieve higher levels of mastery in the target foreign language (Wong and Nunan, 2011; Oxford, 2016). Since the mid-1970s, an immense amount of learning strategy research has attempted to establish the concept and identify strategies that help learners to become more effective language learners (Oxford, 1990; Cohen, 1998). It is a widely researched and highly debated area even today (Thomas and Rose, 2019; Thomas et al., 2021). The most well-known and widely used taxonomy of language learning strategies (LLS) was developed by Oxford (1990, 2011, 2016). In her recently reconsidered Strategic Self-Regulation (S2R) Model, based on Vygotsky's (1978) sociocultural theory of self-regulated learning (SRL) and Zimmerman's three-phase model (Zimmerman, 2000; Zimmerman and Schunk, 2011), she identified four main strategy categories: cognitive, affective, motivational, and social, each of them guided by the master category of "meta-strategies." These meta-strategies are metacognitive, meta-affective, metamotivational, and metasocial strategies, respectively (Oxford, 2016). Oxford also developed a measurement tool (Strategy Inventory for Language Learning, SILL) for investigating LLS use, which is employed worldwide; however, it is based on her original conceptualization. Nevertheless, it is essential to relate the latest pedagogical theories to language teaching practice. Self-regulation theory, which was the basis for Oxford's new taxonomy, has been dominant since the beginning of this century. It is thus crucial to develop state-of-the-art measurement tools which can be used in the classroom by language teachers.
In previous research, we have developed and validated a questionnaire based on Oxford's S 2 R Model (SRFLLSQ; Habók and Magyar, 2018b). To obtain a more comprehensive view of the role and possible effect of language learning strategies on certain other factors, such as attitude, motivation, and self-efficacy, it is essential to conduct further research. In this study, we aimed to examine LLS in relation to other crucial factors of language learning; we have investigated the relationships among the application of English language learning strategies, language achievement, and attitude toward English among lower secondary students in Hungary. The Concept of Language Learning Strategies Language learning strategies have been a research focus since the mid-1970s, as strategic language learning is a key to successfully acquiring a foreign language (Rubin, 1975). A number of definitions of LLS have emerged, with one of the most influential having proved to be that of Rebecca Oxford, who not only established a conceptualization, but also conducted empirical research. In her content-analytic study, Oxford involved 33 distinct definitions and interpretations of the term LLS and thus determine it as follows: L2 learning strategies are complex, dynamic thoughts, and actions, selected and used by learners with some degree of consciousness in specific contexts in order to regulate multiple aspects of themselves (such as cognitive, emotional, and social) for the purpose of (a) accomplishing language tasks; (b) improving language performance or use; and/or (c) enhancing long-term proficiency. Strategies are mentally guided but may also have physical and therefore observable manifestations. Learners often use strategies flexibly and creatively; combine them in various ways, such as strategy clusters or strategy chains; and orchestrate them to meet learning needs. Strategies are teachable. Learners in their contexts decide which strategies to use. Appropriateness of strategies depends on multiple personal and contextual factors (Oxford, 2016, p. 48). Strategic language learners select their LLS according to their personal preferences, motivational intentions, and situational circumstances. Therefore, it is especially difficult to identify a system for these strategies. This is one of the reasons why an extremely large number of conceptualizations and debates have emerged (Thomas and Rose, 2019;Thomas et al., 2021). Thomas et al. (2021) have pointed out that with the emphasis on selfregulation, the field of strategy research has moved away from formal educational settings toward learner autonomy. They argue that this is a hazardous trend because definitions of LLS minimize teachers' role and classroom contexts that can also be an influencing factor in strategic behavior among students. Thomas and Rose (2019) have therefore suggested a separation of LLS from self-regulation and introduced the Regulated Language Learning Strategies Continuum to make it clear that self-regulation can be conceptually separated in defining LLS. By interpreting LLS as being both other-and self-regulated, Dörnyei and Skehan (2003) argue that strategy use cannot be regarded as either emotional or cognitive or even behavioral, thus opening up another debated area in the field. The classification of LLS is another key area of argument. 
Oxford's original classification of six major fields (memory, cognitive, metacognitive, affective, compensation, and social strategies) was recently reconsidered and restructured on the basis of self-regulation theories. Her Strategic Self-Regulation (S 2 R) Model was developed based on Vygotsky's (1978) sociocultural theory of self-regulated learning (SRL). In her concept, she identified four main fields-cognitive, affective, motivational, and social strategies-each of them directed by a "master category of meta-strategies. " These meta-strategies are metacognitive, meta-affective, metamotivational, and metasocial strategies (Oxford, 2016). Her taxonomy was again open to a number of debates as some theorists (Dörnyei, 2005;Hajar, 2019) argued that success in language learning cannot be assessed through the frequency of strategy use alone. Research on Language Learning Strategies Despite the debates, LLS researchers have been devoted to conducting quantitative research on LLS use and its connection with other individual factors, such as gender, learning style, motivation, attitude, and proficiency (e.g., Radwan, 2011;Alhaysony, 2017;Magyar, 2018a, 2019). The most widespread measurement tool for assessing L2 learners' strategy use is Oxford's Strategy Inventory for Language Learning (SILL; Oxford, 1990). This questionnaire has been translated into numerous languages and adapted for a vast number of cultures around the world. It is based on her original taxonomy and employs her original six strategy fields. Based on her renewed taxonomy, a number of reconsidered measurement tools have been developed since then, which have approached effective language learning from different perspectives (Wang et al., 2013;Salehi and Jafari, 2015;Božinović and Sindik, 2017;Köksal and Dündar, 2017;Habók and Magyar, 2018b;An et al., 2021). One major area for researchers has been to find out what set of strategies foreign language learners rely on the most (Platsidou and Sipitanou, 2015;Alhaysony, 2017;Charoento, 2017;Dawadi, 2017;Habók and Magyar, 2018a,b, 2019, 2020Habók et al., 2021). Overall, results have concluded moderate use of LLS among participants. The most frequently used strategies are cognitive, metacognitive, and compensation strategies, while affective and memory strategies are the least preferred. Habók et al. (2021) have pointed out the different strategy preferences in different cultural contexts. Their results reinforced the preferred use of metacognitive strategies in both European and Asian contexts. However, there were statistically significant differences in the affective field with regard to the lower strategy use preference in the European sample. A great deal of research has investigated strategy use in connection with other aspects (Platsidou and Kantaridou, 2014;Rao, 2016;Charoento, 2017;Magyar, 2018a, 2020). One of the most often used factors was language achievement, which has been specified and covered in a multitude of ways. Some studies have focused on level of language proficiency or achievement test results (Raoofi et al., 2017;Taheri et al., 2019;An et al., 2021;Malpartida, 2021), others have relied on self-ratings (Charoento, 2017), and still others have involved language course marks (Habók and Magyar, 2018a;Sánchez, 2019;Bećirović et al., 2021). As a result, most research has pointed out that students with higher proficiency use LLS more frequently than those with less (Rao, 2016;Charoento, 2017;Raoofi et al., 2017;Sánchez, 2019). 
Charoento (2017) highlighted that successful students mainly used metacognitive strategies and less proficient students preferred to use social strategies the most. Sánchez (2019) reported that the application of social, metacognitive, and cognitive strategies was the highest among high achievers. Some research failed to find any significant differences in LLS use between learners with lower and higher English proficiency levels (Rianto, 2020). A relatively small number of studies have examined how LLS use predicts language proficiency. Some research has pointed out a positive correlation between strategy use and proficiency. Comprehensive work by Taheri et al. (2019) indicated a statistically significant correlation between LLS and second language achievement. Specifically, they confirmed a statistically significant relationship between cognitive, social, and compensation strategies and second language achievement. Platsidou and Kantaridou (2014) also found that language use is predicted by learning strategy use in a statistically significantly way and that it anticipates perceived language performance. Rao (2016) also reinforced that learners' English proficiency level determines their strategy use and highlighted that students with high proficiency use strategies more frequently than low-level learners. Habók and Magyar (2018a) stated that strategies have a statistically significant effect on proficiency through attitudes. In addition, these effects reflect general school achievement. Bećirović et al. (2021) observed that LLS can influence students' English as a foreign language (EFL) achievement. Specifically, cognitive strategies have a statistically significant positive effect on EFL achievement, while other strategies showed negative or no significant impact. An et al. (2021) also reported the positive direct effect of SRL strategies on students' English learning outcomes. Lin et al. (2021) concluded the direct impact of learning strategies on learners' performance in literal and inferential comprehension. Another research direction is the investigation of strategy use in relation to other underlying factors, like affective variables, such as motivation, attitude, self-efficacy, and self-concept. Educational research has pointed out that learners' attitude toward language learning is crucial since it can greatly impact learning results and language learning proficiency (Platsidou and Kantaridou, 2014). Studies have indicated that learners with a positive attitude toward language learning employ LLS more frequently and effectively. Platsidou and Kantaridou (2014) used confirmatory factor analysis to show that attitudes toward language learning predict the use of both direct and indirect learning strategies. Jabbari and Golkar (2014) reported a more frequent use of cognitive, metacognitive, compensation, and social strategies among students with a positive attitude toward language learning. Habók and Magyar (2018a) demonstrated the reverse effect: learners who apply LLS effectively were more likely to have improved learning experiences and positive attitudes toward foreign language learning. It can be concluded that strategy research is often related to other vital areas of language learning, among which attitude plays an important role. However, only a limited number of researchers have developed measurement tools for investigating self-regulated foreign LLS and measured it in relation to attitude. 
In addition, most studies have focused on the strategy use of tertiary samples with relatively high levels of proficiency. This study aims to fill this gap and provides an insightful investigation into the connections among strategy use, attitude, and English language achievement among lower secondary students. Based on the relevant literature (Jabbari and Golkar, 2014;Platsidou and Kantaridou, 2014;Habók and Magyar, 2018a), we hypothesized the statistically significant effect of LLS on proficiency through attitude. RESEARCH QUESTIONS The research addresses the following three research questions: 1. Which EFL strategy was the most frequently used by 11-yearold lower secondary students? 2. Were there statistically significant differences among students in their language learning strategy use on the basis of their English language achievement? 3. Which language learning strategy type has a statistically significant impact on learners' English language achievement and attitude? Participants In Hungary, students start primary school at the age of six. This lasts 4 year. Then, they continue their studies at the lower secondary level. At the age of 14, they move on to upper secondary school. The participants of the present research were 11-year-old lower secondary students in Grade 5 (N total = 1,653; N boys = 827, N girls = 780, N missing = 46) from 64 schools in Hungary. Hungarian students take EFL in compulsory courses in school, and they usually start learning a foreign language at the age of nine. However, in some schools, they can start at the age of six. Typically, they can choose between English and German, but recently a preference for English has become more common. In upper secondary school, two foreign languages are compulsory, English and German or Italian or Spanish. The second language depends on curricular choice at the school level. The English proficiency of the participating students was at beginner/elementary level (A1-A2). As for their engagement in learning, there were 17 students in the sample who spent 2 h or less per week on English. Around half of the learners (N = 884) devoted 3 h a week to this subject, and few participants dedicated four (N = 303) or five (N = 357) hours a week to the language. We also found 67 students who dealt with English six or more hours per week. In addition, we did not receive any answers to this question from 25 students. Instrument The revised and improved version of the Self-Regulated Foreign Language Learning Strategy Questionnaire (SRFLLSQ) was employed after our first measurement and validation (Habók and Magyar, 2018b). We reviewed the new findings on the theoretical background to foreign LLS research and continued revising the affective field. In addition, based on the relevant literature, we included the field of motivation in the questionnaire. We thus completed the measurement tool with motivational and metamotivational factors based on Oxford's Strategic S 2 R Model. Finally, the questionnaire covered four strategy areas: metacognitive (eight items), cognitive (six items), meta-affective (eight items), affective (eight items), metasocial (eight items), social (six items), motivational (four items), and metamotivational (four items; see Appendix). The fifth-grade students provided their responses on a five-point Likert scale, which ranged from 1 ("Never or almost never true of me") to 5 ("Always or almost always true of me"). 
The measurement tool was also complemented with a background questionnaire, which asked students about their foreign language school marks as an indicator of English language achievement (1 = fail, lowest school mark; 5 = excellent, highest school mark). Students also self-reported their attitudes toward English learning on a five-point Likert scale, which again ranged from 1 to 5.

Procedure

First, the research was approved by the IRB at the University of Szeged Doctoral School of Education, which concluded that the research design follows IRB recommendations. The participating learners' parents were asked for written informed consent, which was handled by the participating schools. Second, an invitation was sent to schools to register for the measurement. In the call, schools were given information about the purpose of the measurement. Once the schools accepted the invitation, they received further instructions on data collection and a link to log into the Online Diagnostic Assessment System (eDia), which is developed, supervised, and operated by the University of Szeged Centre for Research on Learning and Instruction (Csapó and Molnár, 2019). Students' participation in the research was voluntary. They logged into the system with an official student assessment code (developed by the Hungarian Educational Authorities), which provided complete anonymity: the researchers could not identify the respondents on this basis. The identification code was handled by the school administrators, but the students' results were not available to them. Thus, complete anonymity was guaranteed. The eDia system is familiar to students because they regularly use it for diagnostic purposes during official school hours. The students had already acquired the necessary ICT skills at primary level, further improved through remote learning. For the present questionnaire, the participants indicated their responses by clicking on radio buttons. The learners were given a school lesson in a personal classroom environment provided by the school. After logging in, the respondents filled in the questionnaire in Hungarian, their native language, because they do not yet have the foreign language skills to provide reliable answers in English. It took 20 min to complete the instrument. Teacher assistance was not required while the questionnaire was being completed, but it was available, and the students had the option to ask for assistance with any technical problems.

Data Analysis

First, we employed classical test analysis and examined reliability, means, and standard deviations for the questionnaire fields with SPSS Statistics 23.0. In the case of frequency of strategy use, we aimed to find out how strategy use was perceived by our sample. We also compared the students' strategy use vis-à-vis their English language achievement and attitude using an independent-samples t-test. To interpret effect size, we followed Wei et al.'s (2019) benchmark: under 0.005 is small, 0.01 is typical or medium, 0.02 is large, and 0.09 is very large. We used R² unsquared; thus, the benchmarks for the effect size index are 0.07, 0.10, 0.14, and 0.30, which represent small, medium, large, and very large cut-off values, respectively.
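As an illustration of the group comparison described above, the short sketch below runs an independent-samples t-test on invented strategy-use scores for more and less proficient learners and derives an unsquared effect-size index as the point-biserial correlation r = sqrt(t² / (t² + df)). The data and the choice of r as the effect-size index are assumptions for illustration, not the authors' actual data or exact procedure.

```python
# Toy illustration of the group comparison: independent-samples t-test and
# an unsquared effect-size index r = sqrt(t^2 / (t^2 + df)).
import numpy as np
from scipy import stats

more_proficient = np.array([3.8, 3.5, 4.1, 3.9, 3.6, 4.0, 3.7])  # invented means
less_proficient = np.array([3.1, 3.4, 3.0, 3.3, 2.9, 3.2, 3.5])

t, p = stats.ttest_ind(more_proficient, less_proficient)
df = len(more_proficient) + len(less_proficient) - 2
effect_r = np.sqrt(t**2 / (t**2 + df))

print(f"t = {t:.2f}, p = {p:.4f}, r = {effect_r:.2f}")
```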
We applied path analysis to map the possible relationships and effects of our variables. We studied the goodness of fit by applying various cut-off values for several fit indices, including the Tucker-Lewis index (TLI), the normed fit index (NFI), the comparative fit index (CFI), the root mean square error of approximation (RMSEA), and Chi-square values (Kline, 2015). TLI, NFI, and CFI were regarded as acceptable at a cut-off value of 0.95, and RMSEA values below 0.08 indicated an acceptable fit (Kline, 2015).

Descriptive Analysis

The questionnaire was reliable in all the fields (Table 1). As regards the whole sample, moderate strategy use was observed. The lowest strategy use was noted in the field of metasocial strategies, and the most frequent strategy use was found in the affective field. As regards the corresponding factors, the most frequent use was observed in the motivational field (see Table 1). We also identified the frequency of strategy use among the more and less proficient learners. Students were divided into two categories based on their English language achievement (Table 2). Those with excellent and good school marks were placed in the more proficient learners' category, while learners with average, fair, or unsatisfactory school marks were grouped into the less proficient learner category. Students (N = 810) who received excellent school marks met the requirements of the English language curriculum and performed at a high level. Learners (N = 500) with good marks had minor gaps. Those (N = 229) with an average school mark had major gaps in their knowledge, and those (N = 65) with a fair school mark had difficulty following the curriculum and varying levels of difficulty in all areas of language learning. Finally, students (N = 9) who received an unsatisfactory school mark are at a disadvantage which is difficult to overcome. No data were received from 40 students. Students' EFL achievement could be regarded as good, with a mean of 4.2 (SD = 0.89). The more proficient learners employed strategies with greater frequency in all of the fields, a statistically significant finding. The affective factor showed an above-medium effect size, while the other factors showed small effect sizes.

Multivariate Analyses

Finally, we investigated the effect of strategy use on English language achievement and attitudes. As Oxford's Strategic S2R Model considers strategies to be closely directed by their corresponding meta-strategies, we regarded the strategies and their meta-strategy counterparts as common factors. The correlation coefficients were statistically significant between all strategy factors (r = 0.25-0.45, p < 0.001). Our model showed acceptable fit indices (Chi-square = 35.574, df = 5, p = 0.000, CFI = 0.995, TLI = 0.977, NFI = 0.994, RMSEA = 0.061). We therefore concluded that English language achievement and attitude are statistically influenced by strategy use (Figure 1). We found that the meta-affective and affective and the metasocial and social categories directly influenced students' attitude toward English. A direct effect of attitude was observed on English language achievement. In addition, the metacognitive and cognitive categories had a direct effect on English language achievement, while English language achievement was indirectly affected by the meta-affective and affective and the metasocial and social factors. We could not confirm any significant effect of the metamotivational and motivational factors on attitude or English language achievement.
DISCUSSION We investigated the strategy use of 11-year-old lower secondary students in Grade 5 in connection with their language achievement and attitude toward the English language. Our first research question asked which LLS was the most frequently used by the sample. We found moderate strategy use with regard to a slightly modest application of the metasocial field, and the most frequent strategy use was observed in the affective field. These aspects of our findings partly correspond with most of the recent research with respect to moderate use of strategies; however, there are profound differences in the strategy preferences of the sample (Platsidou and Sipitanou, 2015;Alhaysony, 2017;Charoento, 2017;Dawadi, 2017;Habók and Magyar, 2018a,b, 2019, 2020Habók et al., 2021). Raoofi et al. (2017) also pointed out the low level of social strategy use in their research. Another important statistically significant finding is that higher proficiency learners used learning strategies with greater frequency than their less proficient peers. This applies to every strategy field in agreement with Charoento's (2017) results. Our second research question concerned differences in the use of LLS based on English language achievement. As concerns the sample, we regarded the EFL school mark as an indicator of English language achievement. The mean indicated that a considerable portion of the sample was grouped as more proficient. As a result, these students used LLS with greater frequency in all of the categories, which is a statistically significant finding. These results correspond with other research, which also reinforces this (Rao, 2016;Charoento, 2017;Raoofi et al., 2017;Sánchez, 2019). However, we also found that less proficient learners employed motivational strategies the most frequently, while their more proficient peers most often preferred the affective field, a result which is not reinforced by any previous findings. Apart from this, the strategy uses of both subsamples followed the same order, with social and metasocial strategy use being the least preferred type for both. This may be due to the fact that our sample was mainly at the beginner/ elementary level (A1-A2), so they cannot yet initiate conversations with others, even with native speakers. They also cannot understand many words and grammatical structures that are used by more proficient speakers, so social interaction is more difficult for them, even for the more advanced ones. Our results on the role of LLS in English language achievement and attitude confirmed the statistically significant effect of LLS on background variables. English language achievement was directly influenced by the metacognitive and cognitive fields and attitudes and indirectly affected by the meta-affective and affective fields, as well as the metasocial and social fields. Our model could not confirm any direct or indirect effect of the metamotivational and motivational fields on attitude or English language achievement. This may be because motivational components form distinct factors and their role differs somewhat in predicting language achievement. These results are in line with previous findings (Platsidou and Kantaridou, 2014;Habók and Magyar, 2018a), which also concluded the outstanding role of attitudes, which is an important predictor of language achievement and reinforces the role of strategy use. In summary, strategy use influences English language achievement through attitude to language learning in a statistically significant way. 
CONCLUSION The main objective of the study was to find evidence for the role of strategy use in students' achievement at the beginner/elementary level of English language learning. As a result, the strategy use preferences of the sample differed somewhat from the findings of previous research, as the affective and motivational fields were the ones the students preferred the most. This may be due to the fact that young children are more likely to use strategies that are rather emotional and related to their personality traits than strategies that require deeper understanding, specific learning techniques, and awareness, such as cognitive strategies. The use of social strategies was also quite low, probably owing to the low level of foreign language communication skills in the sample. As regards the different proficiency levels, more frequent strategy use was observed among the more proficient learners, a statistically significant finding. However, the patterns of strategy use were almost the same across the groups. The only difference was that the more proficient learners mostly preferred the affective field, while the less proficient ones mostly employed motivational strategies. This indicates that students at a higher level have more confidence to speak up and show how they feel about learning English. Learners with lower proficiency at this age often try to show that they are motivated, that is, that they are trying and want to achieve good results and present a good image of their own performance. The study also highlighted the importance of attitude; from the results, it can be concluded that, even at the beginner/elementary level, strategy use can affect language achievement and that a student's attitude is an important predictor and plays an important role as mediator between strategies and language achievement. This can have a positive impact on classroom performance and highlights the importance of teaching students about learning strategies. LIMITATIONS There are some limitations to consider in the study. First, the questionnaire was administered to fifth-grade students, who were at the beginner/elementary level of their English language learning. Thus, generalizability cannot be confirmed, and more research is needed across higher grades and higher proficiency learners. Second, we had difficulty identifying the affective domain in the first version of the questionnaire. For the fields in the present measurement tool, we have succeeded in identifying the affective and meta-affective domains of LLS. However, they still have to be optimized. Additional research is also called for with regard to the motivational components. Third, other underlying factors should be included in the investigation, such as self-efficacy, self-esteem, and self-concept. PEDAGOGICAL IMPLICATIONS The study points out that the role of learning strategies is substantial for the students in their language learning. Learning English is a complex process for Hungarian fifth graders. English pronunciation, vocabulary, and grammar are very different from those of Hungarian. For these learners, grammatical rules are often abstract phenomena, and it is difficult for them to associate meaning with the words they say and write. Furthermore, reading and listening comprehension are also influenced by many factors. The results draw attention to the paramount importance of teaching LLS, which can promote greater success among language learners. In addition, it is essential how consciously strategies are employed. 
Teachers are strongly urged to include strategy training in their courses. Strategy training can be conducted either as a component embedded in any of the school subjects or as an independent, stand-alone course. Strategy courses integrated into a school subject provide specific help for students learning that particular course material. For example, language learning strategies aid students in learning grammatical formulae or vocabulary in a foreign language, while general strategy courses help students to learn strategies that can be used in other school subjects, such as reading and writing strategies. Another implication of the study is that motivation and attitude also influence language achievement in a statistically significant way. Creating a learner-friendly and encouraging atmosphere is therefore essential. The findings from our research have provided important insights into these issues for classroom practice. DATA AVAILABILITY STATEMENT The datasets presented in this article are not readily available because they are confidential and cannot be shared with third parties. Requests to access the datasets should be directed to AH<EMAIL_ADDRESS> ETHICS STATEMENT The studies involving human participants were reviewed and approved by the IRB at the Doctoral School of Education, University of Szeged. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. AUTHOR CONTRIBUTIONS AH and AM designed the study, implemented the data collection, analyzed the data, and participated in completing the manuscript. GM supervised the research and provided support. All the authors contributed to the editing and revision of the study and approved the final version of the manuscript.
6,753.6
2022-05-13T00:00:00.000
[ "Linguistics", "Education" ]
SEECR: Secure Energy Efficient and Cooperative Routing Protocol for Underwater Wireless Sensor Networks Underwater wireless sensor networks (UWSNs) is an emerging technology for exploration of underwater resources. Security plays an important role in the UWSNs environment because the environment of UWSNs is prone to different security attacks. This research proposes SEECR: Secure Energy Efficient and Cooperative Routing protocol for UWSNs. SEECR comprised of energy efficient and strong defense mechanism for combatting attacks in underwater environment. SEECR exploits cooperative routing for enhancing the performance of network. Considering the resource constrained UWSNs environment minimum computation is employed for implementing security so that SEECR remains suitable for underwater environment. In order to evaluate the performance of SEECR, this research compares the performance of SEECR with AMCTD: Adaptive Mobility of Courier Nodes in Threshold-optimized DBR - a well-known routing protocol for UWSNs environment. The performance of SEECR and AMCTD protocols are evaluated using different performance evaluation parameters such as number of alive nodes, transmission loss, throughput, energy tax and end-to-end delay. The results suggest an improved performance of SEECR over AMCTD. SEECR shows an improvement of 9% in terms of number of alive nodes, over 50% reduction in terms of transmission loss, up to 9% increase in throughput, up to 23% reduction in energy tax, and 25% reduction in end-to-end delay. Further, we observe that attack significantly degrades the performance of AMCTD whereas due to the embedded defense mechanism in SEECR the impact of attack is negligible. I. INTRODUCTION Underwater wireless sensor networks (UWSNs) consist of sensor nodes which are deployed under the water in order to sense the properties such as temperature, pressure and quality etc. [1]. The environment of UWSNs is very dissimilar from terrestrial wireless sensor networks (WSNs) in many ways. The electromagnetic signals cannot travel for long distance in UWSNs due to the high attenuation, scattering and the absorption effect [2]. In order to address this issue acoustic waves are used in UWSNs environment [3]. The propagation speed while using the acoustic waves in UWSNs environment is 1500m/sec which is much slower as compared to the radio The associate editor coordinating the review of this manuscript and approving it for publication was Longxiang Gao . waves based communication. In acoustic communication the end-to-end delay is high along with long propagation latency. The available bandwidth is very limited in acoustic communication (less than 100kHz) [2], [4], [5]. The sensor nodes in UWSNs environment are mostly considered static but due to the underwater activities these sensor nodes can move from 1 to 3m/sec. The sensor nodes in UWSNs environment are large in size and they consume more power due to which efficient utilization of energy is a critical factor in UWSNs communication. Charging and replacing of batteries in UWSNs environment is a challenging task [2], [6], [7]. The basic architecture of UWSNs is shown in Fig. 1. The sensor nodes are deployed in UWSNs environment as shown in Fig. 1. The communication among sensor nodes under the water is through acoustic waves whereas the sink VOLUME 8, 2020 This work is licensed under a Creative Commons Attribution 4.0 License. 
For more information, see https://creativecommons.org/licenses/by/4.0/ node communicates with on-shore server through radio communication. Applications of UWSNs include pollution monitoring, temperature and pressure measurements, environmental monitoring, seismic monitoring, assisted navigation, unmanned underwater exploration, submarine navigation, underwater disaster detection and prevention, underwater exploration such as corals, minerals and rare metals etc. [8]- [12]. The challenges in UWSNs environment are deployment, fabrication, maintenance and recovery cost is very high as compared to the terrestrial WSNs. Other challenges include power harvesting and power consumption, localization, deployment of sensor nodes, time synchronization, battery lifetime, underwater data collection, replacing or repairing sensor nodes, capacity of communication channels and security etc. [8], [13]. The basic security requirements in UWSNs are authentication, confidentiality, integrity and availability [14], [15]. There are different attacks possible in UWSNs which can abrupt the normal operation of the networks. The most common attacks include black hole attack, wormhole attack and sinkhole attack. Attacks in UWSNs environment can be grouped into the following two categories. i) Attack on the sensor node in UWSNs ii) Attack on the protocol in UWSNs. The first type of attack damages the sensor node but such type of attack is least likely in the real environment because it is very difficult to physically access the sensor nodes in UWSNs environment. The second type of attack target the communication protocols used in UWSNs. Once the communication protocol is compromised then it affects the entire network [13]. There are security requirements and countermeasure mechanisms to combat these attacks in UWSNs [15]. In this research work, SEECR protocol is proposed that is secure, energy efficient and cooperative in nature. SEECR efficiently utilizes the energy of sensor nodes for maximizing their life time. Cooperation technique used in SEECR has vital role in efficient utilization of energy. Moreover, security mechanism has been incorporated in SEECR for combating security attacks. Security mechanism involves minimum computation considering the resource constrained environment of UWSNs. SEECR detects and eliminates all those active attacks which drop the packets. The rest of the research paper is organized as follows: Section II discusses the current state of the art. Section III includes motivation and contributions. Section IV discusses underwater channel. Section V includes detail of the proposed routing protocol (SEECR) whereas Section VI includes details of simulation environment and performance evaluation parameters. Results are presented and discussed in Section VII. Section VIII concludes the work and presents directions for future work. II. LITERATURE REVIEW Energy efficiency is a key consideration in wide range of applications [16]- [18]. The authors in [16] proposed a novel energy efficient key agreement mechanism for UWSNs. For reducing the communication overhead between sensor nodes UWSNs are gathered into the clusters. The key agreement mechanism proposed in this research can resist different security attacks which include replay attacks, sybil attack, spoofed attacks and node replication attacks etc. 
In the proposed mechanism geographical information and identity are added to the public as well as the private key of the sensor node in UWSNs in order to boost the capability of the mechanism to resist against different attacks. The results of the simulation revealed that the key agreement mechanism proposed in this research produced better results in terms of security as well as networks performance. Using the proposed mechanism the UWSNs can have high network connectivity and ensures that the energy consumption of the sensor node and cluster-head node is under 10J and 20J respectively. The authors in [19] proposed key distribution scheme for peer to peer communication in mobile UWSNs environment. Two different mobility models are used in this research which are meandering and nomadic mobility model. In both the mobility models, structure used is hierarchical structure and communication is managed through distribution scheme known as Blom's key distribution scheme. Simulation results of this research indicated some minor connectivity issue due to mobility but the proposed scheme resolves connectivity issues on time. The key distribution scheme proposed in this research revealed better resiliency performance when some nodes are captured by the adversary and in that case very few links become compromised. Simulation results also revealed better performance of the proposed scheme in terms of security and energy consumption. The authors in [20] proposed cryptographic mechanism for UASNs. The authors proposed encryption algorithm which is efficient in nature for protecting the integrity and confidentiality while considering the unique characteristics of UASNs environment. In this research some modifications are made in traditional AES-128 by using alternate approach instead of using S-Box. The proposed algorithm can resist against brute force as well as other attacks. It has been concluded in this research that the round key is not breakable using brute force attack. The performance of proposed cryptographic scheme is compared with AES-128, Blowfish and PRESENT. Simulation results revealed that as compared to AES-128 and other cryptographic schemes the proposed cryptographic approach is secure and energy efficient. The authors in [21] proposed a design of secure routing approach for UASNs. The researchers in this research proposed an efficient signature schemes with no need of the online trusted third party. The scheme proposed in this research can tackle forgery attacks and improves the overall security. Moreover, a trapdoor scheme is presented which is used in routing messages for achieving anonymity among communicating nodes. The trap door design in the proposed scheme evades the overhead involved in maintaining huge number of pre-shared keys. The proposed routing scheme is compared with geographical information routing protocol based on partial network coding (GPNC) and level based adaptive geo-routing (LB-AGR). The performance parameters used are energy consumption, throughput and PDR. The results obtained from the simulation in this research revealed that the proposed routing scheme progresses security and the performance of the network become moderate. The research conducted in [22] proposed distributed approach for combating routing attacks in UWSNs. The proposed approach can detect sinkhole attack and wormhole attack in UWSNs environment. Two different phases are used in the proposed approach such as silent monitoring phase and detection phase. 
According to the proposed approach of detection and mitigation each sensor node in UWSNs has to keep track of its neighbor by overhearing the messages sent and received by the neighbor. An analytical model is presented in this research in order to capture the interaction among different parameters. Simulation has been used for implementing the idea presented in this research and the obtained simulation results revealed that the suggested approach is efficient and correct. The authors in [23] conducted detail study on denial of service (DOS) attack. In mobile UWSNs the DOS attack can be categorized into man in the middle attack, flooding attack and demolishing attack. The possible attacks in UWSNs environment are sybil attack, wormhole attack and selective forwarding attack. The influence of mentioned attacks on UWSNs is analyzed. They surveyed secure localization techniques for terrestrial WSNs. Simulation results revealed that there is dissimilarity in the performance of mobile UWSNs and mobile WSNs which indicates that the security mechanism suitable for WSNs are not appropriate for UWSNs environment. The authors in [24] considered the requirements as well as security related issues of UWASNs. In this research it is proposed that same key should be utilized by both the sender and the receiver for the purpose of encryption and decryption as the size of the symmetric key is less as compared to the size of the asymmetric key. When all headers are added the entire message is entered in the algorithm known as message-authenticated code having a secret key which is shared. The message integrity code (MIC) which is an output value is attached to the entire message at its end. The purpose of the MIC is to ensure that each and every one bit of the complete message as well as the shared key is authentic. The encryption algorithm encrypts the message as well as the MIC. Receiver side calculates its MIC and compares it with the MIC received from sender side. If both the MIC are same the receiver accepts the message otherwise discards the entire message. The authors in this research revealed that minimum amount of overhead should be added to the data when applying security in UWASNs. They proposed to use cryptographic module validation program (CMVP) algorithm for the said purpose. The research conducted in [25] proposed a secure neighbor discovery wormhole resilient routing scheme for UWSNs. This research used RIPEMID-160 and direction of arrival estimation as authentication mechanism for the purpose of secure neighbor discovery. The proposed scheme can resist against wormhole attack, impersonation attack and route poisoning attack. The scheme proposed in this research uses four agencies such as security agency, routing agency, underwater gateway agency and vehicle agency. Simulation results revealed that the secure routing scheme proposed in this research produced better results as compared to basic neighbor discovery schemes in terms of route maintenance overhead, consumption of energy, PDR and detection of failure. The authors in [26] offered a secure MAC protocol for UASNs environment in order to efficiently manage the environment of UASNs regarding energy efficiency, data reliability, authenticity, confidentiality of data, and the prevention against attacker. In this research cryptography algorithm is used in MAC protocol in order to provide security services such as data authenticity, confidentiality of data and protection against replay attack. 
CCM-UW mode based on AES and ARIA algorithms are used in order to carry out this research. The proposed MAC protocol is efficient in terms of consumption of energy and transmission time. The proposed MAC protocol for UWSNs is compared with existing MAC protocols. The results revealed that the proposed MAC protocol is secure and efficient as compared to the existing solutions. The authors in [27] evaluated different schemes of digital signature for the purpose of end-to-end authentication in UWSNs environment in terms of consumption of energy. The results obtained in this research revealed that the schemes which perform better in WSNs do not necessarily perform better in UWSNs due to the exceptional characteristics of UWSNs environment. Different characteristics of digital signatures schemes are identified in order to make them suitable for UWSNs. Three different digital signature schemes are evaluated for different UWSNs scenarios in this research in terms of power consumption. These schemes are: i) zhangsafavi-naini-susilo (ZSS), ii) elliptic curve digital signature algorithm (ECDSA) iii) boneh-lynn-shacham (BLS). The signature generation time for ZSS is 229ms, ECDSA is 134ms and BLS is 302ms. The signature size of ZSS is 21 bytes, ECDSA is 40 bytes and BLS is 21 bytes. It has been revealed in this research that the use of aggregate and short signatures can effectively increase the energy efficiency in UWSNs. The authors in [28] proposed a secure neighbor discovery scheme for UASNs. This research proposed a suite of protocols which performs secure discovery of neighbor resilient against wormhole attack in UASNs environment. The proposed protocol utilizes the direction of arrival (DoA) signals approach. The proposed protocol can resist against wormhole attack with very good probability and without the requirement of traditional hard requirements. The proposed solution in this research comprises of four protocols. Protocols such as B-NDP and MA-NDP are appropriate for UANs environment with little density and applications in which end-to-end delay and connectivity are of the main concern. On the other side protocols such as SDV-NDP and DV-NDP are appropriate for those applications in which there is high density of node and extraordinary requirements for the resilience of wormhole attack. The research conducted in [29] proposed a security suite for UASNs comprised of both static and mobile sensor nodes. The proposed security suite is comprised of cryptographic primitives and secure routing protocols for achieving confidentiality and integrity while considering the constraints of UASNs environment. The authors proposed a protocol known as FLOOD. The FLOOD protocol was not secure and for protecting the confidentiality and integrity of the FLOOD protocol the authors proposed a secure flood (SeFLOOD) protocol. The experimental results revealed that the protocols suite proposed in this research is suitable for UASNs environment and it has affordable power consumption and communication overhead. The authors claim that so far their proposed protocol suite is the first practical, complete and efficient solution for providing confidentiality and integrity in the UASNs environment. The authors in [30] proposed Tic-Tac-Toe AI-MINIMAX algorithm for implementing security in UWSNs. The focus of this research is to find the best and secure path for routing in UWSNs. 
The approaches of game theory are extensively utilized in this research in order to protect the sensor nodes in UWSNs from attacks and to protect the security of data in UWSNs communication. The implementation of Minmax algorithm includes two players such as min and max. While its implementation in UWSNs max is the sender and min is the attacker. The pre-requisite of Minmax algorithm is Tic-Tac-Toe. The Tic-Tac Toe considers all the available situations and chooses the optimal and secure move in UWSNs environment without any threat. The implementation of the proposed algorithm is done using the following procedures such as i) finding best move ii) current move better condition iii) GameOver state condition iv) Making our AI smarter. The authors predict that utilizing the AI models intelligent attacks can be mitigated up to some extent in the UWSNs environment. The authors in [31] proposed a trust management model called a trust cloud model (TCM) for UWSNs. The purpose of the trust model is quantifying trust relationship among sensor nodes in UWSNs environment. After evaluating the quantified results the sensor nodes can determine whether the other node is trust worthy or not. Therefore, in the UWSNs environment only trust worthy sensor nodes will be selected for the transmission of data. The performance of the proposed TCM has been evaluated using the following three aspects: i) performance of detection of the malicious node: TCM performed well as compared to LCT and CBTM because both the LCT and CBTM did not take trust timelines into account. ii) performance of the calculation of trust value: It has been observed that in TCM the behavior of communication of normal nodes is good and their trust values rise with the passage of simulation time whereas in the presence of malicious nodes the trust values decreases due to the loss of packets by malicious nodes. iii) performance of the transmission of data: It has been observed that the rate of communication which is successful under the TCM is more as compared to the other two trust models. III. MOTIVATION AND CONTRIBUTIONS Research has been done on different issues in UWSNs such as the researchers in [32]- [38] conducted study on the recent applications, issues and challenges in UWSNs environment. Majority of the research done in UWSNs environment have energy efficiency as primary concern because the sensor nodes in UWSNs environment are operated on built-in battery having limited life time. The research conducted in [39]- [47] proposed different energy efficient routing protocol for UWSNs environment. Security is equally important to energy efficiency in the UWSNs environment. Compromising security in UWSNs cannot be tolerated in most of the cases. Therefore, this research work proposed a secure, energy efficient and cooperative routing (SEECR) protocol for UWSNs environment. The performance of SEECR is compared with AMCTD [48] a well-known routing protocol for UWSNs environment. This research implemented attack in both SEECR and AMCTD protocol in order to reveal the consequences of security attack in both the routing protocols. This research will help the research community to realize the impact of a security attack in UWSNs environment and must consider security mechanism while designing routing protocols for UWSNs environment. Secure solutions should have minimum computations considering the resource constrained environment of UWSNs. IV. 
UNDERWATER WIRELESS CHANNEL This section covers the modeling of attenuation, noise, signal-to-noise ratio (SNR) and path loss in the underwater wireless channel. A. ATTENUATION IN UNDERWATER WIRELESS CHANNEL The propagation speed of sound in water (c) is approximately 1500 m/s. The attenuation experienced over a distance l by a narrow-band signal with carrier frequency f is calculated in Eq. (1) [49]: A(l, f) = A_0 × l^k × a(f)^l (1) where A_0 is a normalizing constant, k is the spreading factor, with value k = 1.5, and a(f) is the absorption coefficient, which is modeled using Thorp's formula (Eq. (2)) [50], where a(f) is expressed in dB/km and f in kHz. The total attenuation A(l, f) can then be written in dB as in Eq. (3): 10 log(A(l, f)) = k × 10 log(l) + l × 10 log(a(f)) (3) where k × 10 log(l) denotes the spreading loss and l × 10 log(a(f)) denotes the absorption loss. B. NOISE IN UNDERWATER WIRELESS CHANNEL The overall power spectral density of the ambient noise is calculated in Eq. (4) [51] as the sum N(f) = N_t(f) + N_s(f) + N_w(f) + N_th(f), where N_t, N_s, N_w, and N_th are the noise components due to turbulence, shipping, wind, and thermal activity, respectively. C. SNR IN UNDERWATER WIRELESS CHANNEL The signal-to-noise ratio (SNR) of the underwater wireless channel is computed in Eq. (5) [51]. D. PATH LOSS IN UNDERWATER WIRELESS CHANNEL Signals in the underwater wireless channel experience a frequency- and link-length-dependent path loss that is considerably more complex than in radio channels; it is modeled in Eq. (6) [52]. 
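To make the attenuation model concrete, here is a minimal Python sketch. The coefficients inside thorp_absorption are the commonly used form of Thorp's formula and are an assumption on our part, since Eq. (2) is not reproduced in the text above; the total attenuation follows Eq. (3) with k = 1.5, and the use of metres in the spreading term is likewise an assumed reference-distance convention.

```python
import math

def thorp_absorption(f_khz: float) -> float:
    """Absorption coefficient a(f) in dB/km for f in kHz (commonly used Thorp form; assumed)."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1.0 + f2)
            + 44.0 * f2 / (4100.0 + f2)
            + 2.75e-4 * f2
            + 0.003)

def attenuation_db(l_m: float, f_khz: float, k: float = 1.5) -> float:
    """Total attenuation 10*log A(l, f) in dB, per Eq. (3): spreading loss k*10*log(l)
    plus absorption loss l*10*log(a(f)). The absorption term uses l in km so that the
    dB/km coefficient applies directly; the spreading term uses l in metres (an
    assumed reference-distance convention)."""
    spreading_loss = k * 10.0 * math.log10(l_m)
    absorption_loss = (l_m / 1000.0) * thorp_absorption(f_khz)
    return spreading_loss + absorption_loss

# Example: a 1 km link at a 20 kHz carrier.
print(round(thorp_absorption(20.0), 2), "dB/km")   # absorption coefficient
print(round(attenuation_db(1000.0, 20.0), 2), "dB")  # total attenuation
```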
V. SEECR: SECURE ENERGY EFFICIENT AND COOPERATIVE ROUTING PROTOCOL FOR UNDERWATER WIRELESS SENSOR NETWORKS The proposed SEECR protocol performs multi-hop networking in the UWSNs environment by exploiting cooperation. In the proposed scheme, data packets generated at a source node are forwarded hop by hop to the destination (sink) node. A relay node is deployed at the junction of two consecutive hops; it accepts the arriving packets, amplifies them, and retransmits them towards the destination. The proposed scheme detects common active routing attacks which drop packets and efficiently eliminates attacker nodes from the network. A. NETWORK MODEL Fig. 2 shows a network model of UWSNs, where S is the source node, R1 and Rb are relay nodes, D is the destination/sink node, and A1 and A2 are attacker nodes deployed in the UWSNs. Rb is the best relay node selected among the available relay nodes R1 and Rb. The dark line in Fig. 2 is the direct communication path, and the dotted lines show the cooperative routes, which are used when the direct path is either unavailable or infeasible. Each candidate relay node is checked for being an attacker node; if it is an attacker node, it is not selected for the transmission of data. B. CONFIGURATION AND INITIALIZATION In this phase, configuration and initialization are carried out. To initialize network operation, each sensor node broadcasts its depth as well as its residual energy to its neighbors using hello packets, so that every sensor node learns about its neighboring sensor nodes. The sink node sends a hello packet to all sensor nodes. Each sensor node then computes its weight using Eq. (7), where W_i is the weight of node i, T_l reflects the path loss, R_i represents the residual energy of node i, D_w represents the depth of the water and D_i the depth of node i. C. MOVEMENT SCHEME OF COURIER NODES To eliminate the flooding process, the depth threshold of the sensor nodes is initially set to 60 m. When the number of dead nodes increases by 20%, the depth threshold is lowered to 40 m in order to increase the number of depth-based threshold neighbors. When the number of dead nodes in the UWSNs environment reaches 75%, the depths of the courier nodes are adjusted to improve the network lifetime, as also done in AMCTD, where N_d is the number of dead nodes and C_1d, C_2d, C_3d, C_4d are the depths of courier nodes 1, 2, 3 and 4, respectively. D. ELIMINATING ATTACKER NODES To detect and eliminate attacker nodes, each sensor node keeps track of the packets sent and received by its neighboring sensor nodes. The packets are stored in Q_j and Q_k: the incoming packets P_in of a sensor node are stored in Q_j (Q_j = P_in) and its outgoing packets P_out are stored in Q_k (Q_k = P_out), where S_k denotes the sink and A_i the attack indicator. After the packets are stored, the two values Q_j and Q_k are compared. If the two values are not equal and the sensor node is not a sink node, there is a chance that the sensor node is an attacker node, and the value of the attack indicator A_i is incremented by 1. If the value of A_i reaches x for a sensor node, that node is flagged as an attacker node and is eliminated so that it cannot contribute to the routing process. The value of x is set to 3 here, but it can be adjusted according to the environment and should be chosen carefully: if x is very high, the attacker node remains in the network for a longer time; if x is very low, a genuine sensor node may be removed as an attacker node. The attacker node detection and elimination process is performed by all sensor nodes to ensure that an attacker node cannot participate in any activity of the network. Since the sensor nodes are positioned underwater and physical access to them is difficult and time-consuming, a detected attacker node may still physically exist within the UWSNs, but it will not be able to participate in any operation of the network. 
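As a concrete illustration of the bookkeeping described in Section V-D, here is a minimal Python sketch: each node counts a neighbour's overheard incoming and outgoing packets, increments the attack indicator A_i on a mismatch (unless the neighbour is a sink), and removes the neighbour from routing once the indicator reaches the threshold x = 3. The class and method names are illustrative and not taken from the paper.

```python
class NeighborWatch:
    """Per-neighbour packet bookkeeping for attacker detection (Section V-D sketch)."""

    def __init__(self, threshold_x: int = 3):
        self.threshold_x = threshold_x   # x: mismatches tolerated before elimination
        self.attack_indicator = {}       # A_i per neighbour id
        self.blacklist = set()           # neighbours eliminated from routing

    def observe_round(self, neighbor_id, packets_in, packets_out, is_sink=False):
        """Compare overheard incoming (Q_j) and outgoing (Q_k) packet counts for one round.

        Returns True while the neighbour is still eligible to act as a relay."""
        if neighbor_id in self.blacklist:
            return False
        if packets_in != packets_out and not is_sink:
            self.attack_indicator[neighbor_id] = self.attack_indicator.get(neighbor_id, 0) + 1
            if self.attack_indicator[neighbor_id] >= self.threshold_x:
                self.blacklist.add(neighbor_id)   # node no longer eligible as a relay
        return neighbor_id not in self.blacklist

# Example: neighbour 7 silently drops packets for three rounds and is eliminated.
watch = NeighborWatch()
for _ in range(3):
    watch.observe_round(7, packets_in=10, packets_out=4)
print(7 in watch.blacklist)   # True
```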
The cooperative transmission model is illustrated in Fig. 3, where S represents the source, Rb the best relay node, and D the destination node. The signal conveyed directly from the source to the destination node is Y_sd = H_s × g_sd + n_sd (8) where Y_sd, as shown in Fig. 3, is the directly received signal, H_s reflects the channel between the source and the destination (sink) node, g_sd is the information broadcast from the source to the destination node, and n_sd is the ambient noise added on the channel H_s. The signal transmitted from the source node to the relay node is Y_sr = H_s × g_sr + n_sr (9) where g_sr is the information conveyed from the source towards the relay node and n_sr is the ambient noise added on the channel H_s. During network operation, when the destination node does not receive data packets from the source node, the relay node starts transmitting and sends the data packets as Y_rd = Y_sr × g_rd + n_rd (10) where Y_rd represents the data transmitted from the relay node towards the destination node, g_rd represents the information forwarded by the relay node towards the destination node, and n_rd reflects the noise added on the channel to the information Y_sr. F. RELAY STRATEGY AND ROUTING PHASE A source sensor node S has n surrounding sensor nodes in its vicinity and uses the channel conditions to determine which neighbors are most suitable for transferring its data towards the sink node. The source node selects the relay node from its neighborhood by comparing their weights; the sensor node with the maximum value of W_i is selected for the transmission of data. If the residual energy of the source node, R_s, is greater than or equal to the residual energy of the relay node, R_r, direct transmission is used; otherwise, the transmission goes through the relay node. The amplify-and-forward technique is used at the relay node, which applies an amplification factor to the signal received from the source before forwarding it towards the destination node. G. COMBINING STRATEGY The destination sensor node D uses signal-to-noise-ratio combining (SNRC) to combine the signals arriving from the source S and the relay R. In SNRC, the signals combined at the receiver are weighted by the SNR seen at each array element. SNRC performs better than equal ratio combining (ERC) because it accounts for small-scale fluctuations and gives a lower weight to low-SNR signals when combining. The combined signal is calculated as Y_d = X_1 × Y_sd + X_2 × Y_rd (11) where Y_d is the output signal combined at the destination node D, X_1 is the weight of the direct path and X_2 is the weight of the relay path. H. WEIGHT UPDATING PHASE When the number of dead nodes N_d is less than or equal to 20%, the weight of the sensor nodes remains the same as the weight calculated in the initialization phase in Eq. (7). When 20% < N_d < 75%, the weight is recalculated using the first update rule (the statements following the if condition), and when N_d ≥ 75%, the weight is recalculated using the second update rule (the statements following the else condition). The complete working of SEECR is reflected in the flowchart shown in Fig. 4. VI. SIMULATION ENVIRONMENT AND PERFORMANCE EVALUATION PARAMETERS The SEECR and AMCTD protocols are evaluated in two different scenarios, with and without attack. The attack scenario contains eight attacker nodes, whereas no attacker node is deployed in the without-attack scenario. The number of sensor nodes deployed is 225, and the number of sink nodes deployed at the surface of the water is set to 10. The total number of rounds in the simulation is set to 9000, as shown in Table 1. A. PERFORMANCE EVALUATION OF SEECR To evaluate the performance of the proposed SEECR, it is compared with AMCTD using different performance evaluation parameters. The performance of SEECR and AMCTD is evaluated in the two scenarios, with and without attack. B. PERFORMANCE EVALUATION PARAMETERS The following parameters are used to evaluate the performance of the SEECR and AMCTD protocols in the with- and without-attack scenarios. 
1) NUMBER OF ALIVE NODES The number of alive nodes is calculated by subtracting the number of dead nodes from the total number of nodes during the entire simulation. 2) TRANSMISSION LOSS It is the transmission loss between the source and sink node during a single round. The transmission loss is calculated in decibels (dB). 3) THROUGHPUT It is the entire number of packets which reach the sink node. 4) ENERGY TAX It is the energy consumed while forwarding the data from the source node towards the sink node. It is calculated in joules. 5) END-TO-END DELAY It is the time between packet generation and packet reach the sink. It is calculated in milliseconds. VII. RESULTS AND DISCUSSION The results obtained in this research from different performance evaluation parameters are discussed in this section of the research paper. The results obtained are presented in the form of both figures and tables. Fig. 5 shows the number of alive sensor nodes during the entire simulation. Initially the number of sensor nodes deployed is set to 225 which includes all the source and relay nodes. At the end of simulation the number of alive nodes in SEECR with and without attack is 111 and 112 respectively, the number of alive nodes in AMCTD with and without attack is 68 and 82 respectively. The results obtained show that attack significantly affects the energy consumption due to which the numbers of alive nodes is significantly less in AMCTD with attack scenario. Fig. 5 further revealed that stability is better in SEECR as compared to AMCTD in both the scenarios such as with and without attack. SEECR improves the overall stability of the network whereas attack causes more instability in AMCTD protocol with attack scenario. Moreover, due to the embedded security defense mechanism in SEECR there is almost negligible impact of attack on SEECR as we can see in the scenario of SEECR with attack. Table 2 shows the number of alive nodes after every 1000 rounds in different scenarios. It is clear from the results obtained in Table 2 that the performance of SEECR is better than AMCTD in terms of number of alive nodes due to the energy efficient approach adopted in SEECR protocol. The percentage of alive nodes is calculated for the entire simulation. Moreover, it can be seen in Table 2 that the overall percentage of alive nodes in SEECR with and without attack is 60.9% and 63.2% respectively, which shows very little impact of attack and the little impact is due to the computation done for the detection as well as elimination of attacker nodes from the network. On the other side it is clear from Table 2 that the overall percentage of alive nodes in AMCTD with and without attack is 51.7% and 56.6% respectively. There is a significant degradation in the performance of AMCTD routing protocol in the presence of attack. The attack consistently degrades the performance of AMCTD protocol due to no defense mechanism. Moreover, SEECR protocol beats AMCTD protocol in both the scenarios such as with and without attack. Fig. 6 shows the transmission loss of SEECR and AMCTD protocols in different scenarios such as with and without attack. The results obtained revealed that the transmission loss of SEECR with and without attack is significantly less as compared to the transmission loss of AMCTD with and without attack. The energy efficient approach utilized VOLUME 8, 2020 by SEECR protocol significantly reduced the transmission loss in SEECR protocol. Moreover, SEECR protocol beats AMCTD routing protocol in terms of transmission loss. 
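For reference, the five evaluation parameters defined at the start of this section could be computed from simulation logs roughly as in the Python sketch below. The per-packet record format and the treatment of energy tax as summed per-node consumption are assumptions made for illustration, not the authors' simulation code; transmission loss would instead come from the channel attenuation model of Eq. (3) sketched earlier.

```python
from statistics import mean

def alive_nodes(total_nodes: int, dead_nodes: int) -> int:
    # Alive nodes = total deployed nodes minus dead nodes (Section VI-B, item 1).
    return total_nodes - dead_nodes

def throughput(delivered: list) -> int:
    # Total number of packets that reached a sink node (item 3).
    return len(delivered)

def energy_tax(per_node_energy_joules: list) -> float:
    # Energy consumed forwarding data towards the sink, in joules (item 4);
    # summing per-node consumption is an assumed reading of the definition.
    return sum(per_node_energy_joules)

def end_to_end_delay_ms(delivered: list) -> float:
    # Mean time in milliseconds between packet generation and arrival at the sink (item 5).
    return mean(p["arrival_ms"] - p["generated_ms"] for p in delivered)

# Hypothetical records for one simulation round.
delivered = [{"generated_ms": 0.0, "arrival_ms": 812.0},
             {"generated_ms": 5.0, "arrival_ms": 941.0}]
print(alive_nodes(225, 113),
      throughput(delivered),
      round(energy_tax([0.02, 0.05, 0.04]), 3),
      round(end_to_end_delay_ms(delivered), 1))
```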
Table 3 shows the transmission loss of SEECR and AMCTD protocols after every 1000 rounds in different scenarios. The results mentioned in Table 3 indicates that the transmission loss in SEECR protocol with and without attack is significantly less as compared to the transmission loss of AMCTD protocol with and without attack. Moreover, the overall percentage of transmission loss in SEECR protocol with and without attack is 43.7% and 43.1% respectively, whereas the overall percentage of transmission loss in AMCTD with and without attack is 100% and 98.8% respectively. It is clear from Table 3 that as compared to AMCTD there is more than 50% reduction of transmission loss in SEECR protocol in both the scenarios such as with and without attack. Fig. 7 shows the throughput of SEECR and AMCTD protocols in different scenarios such as with and without attack. The results obtained show that the throughput of SEECR protocol is more as compared to the throughput of AMCTD protocol in both the cases such as with and without attack. The better throughput in SEECR protocol than AMCTD protocol is due to the energy efficient and secure mechanism utilized in K. Saeed et al.: SEECR Protocol for UWSNs SEECR protocol. It is obvious from Fig. 7 that SEECR beats AMCTD in terms of throughput. Table 4 shows throughput of SEECR and AMCTD protocols after every 1000 rounds in different scenarios. The results mentioned in Table 4 revealed that the throughput of SEECR protocol is much better as compared to the throughput of AMCTD protocol. Moreover, it has been revealed in the results obtained that attack significantly degrades the performance of AMCTD protocol whereas the effect of attack is negligible on SEECR protocol due to the strong defense mechanism embedded in SEECR protocol. The overall throughput percentage of SEECR protocol with and without attack is 42.4% and 42.6% respectively, whereas the throughput percentage of AMCTD protocol with and without attack is 32.9% and 37.6% respectively, which shows significant degradation in the performance in AMCTD protocol in the presence of attack. Fig. 8 shows the energy tax of SEECR and AMCTD protocols in different scenarios such as with and without attack. The results obtained show that the energy tax of SEECR protocol is much less as compared to the energy tax of AMCTD protocol in both the scenarios such as with and without attack. Moreover, it is obvious from Fig. 8 that attack significantly degrades the performance of AMCTD protocol in terms of energy tax whereas the effect of energy tax on SEECR protocol is negligible. Table 5 shows energy tax of SEECR and AMCTD protocols after every 1000 rounds in different scenarios. The results obtained in Table 5 revealed that energy tax of SEECR protocol is much less as compared to the energy tax of AMCTD protocol in both the cases such as with and without attack. Moreover, it is also clear from Table 5 that in attack scenario the energy tax of AMCTD protocol increases. The overall percentage of energy tax of SEECR protocol with and without attack is 77.3% and 76.1% respectively, whereas the percentage of energy tax of AMCTD protocol with and without attack is 100% and 88.3% respectively. It can be concluded from Table 5 that the energy tax in SEECR protocol is up to 23% less as compared to AMCTD protocol. Fig. 9 shows end-to-end delay of SEECR and AMCTD protocol in different scenarios such as with and without attack. Results reflected in Fig. 
9 shows that there is major difference in the end-to-end delay of SEECR and AMCTD protocols. The end-to-end delay is less in SEECR protocol as compared to the end-to-end delay of AMCTD protocol. Table 6 shows end-to-end delay of SEECR and AMCTD protocols after every 1000 rounds in different scenarios. The result obtained in Table 6 indicates that the overall percentage of end-to-end delay in SEECR protocol with and without attack is 70.2%, whereas the percentage of end-toend delay in AMCTD with and without attack is 95.6% and 100% respectively. It can be concluded from Table 6 that the end-to-end delay is 25% less in SEECR protocol as compared to that in AMCTD protocol. VIII. CONCLUSION AND FUTURE WORKS Routing attacks in UWSNs environment is an important factor which needs to be addressed. This research work proposed SEECR protocol for UWSNs environment. The proposed routing protocol efficiently utilizes the energy consumption and has built-in defense mechanism against common active attacks in UWSNs environment. The performance of SEECR protocol has been compared with AMCTD protocol in terms of different performance evaluation parameters. The results obtained revealed that SEECR protocol beats AMCTD protocol in terms of all performance evaluation parameters. Moreover, due to the built-in defense mechanism in SEECR protocol there is negligible impact of attack on SEECR protocol in UWSNs environment. This research work is focused on the importance of energy efficient security mechanism in routing protocol for UWSNs environment. The results produced in this research revealed that despite of computation for attacker node detection and elimination the proposed solution is still suitable for UWSNs environment. The proposed solution in this research will encourage the research community to design secure solutions for UWSNs as well as other environments. Some possible research directions in this area can be designing other secure routing solutions and using artificial intelligence models for mitigating attacks in UWSNs environment. In the future we have a plan to introduce other energy efficient secure solutions for different environments. IX. CONFLICTS OF INTEREST The authors declare that there is no conflict of interest regarding the publication of this paper. KHALID SAEED received the M.S. degree (Hons.) in computer engineering from the University of Engineering and Technology at Taxila, Pakistan, in 2011. He is currently pursuing the Ph.D. degree in computer science with the University of Engineering and Technology at Peshawar, Peshawar, Pakistan. His research interests include security in underwater wireless sensor networks, wireless sensor networks, cloud computing, mobile ad-hoc networks, delay tolerant networks, and information and communication technologies. Till date he has more than 20 publications to his credit in IEEE conferences and reputed journals. WAJEEHA KHALIL received the Ph.D. degree from the University of Vienna. She is currently an Assistant Professor with the Department of Computer Sciences and Information Technology, University of Engineering and Technology at Peshawar, Peshawar. Her research interests include distributed computing, human-computer interaction, and parallel computing. SHEERAZ AHMED received the Ph.D. degree in electrical engineering from COMSATS Islamabad, Pakistan, in the domain of underwater and body area sensor networks. He is currently a Professor with Iqra National University, Peshawar, Pakistan. 
His research interests include UWSNs, WBANs, smart grids, VANETs, FANETs, and so on. Till date he has more than 160 publications to his credit in IF journals and IEEE conferences. He has an experience of more than 20 years in teaching, research, and administrative positions. IFTIKHAR AHMAD received the master's degree in computer science from the University of Freiburg, Germany, and the Ph.D. degree in theoretical computer science from Saarland University, Saarbrücken, Germany. He is currently an Assistant Professor with the Department of Computer Science and Information Technology, University of Engineering and Technology at Peshawar, Pakistan. He is also leading the National Center of Big Data and Cloud Computing, Machine Learning Group, University of Engineering and Technology at Peshawar. His research interests include theoretical computer science, machine learning, graph theory, and blockchain and cryptocurrencies. MUHAMMAD NAEEM KHAN KHATTAK is currently working as the Chairman at the Department of Mechanical Engineering, University of Engineering and Technology at Peshawar, Peshawar. He has his specialization in technology and quality management. In particular, his areas of interest include total quality management and technology and innovation management. He has produced several publications internationally in the fields of his interest. VOLUME 8, 2020
9,601.2
2020-06-08T00:00:00.000
[ "Environmental Science", "Engineering", "Computer Science" ]
Using machine learning to examine preservice teachers’ perceptions of their digital competence . This research paper’s aim was to investigate both the pre-service teachers’ perceptions of their digital competence and if gender, type of the bachelor’s degree and age made the opinions different or not. To do so, the clustering analysis method was employed to analyze the areas and items of the digital competence questionnaires used as a data collection technique. The study group included 291 participants who are now teacher-trainees in Draa-Tafilalte Regional Teacher-Training Center in Errachidia and Ouarzazate in the south-east of Morocco. In so doing, a number of results attained and basically confirmed that the level of the teacher-trainees’ digital competence was low or weak and that the three parameters (the type of the bachelor’s degree, gender and age) played a significant role in shaping different opinions on their digital competence. This paper ended with some major implications that were drawn from the findings of this study in our bid to highlight some needs of the pre-service teachers that should be met in their training courses and related activities to develop their digital competence in teaching and learning process. Introduction Nowadays, no one can deny the fact that technology has dominated our lives from the very outset of this third millennium or so and becomes an indispensable part of our daily life.So, no one can keep up with the developments of this century without being at least an average computer-literate.The same can be said in the field of teaching, teachers are all invited to learn how to employ the latest technologies used in teaching and learning operation since it hugely contributes to improving teaching practices and styles and helps them to always keep abreast of the developments that this field witnesses ceaselessly (Williams, et al. 2004). The integration of technology into education is a sole guarantee that would make teaching and learning process successful (Sang et al. 2010) because it facilitates teaching activities for teachers and students and lessen the burden for them.We should mention at this point that there is a correlation between the use of technological devices and motivation.In other words, motivation has a direct relationship with technologybased instruction for students and teachers and their academic achievements and the improvement of their teaching practices and styles.Some studies confirmed that relationship by having reached the fact that the teachers who use technology are always ready to spend more time in class thanks to the enjoyment and lightheartedness of the teaching activities they devised with technological devices and computer programs (Vongkulluksn et al. 2018).We deduced from the literature that digital competence is unavoidable if we want to hone our skills and practices, especially in learning and teaching process. For the different dimensions of digital competence, it has been very clear from the literature that one or two digital competence-related areas have extensively been assessed (A.Çebi, İ. 
Reisoğlu, 2020).Apart from very few research studies that have often done in some countries across the globe, all other dimensions of the digital competence have so far been ignored by many studies conducted in this domain (Ibid).Consequently, the field needs studies which completely and thoroughly investigate the teacher-trainees' level of digital competency and related deficiencies to specify their perceptions and needs.These studies should also highlight the areas in which their digital competences vary according to variables like gender, age and type of the bachelor's degree, etc. In the same vein, the digital competence-related deficiencies or shortcomings should be identified with the purpose of making some illuminating proposals or suggestions from the nature of these spotted deficiencies so as to enrich training courses teacher-trainees take in teacher-training centers as a first step towards helping them improve their teaching practices.In this way, the digital competence-related deficiencies which could be identified would appropriately be addressed and exploited to draw up effective strategies to develop their digital competence.The effectiveness of the suggested solutions entails that studies, dealing with this subject, must be done within each country because the policies adopted in each country with respect to teacher training programmes are quite different.So, the necessary steps and initiatives should be taken by any country to meet its needs in this field. To our best knowledge, the researches conducted on such a topic have been rare in Morocco and as a result research about the digital competence of pre-service teachers has been terribly needed and how the intervening factors as age, gender, and the like come into play.As long as digital competence is an integral part of the teaching and learning operation in this century, it should be integrated into pre-service teacher training.In other words, if we want the educational system to keep up with the ever-changing prerequisites of teaching and learning, the current pre-service teacher training programs should enhance the teacher-trainees' digital competence.From our experience as Computer Studies teacher-trainers and from some studies carried out on this subject, we feel that teacher-trainees' level of digital competence should be further explored and elaborated to see how digitally competent they are and to gauge the effect of some intervening factors on their digital competence. In accordance with this fact, the present research paper, in our contribution to this field, aimed to explore the relevance of digital competence in teaching and how the teacher-trainees perceived its importance in their future teaching jobs and to what extent they mastered it.To evaluate this, we focused on how the teacher-trainees perceived their digital competences and the variables that could intervene and make their opinions about it vary. 
Research sample

The study group consisted of teacher-trainees from the two institutions of the Draa-Tafilalet Regional Centre for Education and Training, one based in Errachidia and the other in Ouarzazate. It should be noted that the sample may not be representative of Morocco as a whole, since its size was limited and the participants all came from this single regional centre. Nevertheless, the study complements the few earlier Moroccan papers on this subject, broadening the scope of research and (re)examining variables that may be important for developing the digital competence of Moroccan pre-service and in-service teachers. The data were gathered through digital questionnaires administered via social media to 291 teacher-trainees preparing to teach in the two cycles of our educational system, primary and secondary. Fifty questionnaires were excluded because they were not filled out properly, leaving 241 valid responses: 118 female participants (49%) and 123 male participants (51%). The average age of the teacher-trainees was 31.63. Their academic backgrounds fell broadly into two branches, Letters/Arts and Science; all were bachelor's degree holders (Sociology, French, Arabic, Physics, Mathematics, Islamic Education, etc.) and had graduated from different Moroccan universities.

Aim of the study

The purpose of this study was to investigate teacher-trainees' perceptions of their level of digital competence and how three variables (gender, type of bachelor's degree and age) influenced their opinions and competence. To analyse the questionnaire responses, we used cluster analysis, which allowed us to examine and discuss the items in line with the stated purpose of the paper. The study addresses the following research questions:

1. How do the pre-service teachers perceive their digital competence?
2. Do gender, type of bachelor's degree and age play a role in how the pre-service teachers' opinions about their digital competence vary?

Clustering analysis

As Ferreira and Hitchcock (2009) explain, cluster analysis is used to discover previously unknown relationships between objects, to reduce the number of dimensions and to detect outliers. Put differently, it is an unsupervised learning technique for finding commonalities between data elements that are otherwise unlabelled and uncategorised; the goal is to identify distinct groups, or "clusters", within a data set. Using a machine learning algorithm, items placed in the same group generally share similar characteristics. A good clustering method produces high-quality clusters with high intra-class similarity and low inter-class similarity; the quality of a clustering result depends both on the similarity measure used and its implementation, and on the method's ability to uncover some or all of the hidden patterns (Mark Ryan M. et al., 2015). In short, clustering means placing objects into their natural groups according to their similarities and common points. Three clustering techniques are popular: hierarchical clustering, non-hierarchical clustering and k-means clustering. The objective of cluster analysis is to identify patterns in the data and express their similarities and differences through their correlations (ibid.). In the present study we used the method proposed by MacQueen in 1967, which divides a universe of N-dimensional observations into k clusters; in the k-means algorithm the clusters are initialised from random points and refined iteratively. We chose k-means because it is the most common method of non-hierarchical clustering analysis. Each observation is assigned to the closest cluster mean, and that mean is then recalculated to account for the newly assigned observation, as MacQueen (1967) described.

Data collection tool

The data collection instrument was a digital competence questionnaire developed by the researchers on the basis of the DigComp framework. It covered five dimensions of digital competence: "Information and data literacy", "Communication and collaboration", "Digital content creation", "Safety", and "Problem-solving". The questionnaire was drafted from the DigComp guide prepared by Carretero et al. (2017) and then went through a revision and testing process to ensure it suited our research questions. Experts in digital competence were asked to judge whether its scope, wording and design were acceptable, and some items were revised on the basis of their feedback. To make sure the language was clear and straightforward, we also asked a French-language expert to check that the form was linguistically correct and understandable. The questionnaire used a 6-point Likert-type scale ranging from 1 ("I need more knowledge about the subject") to 6 ("I have strong knowledge about the subject").

Findings

To reiterate, the study's purpose was to elicit teacher-trainees' opinions of their digital competence and to determine whether these opinions differed according to three variables: gender, type of bachelor's degree and age.
By and large, the questionnaire data showed that the teacher-trainees' average response to the items of the "information and data literacy" and "communication and collaboration" categories was 3.2 or above. The opposite held for the items of "digital content creation" and "problem-solving", which had a relatively lower response average. Within "problem-solving", the lowest average was recorded for the item "I solve the technical problems I encounter when using digital media and devices" (M = 2.86; SD = 1.13). The means of the "Safety" items suggested that the teacher-trainees generally paid attention to their own and others' privacy and personal data, and the mean score for the item on "awareness of the effects of digital technologies on health and environment" was high. By contrast, two items of this area scored relatively low: "I know how to deal with online threats" (M = 3.17; SD = 1.12) and "I take different measures to protect my digital device and content" (M = 3.41; SD = 1.09).

From this overview of the item averages, the teacher-trainees' digital competence appeared to be below average. A more detailed analysis of the responses was needed, however, to obtain a full picture of the teacher-trainees' digital competence and of the three variables selected for the purposes of this paper. The following section is therefore divided into three subsections: descriptive statistics, k-means clustering, and the demographic features and academic backgrounds of the teacher-trainees.

Descriptive statistics

Table 1 summarises the questionnaire data on the teacher-trainees' digital competence: it reports the mean scores, standard deviations and correlations for the variables IDL, CC, DCC, SAF and PS. The descriptive statistics show that the lowest mean score was obtained for PS (M = 2.864; SD = 1.134) and the highest for CC (M = 3.393; SD = 1.148). The scores of the other variables were slightly higher than the mean: IDL (M = 3.228; SD = 1.175), DCC (M = 2.962; SD = 1.152) and SAF (M = 3.086; SD = 1.141).

K-means clustering

To illustrate the clusters, a bar graph (Fig. 1) was plotted to display the means of the cluster centres formed by the k-means analysis. As Fig. 1 shows, the k-means cluster analysis split the respondents into three clusters according to their responses to the digital competence questionnaire items. For each variable we computed the mean within each cluster, and we conducted a one-way ANOVA to confirm that the differences between the groups were significant. A Bonferroni post hoc analysis revealed further significant differences between the clusters; these differences may reflect the factors that influenced how individual members of the study group were assigned to clusters.
Specifically, the three clusters varied in size from 59 members (Cluster 3) to 93 (Cluster 1), as shown in Table 2. The areas were labelled according to the teacher-trainees' strong and weak items, as described in the Data collection tool section.

Cluster 3 (n = 59) grouped the teacher-trainees with the lowest scores in all areas (IDL, CC, DCC, SAF and PS), whereas in Cluster 1 the scores were neither strong nor weak. Even so, for Cluster 1, the largest group (n = 93), the means were all below zero, confirming that these teacher-trainees' performance across the digital competence areas was rather weak. Members of both clusters needed further training to become fully qualified in digital competence. Within Cluster 1, the highest score was observed in the communication and collaboration (CC) area (-0.064) and the lowest in problem-solving (PS) (-0.297). From the means of these two clusters, it can be concluded that the teacher-trainees were in dire need of specialised training in Information and Communication Technologies (ICT) in order to perform well in the teaching and learning process.

For Cluster 2 (n = 89), by contrast, the means of all areas were positive (IDL: 0.851, CC: 0.922, DCC: 0.843, SAF: 1.004 and PS: 0.984). Simply put, the teacher-trainees in this cluster had their highest scores in safety (SAF: 1.004) and problem-solving (PS: 0.984), while the lowest score was recorded for digital content creation (DCC: 0.843). Relative to the other areas, members of this cluster were also comparatively weaker in information and data literacy (IDL), which is necessary, like the other areas, for purposes such as searching for data, information or digital content in online environments (see the questionnaire).

Demographic characteristics and academic background of the teacher-trainees

Table 2 shows how the cluster groups differ in terms of gender, type of bachelor's degree (Science or Letters/Arts) and age. To deepen our understanding, we not only compared the socio-demographic features of the three clusters on these variables but also applied Chi-square tests to detect any differences. The results revealed statistically significant differences in the distribution of the respondents' gender, type of bachelor's degree and age across the clusters.

In terms of gender, the proportion of female teacher-trainees was large in Cluster 3 (38.14%), while male trainees made up the larger share in Clusters 1 and 2 (39.02% and 46.34%, respectively); Cluster 1 was nevertheless fairly balanced (39.02% male and 38.14% female). Regarding the type of bachelor's degree, holders of degrees in scientific fields (Mathematics, Physics, Chemistry, Earth and Life Sciences, etc.) formed the majority of respondents, as the corresponding part of the table shows: 136 of the 291 teacher-trainees majored in Science. For the last variable (age), respondents aged 21 to 25 were the majority in Cluster 1 (63%), the 26-30 and 31-35 categories dominated Cluster 2 (44% for both), and in Cluster 3 those aged 41 to 45 were the majority, at 58%.
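To make the analysis pipeline described above concrete, the following is a minimal sketch of clustering standardised dimension scores with k-means and checking group differences with a one-way ANOVA. The study does not specify its tooling, so a Python/scikit-learn workflow is assumed here, and the column and file names are hypothetical placeholders rather than the study's actual data.

```python
# Minimal sketch: k-means on standardised digital-competence scores + one-way ANOVA.
# Column names and the input file are hypothetical; they do not come from the study's data set.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from scipy.stats import f_oneway

AREAS = ["IDL", "CC", "DCC", "SAF", "PS"]          # the five DigComp areas
df = pd.read_csv("digcomp_responses.csv")          # one row per respondent, one column per area

# Standardise the area scores so cluster-centre means are comparable (z-scores).
z = StandardScaler().fit_transform(df[AREAS])

# Partition the respondents into three clusters, as in the study.
km = KMeans(n_clusters=3, n_init=10, random_state=0)
df["cluster"] = km.fit_predict(z)

# Cluster-centre means per area (what a bar graph of cluster centres would display).
centres = pd.DataFrame(km.cluster_centers_, columns=AREAS)
print(centres.round(3))

# One-way ANOVA per area to check that the clusters differ significantly.
for area in AREAS:
    groups = [g[area].values for _, g in df.groupby("cluster")]
    f_stat, p_val = f_oneway(*groups)
    print(f"{area}: F = {f_stat:.2f}, p = {p_val:.4f}")
```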
Discussion and conclusions

As stated above, we aimed to find out how the pre-service teachers perceived their digital competence and whether gender, branch of study and age had any impact on their perceptions. The results indicate that the pre-service teachers' responses were lower for the items of digital content creation (DCC) and problem-solving (PS), and higher for the items of information and data literacy (IDL), communication and collaboration (CC), and safety (SAF).

The fact that most pre-service teachers use digital technologies daily may explain why they felt more competent in information and data literacy, communication and collaboration, and safety. Related research has likewise found that pre-service teachers consider their level of competence in digital content creation, safety and problem-solving to be still low.

Compared with other digital competence knowledge and skills, the teacher-trainees' knowledge and skills scored lower on developing content in simple forms with digital technologies and on solving technical problems encountered when using digital media and devices. This finding might be attributed to the predominance of theoretical knowledge in teacher-training programmes, at the expense of real practice in content development and technical troubleshooting.

These inferences agree with what some researchers have argued, notably Gutiérrez-Porlán and Serrano-Sánchez (2016): teachers of this kind are competent enough in some informatics-related techniques, namely searching, screening and assessment, storage and organisation, while they are less competent in digital content creation, content integration, copyright and licensing (Napal-Fraile et al., 2018). Their competences were also good both in protecting themselves against device-originated threats and in their awareness of the physical, psychological and environmental effects of digital technologies. The results obtained so far are therefore consistent with the literature.

The questionnaires also made it clear that gender had some bearing on the pre-service teachers' digital competence. The male pre-service teachers appeared more competent than their female counterparts in information and data literacy, digital content creation, safety and problem-solving. Simply put, the males outdid the females not only in data- and content-related information, in identifying and accessing information and in data literacy, but also in modifying ready-made content and developing content in simple forms within the field of digital content creation.
In the area of online safety as well, the male trainees did better than the females in protecting their digital devices and digital content by taking safety and privacy measures. The males' higher competence was even more evident in problem-solving: the male pre-service teachers outperformed the female ones in using different digital technologies to create innovative solutions and in diagnosing and overcoming the obstacles they came across while using technical devices. This may be related to their continuous development of their digital competence, as they were more interested in using digital technologies.

The results achieved at this stage are empirically sound and in accord with what has been reported in the literature on differences between male and female pre-service teachers. For example, Keskin and Yazar (2015) found that male teachers were more highly qualified in basic computer use and in gathering information in digital media. Their conclusion was echoed by Esteve-Mon et al. (2020), who stated that female pre-service teachers were less competent than their male colleagues when it came to sorting out technical problems and programming. It is also clear from the literature that this inference has been confirmed by many researchers, among them Casillas-Martín et al. (2019) and Guillén-Gámez et al. (2020), who concluded that male pre-service teachers generally have higher digital competence than their female colleagues.

Another difference was identified between the pre-service teachers specialised in Science and those from the other branch (Letters/Arts). A main result of this paper is that the Science students were more digitally competent in nearly all areas than their colleagues, male or female, from Letters/Arts. This was confirmed in information and data literacy, where they outperformed their Letters/Arts colleagues, especially in using search strategies to access information, data and digital content. In the area of communication and collaboration they were also clearly better able to use digital technologies for online collaborative work than their Letters/Arts colleagues.
The higher competence of the Science teacher-trainees (those holding Bachelors of Science) was also very clear in the areas of digital content creation, safety and problem-solving. In the first area (digital content creation), they were better at using digital technologies to develop simple content in different forms, to modify ready-made content and to create digital content. In the second, they were more aware than their Letters/Arts fellows of the footprint they leave while browsing, of how to search in virtual environments when creating a digital identity, and of the potential online threats they may face and how to tackle them. In the third, they again outperformed them in recognising and locating the causes of, and solving, the problems that can crop up when using digital media and devices; indeed, they sorted out such problems innovatively and skilfully using different digital technologies. Their stronger performance might be attributed to the varied courses on digital competence included in the university curricula followed by Science students. In the literature, Çebi and Reisoğlu (2019), for instance, reported that Science pre-service teachers were far more competent than their colleagues from other branches in the lower-scoring areas of digital competence, so the results obtained here are broadly in line with previous work.

For the last variable, age, important differences were found in all areas of digital competence and between the three clusters. The digital competence of Cluster 3 was very weak, and this may be due to its members' age: they belonged to the oldest age brackets and may not have studied computer studies during their school or university years. Cluster 2, on the other hand, achieved higher digital competence scores, which may be attributed to its members being younger and to the fact that they took a number of Computer Studies courses from primary school to university. Much the same can be said of Cluster 1; Cluster 2, however, remained the strongest and most balanced of the three.

It is worth noting that for Cluster 2 the effect size was large in two areas, digital content creation and problem-solving. The effect originated from the items related to creating and producing digital content, developing content in varied formats using digital technologies, and modifying ready-made content.

In short, the impact of the variable "age" was uneven: it was strongest in the fields related to problem-solving, to identifying the causes of and solutions to the problems met while using digital media and devices, to using digital technologies to devise innovative solutions, and to boosting one's digital competence by keeping up with the latest developments in the field.

All in all, the results of this study are in tune with those obtained by a number of researchers who investigated similar questions in the literature. Among them, Napal-Fraile et al.
(2018) concluded in their research that master's-level pre-service teachers believed they were not sufficiently competent in developing digital content and in combining different contents, while Instefjord and Munthe (2017) noted that pre-service teachers' digital competence in developing digital content was expected to improve in the internship schools. In 2016, Røkenes and Krumsvik suggested that pre-service teachers should take courses on digital content creation to develop and hone their knowledge and skills in these items and others. Taking these and other studies into account, together with our own results, it is clear that the pre-service teachers badly need training in developing digital content in particular and in other computer-related techniques in general.

For comparison, the results of some previous works and those achieved in this paper are tabulated in the following chart to show how these works complement each other. Owing to the dearth of research on this topic in Morocco, we restricted the comparison to the studies carried out by M. Benali et al.

Limitations and suggestions

Like any study, ours has a number of limitations, among them: the limited number of participants (N = 291); the use of quantitative data only, without qualitative data; the possibility of under-reporting by the subjects of the study; and the usual problems of online self-assessment, including respondents over- or underestimating their real digital competence. In addition, because of the data collected through the questionnaires and the techniques and models used, the results were evaluated through item-level analyses, which is another limitation of our survey. It would therefore be useful to resort to measurement studies so that the studied areas of digital competence can be covered in future work; research based on cause-effect relationships could then be conducted once generalisable results on digital competence are available.

Further, the results of this study make it clear that the pre-service teachers need to further develop their competences in digital content creation (DCC) and problem-solving (PS) in particular, and in the other areas in general. In line with this, the knowledge and skills that especially need to be developed in training programmes devoted to digital competence are: developing content in simple forms using digital technologies, and sorting out technical problems when using digital media and devices.

The following aspects should also be taken into consideration: specifying the information and data needed to access digital content; making changes to ready-made content; implementing measures to protect digital devices and content; taking safety- and privacy-related measures in online environments; identifying causes and creating solutions; and using various digital technologies to devise innovative solutions to the digital competence concerns of current and future teacher-trainees.
In a nutshell, the training courses currently given to pre-service teachers in the training centres should make the development of digital competence a core part of the curriculum, with the aim of helping trainees develop their competences in:

1. using digital technologies for cooperative work and for developing simple forms of content;
2. using information-search strategies to access information, data and digital content in online environments;
3. creating digital content by making changes to ready-made content;
4. being conscious of the digital footprints they leave when browsing, so as to protect themselves from potential dangers;
5. paying ample attention when creating a digital identity or profile in online environments;
6. knowing how to deal with the online threats that may crop up at any moment;
7. finding the causes of the technical problems they meet when using digital media and devices, and possible solutions to them;
8. creating solutions in an innovative fashion by using different digital technologies.

Moreover, the teacher-training programmes of the different branches, specialties and cycles would be richer and more useful if they included practical courses and activities to develop the knowledge and hone the skills underlying pre-service teachers' digital competence. This can be done, for example, by instructing them, in teaching-practice courses, to devise appropriate tasks that develop their technology-enhanced teaching competence. In this way, pre-service teachers are likely to become competent enough to integrate technology in education as a means for a more comprehensive transformation of pedagogical-didactic practices, so that their teaching is effective and efficient, thus guaranteeing the construction of pedagogical knowledge that is useful for practice and for the improvement of students' learning.

Table 1. Mean scores, standard deviations and correlations.

Table 2. Gender, type of bachelor's degree (Science or Letters/Arts) and age.
Comparing performance abstractions for collective adaptive systems

Non-functional properties of collective adaptive systems (CAS) are of paramount relevance in practically any application. This paper compares two recently proposed approaches to quantitative modelling that exploit different system abstractions: the first is based on generalised stochastic Petri nets, and the second is based on queueing networks. Through a case study involving autonomous robots, we analyse and discuss the relative merits of the approaches. This is done by considering three scenarios which differ on the architecture used to coordinate the distributed components. Our experimental results show high accuracy when comparing model-based performance analysis results derived from the two different quantitative abstractions for CAS.

Introduction

The past few years have witnessed the emergence of collective adaptive systems (CAS). These systems crop up in many application domains, spanning critical systems, smart cities, systems assisting humans during their working or daily life activities, etc. CAS consist of distributed computational agents acting in a cyber-physical context. Typically, these agents are replicated (namely, they exhibit the same behaviour). However, there are distinguishing features telling CAS apart from traditional distributed systems: on the one hand, agents must be autonomous and adaptive, while, on the other hand, they collectively contribute to reach the overall goal of the system (a.k.a. emergent behaviour). Remarkably, reconciling autonomous and adaptive behaviour with the overall goal of CAS is challenging. In fact, the emergent behaviour of CAS is the resultant of the contributions of each agent in the system. However, agents are autonomous and try to react to changes in their cyber-physical context, which may not be uniform. A paradigmatic example is the use of artificial, autonomous agents in rescue contexts that may put operators' lives at stake [4]. An agent should autonomously and quickly adapt its behaviour to changes triggering danger situations in a nearby area. This behaviour should be driven by the changes occurring in the components' operational environments as well as the changes in the local computational state of each component, collectively taken. Also, the global behaviour of CAS should emerge from the local behaviour of its components.

Let us elucidate the points above by considering the coordination of a number of robots patrolling some premises to make sure that aid is promptly given to human operators in case of accidents. A plausible local behaviour of each robot can be: (1) to identify accidents, (2) to assess the level of gravity of the situation (i.e. to choose an appropriate course of action), (3) to alert the rescue centre and nearby robots (e.g. divert traffic to let rescue vehicles reach the location of the accident more quickly), and (4) to ascertain how to respond to alerts from other robots (e.g. if already involved in one accident or on a low battery, a robot may simply forward the alert to other nearby robots). Note that the robots' behaviour depends on the physical environment (tasks (1) to (3)) as well as their local computational state (task (4)).
A possible expected global behaviour is that robots try to maximise the patrolled area while trying to avoid remaining isolated and to minimise the battery consumption.It is worth remarking that the global behaviour of CAS is not typically formalised explicitly.As in the robots scenario above, the global behaviour is informally stated and it is expected to emerge from the runtime behaviour of components.For instance, when designing the algorithm for the roaming of robots, one could assume that a robot will not move towards an area where there are already a certain number of robots. This paper applies behavioural specifications to the quantitative analysis of CAS.In fact, it is often the case that the specification of the global behaviour of CAS involves nonfunctional properties.In the example above, for instance, one would like to guarantee that there is always a minimum number of robots patrolling certain areas.Using a simple, yet representative, robot scenario, we apply two different approaches to the performance analysis of CAS.Besides showing how to use behavioural specifications to analyse non-functional properties of CAS (emergent) behaviour, this exercise is instrumental for our contribution, which is a study of the relation between two complementary approaches to the performance analysis of CAS recently proposed respectively in [31] and in [18].The quantitative analysis in [31] is based on generalised stochastic Petri nets, while the one in [18] relies on queuing networks.These approaches differ also on the methodologies for the quantitative modelling they support.The approach in [31] requires that the designer must directly come up with a performance model (rendered as generalised stochastic Petri nets).Instead, in the latter approach [18], the designer does not have to directly develop the queueing network for the quantitative analysis, because it is indeed 'compiled' from the behavioural specification of the CAS.In this sense, the former approach is a model-based methodology, while the latter is a language-based methodology. Through the paper we will highlight the main differences between the two methodologies, compare them and discuss their relative merits.More precisely, the paper uses a robot scenario to address the following two research questions: RQ1 To what extent the approaches in [18] and in [31] support performance-aware design of CAS? RQ2 How do the features of the approaches in [18] and in [31] compare? Methodologically, we identify three scenarios which differ among each other in the way components of the system are coordinated to achieve their goals.Each scenario corresponds to a different architecture and it is designed to capture some realistic case study.Then, the quantitative analysis of each scenario is performed using the two approaches.Finally, we analyse the results and draw our conclusions.As we will see, our analysis suggests a hybrid combination hinging on both approaches.Outline Sect. 2 describes the scenario used in the rest of the paper.We will consider three different architectures (i.e.independent, collaborative and centralised) for this scenario.Section 3 provides the models based on the specification language in [18] for the three architectures.Section 4 shows the performance analysis based on the proposed models of Sect. 3. The comparison between the approach in [31] and the one illustrated in Sect. 3 is discussed in Sect. 5. 
Final comments, related and future work are in Sect. 6. This paper is a revised version of [28]. The main difference with respect to [28] is that here we extend our analysis with a new scenario, dubbed centralised. Also, we significantly revised Sect. 1 to better motivate our work, and Sect. 2 to improve the presentation of our case study as well as to describe the new centralised scenario. Besides some minor changes to its original, Sect. 3 contains a completely new part (Sect. 3.2). A new format of the messages is introduced to simplify the definition of the predicate ρ_d: the position of the sender is now part of the sent message, while in [28] it was obtained through the function pos. The presentation of Sect. 4 has been improved; also, this section has been extended with a new part on the analysis of the centralised architecture (cf. Sect. 4.3). Sections 5 and 6 have also been revised. We extended the former to take into account the analysis of the centralised architecture and the latter to compare our work with other approaches that we could not include in [28].

A robot scenario

Throughout the paper, we will rely on a scenario where robots have to transport some equipment necessary in an emergency from an initial zone to a target location. There are a few paths that robots can take to accomplish their mission. We consider the following options:

• Robots can take a straightforward path.
• Robots can attempt a shorter path.

The first option is always viable, but it is slower than the other. The second option requires robots to go through a sequence of doors that can randomly switch between the states 'open' and 'closed'; for simplicity, we will consider cases with two doors that can be open or closed. In all the considered architectures, we design robots so that they take the longer route as soon as they find a door closed. When this happens at the second door, it will take more time for the robots to reach the destination than if the alternative route had been taken at the start of the journey. After the delivery, robots return to the initial zone trying to follow the reverse path, and the same constraints apply. A high-level view of the considered scenario is depicted in Fig. 1 (borrowed from [31]): two doors separate the robots from their destination.
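To make the door policy concrete, here is a small Monte Carlo sketch of a single forward journey under the behaviour just described; the travel times and door-open probability are illustrative placeholders, not the parameters used later in the paper or in [31].

```python
# Monte Carlo sketch of one forward journey under the "take the long route on the
# first closed door" policy. Times and probability below are assumed placeholders.
import random

P_OPEN = 0.5         # probability that a door is open (assumed)
T_LEG = 1.0          # time to reach the first door / travel between doors (assumed)
T_ALT_FROM_D1 = 4.0  # long route taken after finding door 1 closed (assumed)
T_ALT_FROM_D2 = 5.0  # long route taken after finding door 2 closed (assumed)
T_FINAL = 1.0        # last leg after passing both doors (assumed)

def forward_journey() -> float:
    """Robots head for the short path and fall back to the long route
    as soon as they find a door closed."""
    t = T_LEG                          # reach the first door
    if random.random() > P_OPEN:       # door 1 closed
        return t + T_ALT_FROM_D1
    t += T_LEG                         # pass door 1, reach the second door
    if random.random() > P_OPEN:       # door 2 closed: worst case
        return t + T_ALT_FROM_D2
    return t + T_FINAL                 # both doors open: fastest route

runs = [forward_journey() for _ in range(100_000)]
print(f"estimated mean forward time: {sum(runs) / len(runs):.3f}")
```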
A clear requirement in this scenario is that the equipment is delivered as soon as possible to tackle the emergency. Designing an optimal, or at least acceptable, strategy for the robots to satisfy such a requirement is however not trivial, even in our simple scenario. A bad strategy may lead most robots onto the slowest path, but a good strategy is hard to find because it depends on how robots react to environmental changes. Indeed, it is commonly accepted that the performance of a cyber-physical system varies with changes in the physical environment. Moreover, as experimentally measured in [31], dynamic changes in the cyber-physical space and architectural patterns impact the performance of cyber-physical systems. This type of analysis suggests that in this domain it is useful to factor in performance at design time.

Following [31]

The approach proposed in [31] hinges on Generalised Stochastic Petri Nets (GSPNs) [5] as suitable models of cyber-physical systems. In this paper, we apply such an approach by (i) adopting a different modelling language, hinging on behavioural specifications, and (ii) relying on queueing networks [20] for performance analysis. The modelling language used here has been advocated in [18] for specifying the global behaviour of CAS. As shown in [18], this modelling language has a natural connection with queueing networks, therefore enabling performance analysis of CAS. A key feature of our modelling language is that it abstracts away from the number of agents' instances, i.e.

• an arbitrary number of agents can embody the same role; in our case study, for instance, an unspecified number of devices impersonate the robot role;
• communicating partners address each other through attribute-based communication instead of explicit channels (as in channel-based communication) or their identities (as in the actor model).

Therefore our model allows us to specify complex multiparty scenarios regardless of the number of agents' instances.

A behavioural specification model

The behavioural specifications in [18] are inspired by AbC, a calculus of attribute-based communication [1]. Basically, attribute-based communication is an abstract mechanism for addressing the partners of communications. Unlike communication mechanisms based on channels or direct addressing of senders and receivers, attribute-based communication allows one to specify many-to-many communication among dynamically formed groups of senders and receivers. Informally, components expose domain-specific attributes used to address communication partners according to predicates on such attributes. For instance, the robots and the doors in the scenarios of Sects. 1 and 2 may expose an attribute recording their physical position. This attribute can be used to specify communications among "nearby" robots through a suitable predicate, so that the communication group is the set of robots satisfying the predicate. The attribute-based communication mechanism of AbC is rendered in [18] through interactions, which in their most general form are defined as in (1), where A and B are role names, ρ and ρ′ are logical formulae, e is a tuple of expressions, and e′ is a tuple of patterns, that is, expressions possibly including variables. The intuitive meaning of the interaction in (1) is "any agent, say A, satisfying ρ generates an expression e for any agents satisfying ρ′, dubbed B, provided that e matches the pattern e′" [18].
The conditions ρ and ρ′ predicate over components' attributes. The payload of an output is a tuple of values e to be matched by receivers against the (tuple of) patterns e′; when e and e′ match, the effect of the communication is that the variables in e′ are instantiated with the corresponding values in e. A send operation targets the components satisfying a given predicate on such attributes. Let us illustrate this again using our scenarios. If pos is the position attribute exposed by robots, we can define a predicate ρ for receiving robots as abs(self.pos − pos) < 5mt. Hence, ρ is satisfied by a receiving robot less than five metres away from a sending robot (i.e. the difference between the position self.pos of the receiver and the position pos of the sender is less than five metres). A robot disregards a message if its position is such that it does not satisfy ρ. Role names A and B in (1) are pleonastic: they are used just for succinctness and may be omitted, writing the interaction with the predicates only.

Interactions are the basic elements of an algebra of protocols [37] featuring iteration as well as non-deterministic and parallel composition. For the sake of this paper, it is sufficient to think of a protocol as a regular expression in the alphabet of interactions of shape (1). Actually, we avoid technicalities by using the intuitive graphical presentation of this algebra given in [37]. We use gates to identify the control points of protocols:

• entry and exit points of loops are represented by loop gates,
• branching and merging points of a non-deterministic choice are represented by choice gates.

This notation will be further described in the rest of this section, which presents the architectures of our scenario in terms of the graphical notation sketched above.

Independent architecture

Figure 2 gives a possible model capturing the independent architecture described in Sect. 2 in the graphical notation of our specification language. The protocol behaviour is rendered by a loop whose body is delimited by the loop gates. The model of the body consists of the sequential composition of the behaviour for the forward and the backward journey of robots. Robots try to avoid the longest route. On their forward journey, robots attempt to use the path going through the first door and then the second; likewise, on their backward journey, they try to use the path going through the second and then the first door.

Interactions among doors and robots do not involve value passing; for instance, robots detect the status of the first door when they pattern match on the tuples ⟨D1, o⟩ and ⟨D1, c⟩ for open and closed doors, respectively (and likewise for the second door). Robots detect the status of a door according to the format of the messages they intercept. For instance, on its forward journey, a robot pattern matches either the tuple ⟨D1, o⟩ or the tuple ⟨D1, c⟩ from the first door. This choice is represented in Fig. 2 by the choice gate immediately below the topmost loop gate. If the robot receives a ⟨D1, c⟩ tuple from the first door, it continues its journey on the alternative route, after which it starts the backward journey. Otherwise, the robot approaches the second door and again goes through if ⟨D2, o⟩ is received; otherwise the robot takes the alternative route. The behaviour on the return journey is similar, depending on the order in which robots approach the doors. Let us now refine this model so as to take into account the distance of robots from doors.
Indeed, for simplicity, Fig. 2 uses role names D1, D2 and R; however, this is not very precise. Here we are interested in expressing that robots detect the status of a door only when they are "close enough" to it. Attribute-based communication can address this issue. To capture the behaviour described above, let us assume that robots and doors expose the attribute ID yielding their identity. Then we can define conditions of the form ρ_d(x), satisfied when the distance between self.pos and the position x is less than d, where d is a parameter of our model setting the maximal distance at which doors and robots communicate. We can then replace the interactions of Fig. 2 accordingly: the interaction (2) for the open status, and similarly the one for the closed status, states that the door with ID set to d1 emits a tuple with its identity and its status. These tuples are intercepted by components whose state satisfies ρ_d(x), where x is the variable instantiated with the position of the sender. Other components simply disregard those messages. Note that y, substituted with the value d1, is dummy here; it will be used later.

Collaborative architecture

The collaborative architecture can be obtained by simply extending the independent one with interactions among robots. A possible solution is given in Fig. 3, where the binary predicate ρ_{r,d}(z, z′) is defined so that it is satisfied by any component within a radius r of component z and at a distance greater than d from component z′. Note that r is another parameter of our model, setting the radius within which robots communicate with each other. For readability, Fig. 3 shows only the body of the loop, and there o1 and c1 shorten ⟨D1, p, o⟩ and ⟨D1, p, c⟩, respectively, where p is the position of D1 (and likewise for o2 and c2).

As in the independent architecture of Sect. 3.1, there is a forward and a backward phase. Remarkably, the modification of the model of Sect. 3.1 is pretty simple. The only difference is that, before continuing their journey on the alternative route, robots communicate the status of the door to nearby robots when they find it closed.

The fact that the adaptation is quite straightforward is due to the features offered by our modelling language. The attributes of components indeed allow us to reuse the condition ρ_d also for coordinating inter-robot interactions. There is the following crucial remark to be made. The behaviour of robots is to wait for three possible messages: the two sent by the door and one possibly coming from a robot which detected that the door was closed. As a consequence, there might be robots satisfying condition ρ_{r,d}(y, x) in Fig. 3, that is, robots which are not close enough to the door but have a nearby robot, say r, aware that the door is closed. These robots should therefore be ready to receive the communication from robot r. Our model accounts for this type of robots, but the graphical notation 'hides' this, since it makes explicit only two branches at the choice gates. As we will see in Sect. 4, this is a key observation for our performance analysis.

Centralised architecture

In this architecture, a coordinator C maintains information about the status of doors D1 and D2. The status is communicated to robots approaching a door. Moreover, the coordinator updates its record of a door's status when a robot finds the door closed.
The protocol is modelled by the diagram in Fig. 4 where, for the sake of space, we report only the body of the forward journey. When a robot is within a distance d of a door, it receives from C the status of that door (messages ⟨D1, p1, o⟩ and ⟨D1, p1, c⟩). For instance, the topmost interactions in Fig. 4 describe the communication between the coordinator C and the robots R approaching door D1. If D1 is closed, C sends ⟨D1, p1, c⟩, which robots R pattern match with ⟨y, x, c⟩ when they are within a distance d from D1. In this way, robots R obtain in the variables y and x, respectively, the identity of the door and its position. If the door is closed, R continues on the longest route; otherwise, R continues towards the door.

When R is within a distance d′ < d of the door, it can receive the actual status of the door, communicated by the door itself. As before, R decides which route to take depending on such status; however, when approaching a closed door, R updates C, informing it that the door was found closed.

Quantitative analysis

In [18], we relate our modelling language to Queueing Networks (QNs) [20], a widely used mathematical model to study waiting lines of systems represented as a network of queues [12,13,16]. Customers and service centres are the two main elements of a QN. Customers represent jobs, requests, or transactions that need to be handled by the system. Service centres are resources that process the customers of the system. If a service centre is busy (i.e. it is processing a customer), other jobs need to wait in a queue for their turn to be served. Other QN elements are routers and delay stations, which allow forwarding customers to a specific centre and modelling processing lags that do not require system resources, respectively. In our previous work [18], we provide rules for the automatic generation of QN models from behavioural specifications. The main idea is to transform (i) an interaction into a service centre and (ii) a non-deterministic choice into a router. In the following, we apply this construction to our robot scenario and its architectures.

To assess their validity, the QN performance models derived from the behavioural specifications are compared with the GSPN models developed in our recent work [31]. A GSPN consists of places (represented as circles), tokens (represented as dots), transitions (represented as rectangles), and arcs that connect places to transitions and vice versa. A transition is enabled (to fire) when all its input places (a.k.a. its preset) contain enough tokens, i.e. at least as many tokens as specified by the weight of the arcs. When a transition fires, the number of tokens specified by the arc weight is removed from the input places and new tokens are generated in the output places (if any, a.k.a. the postset). Once enabled, a transition may fire immediately (immediate transitions, represented by a thin black rectangle) or after a stochastically distributed time (timed transitions, depicted as a thick white rectangle). In this paper, all timed transitions follow an exponential distribution.

In [31], we use GSPNs to show that the performance of cyber-physical systems is affected by architectural patterns and dynamic space changes. Here, we aim (i) to investigate the performance of CAS using QNs and (ii) to compare these results with those obtained by studying the same system with GSPNs. This analysis is carried out using JSIMgraph, i.e. the simulator of the Java Modelling Tools (JMT) [8].
The JSIMgraph simulator discards the transient system behaviour, namely behaviour for which the response of the system (i.e. its output metrics) changes over time. When all performance indices under analysis are within the desired confidence interval (a confidence interval is a range of values that is likely to contain a performance parameter with a given probability), JSIMgraph stops the simulation. We set the confidence interval to 99% for our experiments.

Independent architecture

We answer RQ1 by using the approaches in [18,31] to study the performance of CAS. In this section, we consider the robot scenario with the independent architecture. We obtain the QN in Fig. 5 by applying the rules defined in [18] to the forward and backward boxes of the behavioural specification in Fig. 2; for example, the first and second non-deterministic choices in the forward box of Fig. 2 are mapped to routers of the QN.

The GSPN depicted in Fig. 6 is obtained by adapting the performance model proposed in [31] for independent robots to the scenario considered in Sect. 2. A delay centre (i.e. Robots in Fig. 5) represents the number of robots in the system as well as the time that each robot waits on average for a new task to be assigned. Our experimental setting considers a fixed number of robots (N = 100 in Table 1). Initially, all robots are waiting to receive a task. After an exponentially distributed time, the transition wait fires: a task is assigned to one of the waiting robots, a token is removed from the input place of the wait transition, and a new token is generated in its output place. This output place represents robots moving towards the first door. Every time the transition D1 reach fires, a robot reaches the door. If the door is closed, the immediate transition D1 fail fires and the robot takes the alternative long route with the timed transition D1 alt.; otherwise, the robot goes through the first door with the immediate transition D1 succ. and continues its journey towards the second one with the timed transition D1 straight. The two performance models are parameterised with numerical values from the literature [40], as shown in Table 1.

Using the QN and GSPN models developed for the scenario under analysis, we can answer RQ2. Specifically, we compare the response time estimated by both models when the probability that each door is open varies. We define the response time as the time taken by each robot to complete the assigned task and return to the initial zone. The results of this analysis are plotted in the left histogram of Fig. 7 with their 99% confidence intervals. In this figure, the extreme cases of 0.01 and 0.99 are reported instead of probabilities 0 and 1, since the latter are not probabilistic by definition, i.e. such values imply that doors are always closed or always open. The experimental results show high agreement in the performance predictions: the QN derived from the behavioural specification and the GSPN predict a comparable response time independently of the considered probability. If the probability of doors being open is high, robots are likely to take the fastest route and the response time is minimal. The longest response time is observed when 0.2 ≤ Pr(Door is Open) ≤ 0.3, i.e. when robots may find one door open and the other closed. To quantify the discrepancy between the two models we use the mean absolute percentage error, MAPE = |R_GSPN − R_QN| / R_GSPN × 100%, where R_GSPN and R_QN are the response times estimated using the GSPN and the QN, respectively. The MAPE is 1.18% on average and always less than 4%. This is an excellent result when estimating the system response time [35].

Collaborative architecture

Here, we answer RQ1 and RQ2 considering the robot scenario of Sect. 2 deployed with a collaborative architecture. Applying the rules in [18] to the behavioural specification of Fig. 3 yields the QN depicted in Fig. 8.
Now, routers can route requests to three different branches. Indeed, robots may go through an open door, get stuck in front of a closed door, or receive a message from a peer warning them that the next door is closed. The latter case is represented by the D1 msg. and D2 msg. service centres for the first and second doors, respectively. When robots get a warning message from a peer, they can immediately take the alternative route without spending time approaching the door and checking its status. The warning message needs to be issued by a robot that finds a closed door after having reached it. This is modelled by the D1 send and D2 send service centres positioned after D1 closed and D2 closed, respectively.

Performance predictions obtained using the QN in Fig. 8 are compared to those observed by modelling the scenario under analysis with the GSPN described in [31] and depicted in Fig. 9. These performance models are parameterised with the numerical values reported in Table 1, except for the probabilities used in the QN routers (i.e. D1 status and D2 status). This is necessary for a fair comparison of the QN and GSPN performance predictions. Indeed, the probability of a robot receiving a message from a peer is conditioned by other system attributes (e.g. the door status, as well as the number, position and velocity of robots). While the GSPN keeps track of dependencies among input parameters via the accurate and complex modelling of the whole environment, the QN performance model leverages only routing probabilities and the time taken to complete actions. Hence, we first analyse the GSPN model of the collaborative architecture given in Fig. 9. This analysis allows us to infer the probabilities for robots to receive a message from a peer under given system circumstances, so as to properly parameterise the QN model.

Similarly to the independent architecture, we estimate the response time of the collaborative scenario against the probability of doors being open, using both the QN and GSPN models. Hence, we assess the quality of the QN predictions by comparing the response time to the one estimated using the GSPN model. The response time predicted by both performance models is plotted in Fig. 10 (left histogram). The QN, parameterised as previously described, generates predictions that agree with those from the GSPN. The response time of collaborative systems is generally shorter than the one observed with an independent architecture, since robots share knowledge about the environment. Note that such shared knowledge might also negatively affect the performance of a CAS. This is the case when doors are open with a high probability, i.e. Pr(Door is Open) ≥ 0.9. Indeed, a robot that gets stuck behind a closed door propagates its finding, making other agents take the alternative route even if the probability that the door will soon turn open again is high.

The maximum MAPE observed with the collaborative architecture is smaller (i.e. less than 1%, see the right histogram of Fig. 10) than the one of the independent scenario. Similarly, the average MAPE (i.e. 0.33%) also improves, thanks to the probabilities directly derived from the GSPN and plugged into the QN routers. The probability value for which the maximum MAPE is observed varies across the two architectures, i.e. Pr(Door is Open) = 0.8 in the independent case and Pr(Door is Open) = 0.2 in the collaborative one. Different factors can cause this behaviour, e.g. the intrinsic stochasticity of the system as well as the different types of architecture considered.
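As an aside, the error metric used in these comparisons is straightforward to compute; the sketch below uses made-up response times and probabilities purely for illustration, not the values behind the histograms discussed in this section.

```python
# Minimal sketch of the MAPE comparison between GSPN and QN response times.
# The response-time values below are illustrative placeholders, not measured data.

def mape(r_gspn: float, r_qn: float) -> float:
    """Absolute percentage error of the QN estimate with respect to the GSPN one."""
    return abs(r_gspn - r_qn) / r_gspn * 100.0

# One (hypothetical) pair of response-time estimates per door-open probability.
estimates = {
    0.01: (42.0, 41.5),
    0.20: (45.3, 45.0),
    0.50: (38.1, 38.4),
    0.99: (20.2, 20.1),
}

errors = {p: mape(g, q) for p, (g, q) in estimates.items()}
for p, e in errors.items():
    print(f"Pr(Door is Open) = {p:.2f}: MAPE = {e:.2f}%")
print(f"average MAPE: {sum(errors.values()) / len(errors):.2f}%")
```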
Centralised architecture

Here, the approach presented in [18] is used to model and study the performance of a CAS deployed with the centralised architecture described in Sect. 3.3. The application of the rules listed in [18] to the behavioural specification depicted in Fig. 4 yields the QN shown in Fig. 11. In order to model the behaviour of robots, which depends on the status of the doors communicated by the coordinator, the QN has two extra routers, D1 coord and D2 coord. If the coordinator communicates that the next door is closed (i.e. D1 closed (coord)), the robot takes the alternative, longer path that allows it to reach the destination; if instead the door is reported open, the robot continues towards it and, if the door is indeed open, passes through; otherwise, the robot takes the alternative route and communicates its findings to the coordinator (i.e. D1 send). The same process is repeated for every door (i.e. D1 and D2) in both directions (i.e. Forward and Backward).

The performance results obtained by simulating the QN in Fig. 11 are compared to those returned by the GSPN model proposed in [31] and illustrated in Fig. 12. The GSPN and QN models are parameterised as in Table 1. The probabilities used in the QN routers are conditioned on other attributes (e.g., the probability that each door is open and the number of robots in the system) and require further attention to enable a fair comparison between the GSPN and the QN. Specifically, we parameterise the QN routers with the probabilities observed by running the GSPN model.

The average time taken by a robot to complete its mission and go back to the starting point (i.e. the system response time) is estimated with the two approaches and depicted in Fig. 13 (left). Also in this architecture, the QN and the GSPN agree on the trend of the system response time, i.e., flat up to Pr(Door is Open) = 0.9, after which it starts to decrease. Figure 13 also depicts the MAPE (right) and highlights the accuracy of the QN model with respect to the GSPN one. The small error (i.e. 1.88% on average and less than 3% for all considered probabilities) confirms the ability of the QN to model the same scenarios analysed with the GSPN even when a centralised architecture is considered.

Discussion

Our analysis confirms that both QNs and GSPNs are suitable for the performance evaluation of CAS. Our experience shows that there is a trade-off between simplicity and expressiveness in the use of these models (RQ1). The two modelling approaches offer different advantages, which we discuss hereafter. A main advantage of QNs is that they are conceptually simple: performance analysis is based on the probabilities assigned to observable events (e.g. door open). Moreover, QNs can be automatically derived from our behavioural specification of the system. A key observation is that our behavioural specification models introduce a clear separation of concerns: the modelling of the system is orthogonal to its performance analysis, which is done by using the derived QNs; hence one just needs to fix the probabilities for the observable events. However, this comes with a cost: it is not usually easy to determine such probabilities.
Instead, modelling with GSPNs does not require one to directly specify probabilities, a clear advantage over QNs. Indeed, with GSPNs one simply has to select a suitable time distribution: this is therefore a more reliable approach compared to QNs. Besides, GSPNs allow events to be controlled by the same process; for instance, if a door is open in one direction, it must also be open in the other direction; this cannot be modelled using probabilities only. Overall, GSPNs require more expertise in building the performance model, but their parameterisation includes timing values only, hence they may also be used for monitoring (see, e.g. [29]).

However, GSPNs are more "rigid" than QNs because certain characteristics of the system are hardwired in the model itself. For instance, changing the number of doors robots have to traverse would require a more complex performance model. Moreover, this kind of generalisation makes the size of the model much larger, which can severely affect the performance of the analysis as the state space grows exponentially [5]. This is not the case for QNs derived from our behavioural specification, because they permit one to easily abstract away from the number of system components. On the other hand, GSPNs make it easy to model other, more sophisticated coordination policies. For instance, in the collaborative architecture, it is easy to let the robot that first notices the closed door wait for all nearby robots to take the alternative route before continuing its journey. This is not simple to model with our behavioural specification language or with QNs.

An interesting outcome of our simulations is that the two different model-based performance predictions match: the error is never larger than 4%, denoting high agreement between the proposed performance abstractions (RQ2). Figure 14 plots the response time against the probability that doors are open, for the three architectures considered in this paper. These experimental results confirm some expectations for the analysed scenarios. In particular, the system response time is minimal when the probability that doors are open is high; this confirms the observations in [31]. Our experiments also highlight that the independent architecture performs worse than the others when the probability that doors are open is below ∼70%, whereas, as this probability increases (from ∼80% onwards), the independent architecture outperforms the others. The former is due to the fact that, in the collaborative and centralised architectures, robots can follow the alternative path more promptly, avoiding the penalty of attempting the quicker route and then taking the alternative path because of a closed door. The latter is explained by the lack of communication, which prevents unlucky agents (i.e. robots that find a closed door despite the small chances) from "propagating" their bad luck to their peers. Considering only the response time observed with the collaborative and centralised architectures, the experimental results show that the centralised architecture saves time when Pr(Door is Open) ≤ 0.4. The collaborative and centralised architectures show a similar response time if 0.5 ≤ Pr(Door is Open) ≤ 0.6, whereas the collaborative architecture outperforms the centralised one when Pr(Door is Open) ≥ 0.7. When the probability that doors are closed is high, the centralised architecture optimises the response time, since the controller has a global knowledge of the environment. As already discussed, this turns into a disadvantage if Pr(Door is Open) is large, i.e.
when the controller's knowledge gets old quickly. In this case, it is convenient to leverage the up-to-date information provided by robots that have just sensed the door status.

A problem common to both the QN and GSPN models presented here is that they are not suitable in experimental settings with a very high "density" of robots deployed in the system, namely when the parameter N in Table 1 is much higher than the physical dimension of the operating space. In this case, our QN and GSPN models do not take into account that robots can be significantly slowed down by the constrained physical space. A possible way to overcome this limitation is to profile the average speed of robots as the population size changes, and parameterise the model accordingly.

Conclusions, related & future work

This paper investigates the performance analysis of CAS through the lenses of three different formal notations: (i) an adapted version of a calculus of attribute-based communication (AbC), (ii) Queueing Networks (QNs), and (iii) Generalised Stochastic Petri Nets (GSPNs). We present a case study of autonomous robots for which three architectural scenarios have been elaborated. Experimental results demonstrate the usefulness of our modelling effort, which allows us to derive performance characteristics of interest. We compare QNs and GSPNs by exploiting the models of the three architectures and observe a high level of agreement on the obtained model-based performance predictions.

Behavioural Abstractions. Choreographic models have been applied to Cyber-Physical Systems [24,25], IoT [23] and robotics [27]. These papers focus on verification of correctness properties (e.g. deadlock freedom and session fidelity) and are not concerned with quantitative aspects or performance analysis. Some works in the literature exploit behavioural abstractions for cost analysis of message-passing systems. Cost-aware session types [11] are multiparty session types annotated with size types and local costs, and can estimate upper bounds on the execution time of the participants of a protocol. Temporal session types [14] extend session types with temporal primitives that can be exploited to reason about quantitative properties like the response time. A parallel line of research studies timed session types [7,9,10], that is, session types annotated with timed constraints in the style of timed automata [3]. They have been used for verification of timed distributed programmes by means of static type checking [9,10] and runtime monitoring [6,29]. Despite the presence of timed constraints, which makes timed session types appealing for performance analysis, they have never been applied in such a setting. Session types have been extended with probabilities [17] for verification of concurrent probabilistic programmes, which is potentially useful for CAS analysis. A common limitation of these approaches is that they do not easily permit defining the number of agent instances embodying a specific role in the system specification. Our behavioural model instead allows it, as explained in the final remark of Sect. 2; hence it is suitable for performance analysis, which indeed requires such system workload information.
Quantitative Abstractions. Rigorous engineering of collective adaptive systems calls for quantitative approaches, since there is a need to design and manage the coordination activities [15]. A recent survey on verification tools for CAS formation [19] outlines that current analysis techniques are still immature when it comes to dealing with possible changes in decision-making. This is the reason why quantitative approaches that keep track of behavioural alternatives and their impact on system performance, as we do in this paper, can be of high relevance for CAS. There exist approaches providing quantitative abstractions for CAS; for instance, Vandin et al. [38] make use of Ordinary Differential Equations (ODEs) to express large-scale systems that are analysed through bisimulations. Unlike our approach, the analysis presented in [38] has the limitation that it is not reliable when the number of agents is rather low, since ODEs provide better accuracy with a large population size. The probabilistic behaviour of agents is investigated by Loreti et al. [26], who also introduce the language CARMA along with a simulation environment to provide quantitative information on CAS. This interesting line of work currently has limited scalability. Performance properties of CAS are discussed by Viroli et al. [39] with the goal of selecting performance-based optimal configurations while preserving system functionalities. However, this methodology requires the re-deployment of the system, and it invalidates the possibility of switching to available alternatives at runtime. Pianini et al. [30] recently proposed a design pattern which partitions system devices into regions and enables internal coordination activities. This is analogous to our collaborative architecture, where robots interact with nearby peers.

There are some further approaches that can be considered complementary to ours. For instance, Lee et al. [21] present a language, based on the Architecture Analysis & Design Language (AADL), for drones that collaborate in package delivery; specifically, each drone needs to adapt to the motions of the other drones for collision avoidance. Töpfer et al. [36] rely on modelling abstractions that incorporate machine learning and optimisation heuristics to deal with uncertainty in the environment of CAS. The analysed scenario consists of workers going to a factory who may encounter delays and are replaced by standby workers. Similarly, our scenarios can benefit from the specification of uncertainties (e.g. failures of robots and other components [33]) and strategies to rescue items, as recently investigated in [32]. Lion et al. [22] present an operational specification of components as rewrite systems equipped with a Maude specification that is adopted to incrementally analyse the system design. The illustrative application includes two energy-sensitive robots roaming on a shared field, and results demonstrate that the introduced coordination prevents livelock behaviour. Summarising, to the best of our knowledge, our approach differs from the state of the art in the goal of automatically deriving quantitative abstractions from the behavioural specification of CAS. This way, we aim to simplify the derivation of performance indicators of interest and provide support for the rigorous engineering of CAS.
Future work. Our research agenda includes the investigation of more complex application scenarios, so as to assess the soundness and scalability of our model-based performance analysis. As a short-term research direction, we are interested in exploring the possibility of automatically deriving the structure of GSPNs from our behavioural specifications. We think that this could prove beneficial for overcoming some of the drawbacks of GSPNs while avoiding the need to determine probabilities. As a long-term research direction, we aim to explore the presence of dependencies among input parameters and their impact on the performance analysis of CAS. For instance, in the analysed scenarios, we can consider synchronised behaviour of doors, so that they change state in a coordinated way, or the effect of dynamic workloads on the system performance [2,34]. This has implications for the parameters of the scenarios; e.g. the probability for a robot to find the second door open (after it has crossed the first one) depends on its speed and on the time at which it went through the first door.

Acknowledgements The authors thank the anonymous reviewers for their valuable feedback, which helped to improve the quality of the paper. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Fig. 5 QN of the independent architecture (interactions in the forward box of Fig. 2 become, respectively, the D1 status and D2 status routers in the FORWARD box; interactions on the left branch of each choice become the D1 open and D2 open service centres; interactions on the right branch become the D1 closed and D2 closed service centres)
Fig. 6 GSPN of the independent architecture (places and transitions govern the door status, i.e. open or closed; for example, the places of the first door are D1 open and D1 closed, whereas its transitions are D1 opening and D1 closing; a door remains closed or open until the related transition fires and changes the door status)
Fig. 11 QN of the centralised architecture
Fig. 13 Centralised architecture: System response time (left) and MAPE (right) vs.
probability of door being open

Funding Open access funding provided by Gran Sasso Science Institute - GSSI within the CRUI-CARE Agreement. Research partly supported by the EU H2020 RISE programme under the Marie Skłodowska-Curie grant agreement No 778233. We acknowledge the partial support of MUR project PRIN 20228FT78M DREAM (modular software design to reduce uncertainty in ethics-based cyber-physical systems), MUR project PRIN 2017FTXR7S IT MATTERS (Methods and Tools for Trustworthy Smart Systems), MUR project PNRR VITALITY (ECS00000041) Spoke 2 ASTRA - Advanced Space Technologies and Research Alliance, and the MUR project PON REACT EU DM 1062/21.

Table 1 Numerical values used for GSPN and QN models of the independent, collaborative, and centralised architectures. Direction indicates Forward (F) or Backward (B). S D* open (QN) is obtained by summing S D* reach and S D* straight (GSPN). S D* closed (coord) (QN) is obtained by summing S D* ask and S D* skip (GSPN).

Fig. 7 Independent architecture: System response time (left) and MAPE (right) vs. probability of door being open
10,315.2
2023-11-02T00:00:00.000
[ "Computer Science", "Engineering" ]
Improved Succinate Production by Metabolic Engineering Succinate is a promising chemical which has wide applications and can be produced by a biological route. The history of biosuccinate production shows that the joint effort of different metabolic engineering approaches brings successful results. In order to enhance succinate production, multiple metabolic strategies have been pursued. In this review, different overproducers for succinate production, including natural succinate overproducers and metabolically engineered overproducers, are examined, and the metabolic engineering strategies and performances are discussed. Modification of the mechanism of substrate transportation, knocking out genes responsible for by-product accumulation, overexpression of the genes directly involved in the pathway, and improvement of internal NADH and ATP formation are some of the strategies applied. Combination of the appropriate genes from homologous and heterologous hosts, extension of the substrate range, and integrated production of succinate together with other high-value-added products are expected to achieve the desired objective of producing succinate from renewable resources economically and efficiently.

Introduction

Succinate and its desirable properties have been known for a long time. Succinate can be used as an important C4 building-block chemical, and its demand is sharply increasing, since new applications of this chemical compound are reported in many publications. Of particular interest is that it can be used for 1,4-butanediol synthesis and further as a monomer in a polycondensation reaction yielding biodegradable poly(butylene succinate). 1,4-Butanediol-based polymers have better properties and greater stability in comparison to polymers produced from 1,2-propanediol or ethylene glycol. Further, CO2 is assimilated during succinate biosynthesis, which can be considered an environmental advantage. The commercialization of the polymer produced from biobased succinate by some multinational corporations, such as BioAmber, BASF, Myriant, Mitsubishi, and DuPont, showed that the aim of biobased bulk chemicals is feasible [1][2][3][4].

Biotechnological processes are particularly attractive, since microorganisms usually utilize renewable feedstocks and produce only few toxic by-products. However, there are limitations to microbial production, such as limited yields, concentrations and productivities, difficulties in recovering the product from the broth, and the need to pretreat most raw substrates. These limitations can be significantly mitigated through the application of metabolic engineering. New strategies for improving succinate production by metabolic engineering are frequently reported. Metabolic engineering of microbial succinate production covers natural succinate overproducers (like Actinobacillus succinogenes, Anaerobiospirillum succiniciproducens, and Mannheimia succiniciproducens) as well as metabolically engineered overproducers like Escherichia coli, Corynebacterium glutamicum, and Saccharomyces cerevisiae. Genes from homologous and heterologous hosts were often used in combination to complete the pathway [1,2]. In this review, different overproducers for succinate production are examined, and the metabolic engineering strategies and performances are discussed. Finally, strategies for the successful commercialization of succinate production by improved metabolic engineering are proposed.
Succinate Formation Pathway

Besides being an intermediate of the tricarboxylic acid (TCA) cycle, succinate can also be a fermentation end product when sugar or glycerol is used as a carbon source. There are three pathways for succinate formation: the reductive branch of the TCA cycle, the glyoxylate pathway, and the oxidative TCA cycle [5][6][7][8][9][10].

Figure 1: Pathways for succinate formation. (a) The reductive branch of the TCA cycle. (b) The glyoxylate pathway, which operates as a cycle to convert 2 mol acetyl-CoA to 1 mol succinate. (c) The oxidative TCA cycle, which converts acetyl-CoA to citrate, isocitrate, and succinate, the latter subsequently converted to fumarate by succinate dehydrogenase. Under aerobic conditions, the production of succinate is not naturally possible; to realize succinate accumulation under aerobic conditions, inactivation of the sdhA gene to block the conversion of succinate to fumarate in the TCA cycle is necessary.

Under anaerobic conditions, succinate is the H-acceptor instead of oxygen, and therefore the reductive branch of the TCA cycle is used. Succinate accumulation proceeds from phosphoenolpyruvate (PEP) via intermediate compounds of the TCA reductive branch, including oxaloacetate (OAA), malate, and fumarate (Figure 1(a)). The pathway converts oxaloacetate to malate, fumarate, and then succinate, and requires 2 moles of NADH per mole of succinate produced [5][6][7][8]. The maximum possible succinate yield based solely on a carbon balance is 2 mol mol−1 glucose when all the succinate is formed via the anaerobic pathway. One major obstacle to obtaining a high succinate yield through the anaerobic pathway is NADH limitation: 1 mole of glucose can provide only 2 moles of NADH through the glycolytic pathway. Therefore, the molar yield of succinate is limited to 1 mol mol−1 glucose, assuming that all the carbon flux goes only through the anaerobic fermentative pathway.

Another potential biosynthetic route for succinate is the glyoxylate pathway, an anaplerotic reaction that replenishes the molecule pool of the TCA cycle. The glyoxylate cycle is essentially active under aerobic conditions upon adaptation to growth on acetate (Figure 1(b)). The glyoxylate pathway operates as a cycle to convert 2 mol acetyl-CoA to 1 mol succinate [9]. Since the conversion of glucose to succinate via the glyoxylate pathway generates NADH, this route alone is not sufficient to balance the electrons. However, during anaerobic culture and in the absence of an additional electron donor, an activated glyoxylate pathway will provide extra NADH to the anaerobic fermentative pathway, helping to achieve a higher succinate yield.

Succinate can also be formed from acetyl-CoA generated from pyruvate via the oxidative TCA cycle under aerobic conditions [10]. This pathway converts acetyl-CoA to citrate, isocitrate, and succinate, which is subsequently converted to fumarate by succinate dehydrogenase. Under aerobic conditions, the production of succinate is not naturally possible, since it is only an intermediate of the TCA cycle. To realize succinate accumulation under aerobic conditions, inactivation of the sdhA gene to block the conversion of succinate to fumarate in the TCA cycle is necessary (Figure 1(c)).
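The yield ceilings just quoted follow from simple bookkeeping. A minimal sketch (the molar masses are standard values; the NADH counts are those stated above):

```python
# Sketch of the yield arithmetic stated above. The NADH bookkeeping
# (2 NADH consumed per succinate via the reductive branch, 2 NADH
# supplied per glucose by glycolysis) follows the text; the rest is
# plain unit conversion.

M_GLUCOSE = 180.16    # g/mol
M_SUCCINATE = 118.09  # g/mol

def mass_yield(mol_per_mol_glucose: float) -> float:
    """Convert a molar yield (mol succinate / mol glucose) to g/g."""
    return mol_per_mol_glucose * M_SUCCINATE / M_GLUCOSE

# Carbon-balance ceiling: 2 succinate per glucose.
print(f"carbon-balance max: {mass_yield(2.0):.2f} g/g")    # ~1.31 g/g

# NADH-limited ceiling: glycolysis yields 2 NADH per glucose and the
# reductive branch needs 2 NADH per succinate, so at most 1 succinate
# per glucose without an extra electron source.
molar_ceiling = 2.0 / 2.0  # NADH available / NADH per succinate
print(f"NADH-limited max: {molar_ceiling:.1f} mol/mol "
      f"= {mass_yield(molar_ceiling):.2f} g/g")             # ~0.66 g/g
```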
Debottlenecking of the Succinate Pathway

The yield of succinate in anaerobic culture from sugar or other feedstocks is strongly determined by the NADH made available through the glycolytic route, which results in by-product accumulation [11][12][13]. By-product formation is caused by the redox balance from substrate to product. By-product accumulation results in substrate loss as well as the usual product inhibition; the undissociated forms of accumulated formate, acetate, and lactate are harmful for biomass formation and substrate consumption. Also, succinate itself harms the microorganisms in the same way as other weak organic acids. Since the dissociation constants of these weak organic acids and the resistance of microorganisms to inhibition by these acids differ, genetic engineering tools can be used to manipulate the metabolic pathways so that the target product pathway is strengthened or by-product pathways are selectively eliminated [14]. Deletion of one of the undesired metabolite pathways will divert carbon resources to other metabolites. However, in some cases, the desired product is not improved, due to unreasonable metabolic flux redistribution and the accumulation of new undesired metabolites. Furthermore, elimination of some by-products will break the original intracellular redox balance. Therefore, a reasonable design of the metabolic network is needed before genetic manipulation, so that the redox or ATP balance is maintained and the production of the desired product is enhanced.

Metabolic Engineering of the Succinate Producer

A. succinogenes, A. succiniciproducens, and M. succiniciproducens are well-known natural overproducers, and there have been some efforts to improve their yields. E. coli, C. glutamicum, and S. cerevisiae are not natural overproducers, and therefore completely engineered pathways were sought to confer the ability to form succinate. Succinate production using different bacterial species, in terms of performance and engineering strategies, is compared in Table 1.

The A. succinogenes mutant strain FZ-6, with pyruvate formate lyase and formate dehydrogenase deletions, did not show improved succinate production. Only when electrically reduced neutral red or hydrogen was fed as the electron donor could the mutant use fumarate alone for succinate production [34]. Major metabolic pathways in M. succiniciproducens resulting in acetate, formate, and lactate accumulation were successfully deleted by disrupting the ldhA, pflB, pta, and ackA genes. The modified strain LPK7 was developed to excrete 13.4 g L−1 succinate from 20 g L−1 glucose with little or no by-product accumulation. In fed-batch fermentation with occasional glucose feeding, M. succiniciproducens LPK7 produced 52.4 g L−1 succinate, giving a yield of 0.76 g g−1 glucose and a productivity of 1.8 g L−1 h−1 [15].

Escherichia coli. Due to the plentiful genetic tools available, fast cell growth, and simple culture medium, E. coli has become one of the most thoroughly studied systems for succinate production. Strategies in the metabolic engineering of E. coli can be classified into four main methods: improvement of substrate or product transportation, enhancement of pathways directly involved in succinate production, deletion of pathways involved in by-product accumulation, and their combinations (Figure 2). These methods have been studied in many reports, and some highly efficient succinate producers have been constructed [35][36][37][38].
Improvement of Substrate or Product Transportation

A fundamental change imposed on E. coli is the elimination of glucose transport by the phosphotransferase system (PTS). This modification addresses the limitation that glucose phosphorylation in E. coli is largely PEP dependent. If PEP is generated solely via the Embden-Meyerhof-Parnas pathway, this constraint imposes an artificial yield ceiling of 1 mol mol−1 glucose in succinate production. Although alternative routes for the generation of PEP exist, glucose phosphorylation is energetically more efficient if the PEP-dependent system is replaced with ATP-dependent phosphorylation. Furthermore, more PEP is then reserved for the succinate formation route. Some successfully constructed E. coli strains for succinate production, such as AFP111 and KJ060, rely mainly on glucokinase for glucose uptake, as confirmed by metabolic flux and enzymatic analyses [35,39].

Succinate export in E. coli is normally active under anaerobic conditions, and import only under aerobic conditions. The dicarboxylic acid transport system of E. coli was modified by Beauprez et al. to enhance the production of succinate. The engineering comprised the elimination of succinate uptake and the enhancement of succinate export. The gene responsible for succinate import, dctA, was knocked out, and the gene coding for succinate export, dcuC, was overexpressed under a constitutive artificial promoter. The combination of altered import (ΔdctA) and export (ΔFNR-pro37-dcuC) increased the specific production rate by about 55% and the yield by approximately 53% [40].

Enhancement of Pathways Directly Involved in Succinate Production

Overexpression of genes directly involved in the succinate production pathway, including PEP carboxylase, PEP carboxykinase, pyruvate carboxylase, and malic enzyme, has been reported in many publications. In a study by Millard et al., succinate production using E. coli JCL 1208 increased from 3 g L−1 to 10.7 g L−1 by overexpressing native PEP carboxylase [16].

Table 1: Comparison of succinate production using different bacterial species in terms of performance and engineering strategies.

However, overexpression of PEP carboxykinase did not affect succinate production. Due to the intimate role of PEP as a substrate for glucose transport by the phosphotransferase system in wild-type E. coli, the consequences of overexpressing PEP carboxylase are a decreased rate of glucose uptake and of organic acid excretion [41]. Another method is to improve the pyruvate-to-succinate synthesis route through the expression of pyruvate carboxylase, an enzyme which can convert pyruvate to oxaloacetate but is not present in E. coli. A wild-type E. coli strain MG1655 transformed with the vector pUC18-pyc, which contained the gene encoding Rhizobium etli pyruvate carboxylase, led to a succinate formation of 1.77 g L−1, corresponding to a 50% increase in succinate concentration over the parent strain. The increased succinate was due to decreased lactate formation, whose final concentration decreased from 2.33 g L−1 to 1.88 g L−1. The expression of pyruvate carboxylase had no effect on glucose uptake but decreased the rate of cell growth [17].

Deletion of Pathways Involved in By-Product Accumulation

Merely shifting carbon flux to the succinate route is not enough to control the formation of other undesired metabolites. As a consequence, genetically modified strains without the lactate- and formate-forming routes have been developed to improve succinate fermentation.
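Before turning to concrete strains, a minimal sketch of the PEP bookkeeping behind the PTS yield ceiling discussed above. The counts used (2 PEP per glucose from glycolysis, 1 PEP spent per glucose imported via PTS, 1 PEP carboxylation per succinate through the reductive branch) reflect standard PTS stoichiometry, consistent with the 1 mol mol−1 ceiling stated in the text; the function itself is purely illustrative.

```python
# Sketch: how the glucose-uptake system changes the PEP available for
# carboxylation toward succinate. Glycolysis yields 2 PEP per glucose;
# PTS uptake spends 1 PEP per glucose imported, while ATP-dependent
# glucokinase uptake spends none (it costs ATP instead).

def pep_for_succinate(uptake: str, glucose: float = 1.0) -> float:
    pep_from_glycolysis = 2.0 * glucose
    pep_spent_on_uptake = glucose if uptake == "PTS" else 0.0
    return pep_from_glycolysis - pep_spent_on_uptake

for system in ("PTS", "glucokinase"):
    pep = pep_for_succinate(system)
    # One PEP carboxylation feeds one succinate via the reductive
    # branch, so the PEP budget caps the molar yield (before the
    # NADH limit is even considered).
    print(f"{system}: {pep:.0f} mol PEP/mol glucose -> "
          f"ceiling {pep:.0f} mol succinate/mol glucose")
```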
The E. coli mutant LS1, which lacked only lactate dehydrogenase, showed no effect on anaerobic biomass formation. However, E. coli NZN111, deficient in both the pyruvate-formate lyase and lactate dehydrogenase genes, gave little biomass formation on glucose. NZN111 accumulated 0.18-0.26 g L−1 pyruvate before metabolism ceased, even when supplied with acetate for biosynthetic needs. However, when transformed with the mdh gene encoding the NAD+-dependent malic enzyme and cultured with a gradual transition from an aerobic to an anaerobic environment (by metabolically depleting the oxygen initially present in a sealed culture tube), E. coli NZN111 was able to consume all the glucose, and 12.8 g L−1 succinate was produced as one of the major metabolites [18]. Analogously, when the gene encoding the malic enzyme from Ascaris suum was transformed into NZN111, the succinate yield was 0.39 g g−1 and the productivity was 0.29 g L−1 h−1 [18].

Optimization of the Succinate Yield by a Combination of Gene Operations

Donnelly et al. screened a spontaneous chromosomal mutation in NZN111; this mutant, named AFP111, can grow on glucose in an anaerobic environment. A succinate yield of 0.7 g g−1 was obtained using AFP111 in anaerobic fermentations under a 5% H2-95% CO2 flow, and the molar ratio between succinate and acetate was 1.97 [19]. Furthermore, if AFP111 first grew under aerobic conditions for biomass formation and was then shifted to anaerobic fermentation with CO2 aeration (dual-phase fermentation), a higher succinate yield (0.96 g g−1) with a productivity of 1.21 g L−1 h−1 was obtained [20,21]. In Chatterjee et al.'s report, the difference between NZN111 and AFP111 was found to be the PTS. Because of the PTS mutation, AFP111 relies mainly on glucokinase for glucose uptake. Whether grown in an anaerobic or an aerobic environment, AFP111 exhibited markedly higher glucokinase activity than NZN111. Compared with the wild-type parent strain W1458, AFP111 also showed a lower glucose uptake [39]. According to Vemuri et al., the routes for PEP conversion to succinate also varied between NZN111 and AFP111. The key glyoxylate shunt enzyme, isocitrate lyase, was not present in either NZN111 or AFP111 grown under an anaerobic environment, but was detected after 8 h of aerobic culture, and NZN111 exhibited 4-fold higher isocitrate lyase activity than AFP111 [20]. Because the two strains have different modes of glucose uptake and different isocitrate lyase activity levels, the distribution of end products in the two strains differs. Further, they concluded that the maximum theoretical succinate yield based on the necessary redox balance is 1.12 g g−1 glucose (Table 2; the theoretical yields in anaerobic culture were calculated assuming that NADH and NAD are balanced as a result of central carbon metabolism, and those in aerobic culture assuming that oxygen is the H-acceptor). To obtain the maximal succinate yield, the molar ratio of the carbon flux from fumarate to succinate to the carbon flux from isocitrate to succinate must be 5.0. Insufficient carbon flux through the PEP-to-fumarate branch or elevated carbon flux through the glyoxylate shunt lowers the observed yield. Achieving the optimal ratio of fluxes in the two pathways involved in anaerobic succinate accumulation requires a concomitant balance in the activities of the participating enzymes, which have been expressed during aerobic growth.
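For orientation, the 1.12 g g−1 figure can be converted to molar terms and used to situate the yields reported in this review. A small sketch (pure unit arithmetic on numbers quoted in the text; KJ060 and KJ134 are discussed further below):

```python
# Sketch: the 1.12 g/g redox-balanced maximum in molar terms, and the
# reported strain yields expressed as a fraction of it. All yields are
# values quoted in this review; the conversion is unit arithmetic.

M_GLUCOSE, M_SUCCINATE = 180.16, 118.09
THEORETICAL_GG = 1.12  # g succinate / g glucose (Vemuri et al.)

# Equivalent molar yield: ~1.71 mol/mol, matching the ~1.72 mol/mol
# figure quoted later for engineered E. coli.
print(f"{THEORETICAL_GG * M_GLUCOSE / M_SUCCINATE:.2f} mol/mol")

reported = {"AFP111 (dual-phase)": 0.96, "KJ060": 0.92, "KJ134": 1.00}
for strain, y in reported.items():
    print(f"{strain}: {y:.2f} g/g = "
          f"{100 * y / THEORETICAL_GG:.0f}% of the theoretical maximum")
```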
The global regulation of aerobic and anaerobic pathways is further complicated by the presence of different isozymes. Based on carbon flux analysis, this group transformed AFP111 with the pyc gene (encoding Rhizobium etli pyruvate carboxylase) to provide metabolic flexibility at the pyruvate node. A two-stage aeration strategy (first, aerobic fermentation at 52 h−1; then, when the dissolved oxygen concentration decreases to 90% of the initial concentration, a shift to anaerobic fermentation sparged with oxygen-free CO2) was applied in the fermentation for a higher succinate yield and productivity. Using this strategy, a final succinate concentration of 99.2 g L−1 with a yield of 1.1 g g−1 and a productivity of 1.3 g L−1 h−1 was obtained [21].

A series of publications on the modification of the E. coli MG1655 strain for improved succinate production was reported by San's group. First, SBS110MG (pHL413) was created from an adhE, ldhA mutant strain of E. coli, SBS110MG, harbouring the plasmid pHL413, which encoded the Lactococcus lactis pyruvate carboxylase. After 48 h of fermentation using this strain, 15.6 g L−1 succinate was produced with a yield of 0.85 g g−1 [22]. E. coli SBS550MG (pHL413) was further developed by deleting the adhE, ldhA, and ack-pta routes and by activating the glyoxylate route through deactivation of iclR. With repeated glucose feeding, SBS550MG (pHL413) produced 40 g L−1 succinate with a yield of 1.05 g g−1 glucose. They found that the distribution giving the highest succinate yield was a fractional partition of OAA of 0.32 to glyoxylate and 0.68 to malate [24]. In addition, this group studied the effect of overexpressing an NADH-insensitive citrate synthase from B. subtilis on succinate fermentation; this change had no effect on the succinate yield but affected the formate and acetate distribution. Furthermore, Sánchez et al. introduced an arcA mutation into the host SBS550MG, leading to a strain designated SBS660MG. Phosphorylated ArcA is a dual transcriptional regulator of aerobic respiration control, which also suppresses transcription of aceBAK. Deactivation of arcA could further improve the transcription of aceBAK and hence lead to a more efficient glyoxylate pathway. Unfortunately, SBS660MG did not improve the succinate yield, and it reduced glucose consumption by 80%. It should be pointed out that the succinate production experiments described above were carried out in a two-stage aeration culture in bioreactors, where the first stage was aerobic for cell growth, followed by an anaerobic stage for succinate accumulation, and the initial inoculum was very large (initial dry cell weight 5.6 g L−1). The aeration condition during the cell growth stage has a great impact on anaerobic succinate accumulation using E. coli SBS550MG (pHL413) [23]. Martínez et al. found that a microaerobic environment was more suitable for succinate production. Compared with the microaerobic environment, the high-aeration experiment led to more pyruvate accumulation, which correlated with a lower pflAB expression during the transition time and a lower flux towards acetyl-CoA during the anaerobic stage. The improvement in the expression of glyoxylate shunt-related genes (aceA, aceB, acnA, acnB) during the transition time, the anaerobic stage, or both increased the succinate yield in the microaerobic environment [42]. Because intracellular acetyl-CoA and CoA concentrations can be increased by overexpression of E.
coli pantothenate kinase (PANK), and acetyl-CoA is a promising activator for PEP carboxylase (PEPC) and pyruvate carboxylase (PYC), Lin et al. constructed E. coli GJT (pHL333, pRV380) and GJT (pTrc99A, pDHK29) by coexpressing PANK and PEPC, and PANK and PYC, respectively. They found that coexpression of PANK and PEPC, or of PANK and PYC, did enhance succinate accumulation, while lactate production decreased significantly [25]. In a subsequent report, GJT (pHL333, pHL413), coexpressing PEPC and PYC, reached 2.05 g L−1 succinate accumulation. When both the acetate (ackA-pta) and lactate (ldhA) pathways were deactivated in YBS132 (pHL333, pHL413), succinate concentration and yield increased by 67% and 76%, respectively, compared with the control strain GJT (pHL333, pHL413) (3.4 g L−1 versus 2.1 g L−1; 0.2 g g−1 versus 0.11 g g−1). No lactate was detected, and acetate accumulation was reduced by 76% [27].

For higher cell growth, faster substrate consumption, and better product accumulation, mutations in the tricarboxylic acid cycle (sdhAB, icd, iclR) and acetate pathways (poxB, ackA-pta) of E. coli HL51276k were introduced to construct the glyoxylate cycle for producing succinate aerobically. After 80 h of fermentation, succinate production reached 4.61 g L−1 with a yield of 0.43 g g−1 glucose. Substantial accumulations of pyruvate and TCA cycle C6 intermediates were observed during aerobic fermentation, which hindered achieving the maximum theoretical succinate yield. PEPC from Sorghum was therefore overexpressed in the strain HL51276k [26]. At approximately 58 h, HL51276k (pKK313) produced 8 g L−1 succinate with a succinate yield of 0.72 g g−1 glucose. Overexpression of PEPC was also effective in decreasing pyruvate accumulation (30 mM in HL51276k (pKK313) versus 48 mM in HL51276k). Further, they found that the combined strategy of overexpressing PEPC and deactivating ptsG was the most efficient in decreasing pyruvate accumulation.

By a combination of gene deletions in E. coli ATCC 8739 and multiple generations of growth-based selection, a high succinate-producing strain KJ060 (ldhA, adhE, ackA, focA, pflB) was obtained, producing 86.6 g L−1 of succinate with a yield of 0.92 g g−1 glucose and a productivity of 0.9 g L−1 h−1 [28]. After selection, PEP carboxylase was replaced, through spontaneous mutation, by the gluconeogenic PEP carboxykinase as the major carboxylation pathway for succinate formation. The PEP-dependent phosphotransferase system was deactivated by a spontaneous point mutation and functionally replaced by the GalP permease and glucokinase. With these improvements, the net molar ATP yield during succinate formation was increased to 2.0 ATP per glucose. This improved E. coli pathway is similar to the pathway of native succinate-producing rumen bacteria. For a higher yield and lower by-product formation, the next-generation strain KJ134 (ΔldhA ΔadhE ΔfocA-pflB ΔmgsA ΔpoxB ΔtdcDE ΔcitF ΔaspC ΔsfcA Δpta-ackA) was constructed, producing 71.6 g L−1 succinate with a yield of 1 g g−1 glucose and a productivity of 0.75 g L−1 h−1 in batch fermentations using mineral salts medium under an anaerobic environment. Compared with KJ060, the by-product acetate of KJ134 decreased by 85% (4.4 g L−1 versus 29.5 g L−1), which is very useful for product recovery in the commercial production of succinate.

8.1. Corynebacterium glutamicum. C. glutamicum is a fast-growing, nonmotile, gram-positive microorganism with a long history in the microbial fermentation industry for amino acids and nucleic acids. C.
glutamicum, under oxygen deprivation and without growth, produces organic acids such as lactic acid, succinate, and acetic acid from glucose. These features allow for the use of high-density cells, leading to a high volumetric productivity. Some studies have been carried out on C. glutamicum with respect to the microbial production of succinate [29,43]. In order to eliminate lactic acid production, deletion of the ldhA gene coding for L-lactate dehydrogenase was targeted in C. glutamicum R. The lactate-dehydrogenase-(LDH-)deficient mutant was not able to produce lactate, suggesting that this enzyme has no isozyme. Moreover, overexpression of genes coding for anaplerotic enzymes in the mutant demonstrated that the rate of succinate production of the resultant strain C. glutamicum ΔldhA-pCRA717, with enhanced pyruvate carboxylase activity, was 1.5-fold higher than that of the parental mutant strain. Using this strain at a dry cell weight of 50 g L−1 under oxygen deprivation, succinate was produced efficiently with intermittent additions of sodium bicarbonate and glucose. The succinate production rate and yield depended on the medium bicarbonate concentration rather than on the glucose concentration. The succinate concentration reached 146 g L−1 with a yield of 0.92 g g−1 and a productivity of 3.2 g L−1 h−1 [29].

8.2. Saccharomyces cerevisiae. S. cerevisiae is genome sequenced, genetically and physiologically well characterized, and can produce organic acids even at the low pH that facilitates downstream processing. Many tools for its genetic improvement are established. These features make S. cerevisiae suitable for the biotechnological production of succinate, and attempts to engineer S. cerevisiae for succinate production have accordingly been made. Using 13C flux analysis, Camarasa et al. found that, during anaerobic glucose fermentation by S. cerevisiae, the reductive branch generating succinate via fumarate reductase operates independently of the nitrogen source. This pathway is the main source of succinate during fermentation, unless glutamate is the sole nitrogen source, in which case the oxidative decarboxylation of 2-oxoglutarate generates additional succinate [44]. S. cerevisiae is a well-known glycerol and ethanol producer; therefore, with respect to this microorganism, the synthesis of succinate must limit the formation of glycerol and ethanol. In a patent issued by Verwaal et al., S. cerevisiae RWB064, with deletions of the genes for alcohol dehydrogenase 1 and 2 and the gene for glycerol-3-phosphate dehydrogenase 1, was used as the parental strain. The genes for PEP carboxykinase from A. succinogenes, NADH-dependent fumarate reductase from Trypanosoma brucei, fumarase from Rhizopus oryzae, malate dehydrogenase from S. cerevisiae, and the malic acid transporter protein from Schizosaccharomyces pombe were overexpressed. The recombinant SUC-200 produces 34.5 g L−1 succinate, with the main by-products being 4.5 g L−1 ethanol, 7.7 g L−1 glycerol, and 7.8 g L−1 malate. Further improvement in the recombinant SUC-297 included overexpression of pyruvate carboxylase from S. cerevisiae: succinate formation increased to 43 g L−1 and no malate accumulated. Since succinate, glycerol, and ethanol have different volatilities, they can be easily purified [30]. Arikawa et al. reported improved succinate production using sake yeast strains with deletions of some TCA cycle genes [45].
Compared with the wild-type strain, succinate production was increased up to 2.7-fold in a strain with simultaneous disruption of a subunit of succinate dehydrogenase (SDH1) and of fumarase (FUM1) under aerobic conditions. The single deletion of the SDH1 gene led to a 1.6-fold increase in succinate. These enhancements were not observed under strictly anaerobic or sake brewing conditions: the absence or limitation of oxygen resulted in decreased succinate production in sdh1 and/or fum1 deletion strains [31]. In another study on sake yeast strains, the deletion of genes encoding succinate dehydrogenase subunits (SDH1, SDH2, SDH3, and SDH4) also resulted in increased succinate production only under aerobic conditions [46]. In order to redirect the carbon flux into the glyoxylate cycle and to improve succinate accumulation, the disruption of isocitrate dehydrogenase activity is also part of a metabolic strategy for succinate production in the oxidative branch. Succinate production reduced to approximately half that of the parental strain was observed for yeast strains with disruptions of isocitrate dehydrogenase subunits (IDH1 or IDH2) under sake brewing conditions [47]. The constructed yeast strains with disruptions in the TCA cycle after the intermediates isocitrate and succinate, via four gene deletions (Δsdh1 Δsdh2 Δidh1 Δidp1), produce 3.62 g L−1 succinate at a yield of 0.072 g g−1 glucose and do not exhibit serious growth constraints on glucose. The main by-products are 14 g L−1 ethanol, 3.8 g L−1 glycerol, and 0.8 g L−1 acetate [33].

With the aim of reducing fermentation by-products and promoting respiratory metabolism by shifting the fermentative/oxidative balance, constitutive overexpression of the SAK1 and HAP4 genes in S. cerevisiae was also carried out by Raab et al. Sak1p is one of three kinases responsible for the phosphorylation, and thereby the activation, of the Snf1p complex, which plays a major role in the glucose derepression cascade. Hap4p is the activator subunit of the Hap2/3/4/5 transcriptional complex. Hap4p overexpression resulted in increased growth rates and biomass formation, while the levels of ethanol and glycerol decreased. The sdh2 deletion strain with SAK1 and HAP4 overexpression produced 8.5 g L−1 succinate with a yield of 0.26 mol mol−1 glucose. No glycerol formation was found, and the ethanol produced after 24 h of fermentation was consumed for acetate formation [48]. A multigene-deletion S. cerevisiae strain 8D was constructed by Otero et al., and directed evolution was used to select a succinate-producing mutant. The metabolic engineering strategy included deletion of the primary succinate-consuming reaction (succinate to fumarate) and interruption of glycolysis-derived serine by deletion of 3-phosphoglycerate dehydrogenase. The remodelling of central carbon flux towards succinate minimized the conversion of succinate to fumarate and forced the biomass-required amino acids L-glycine and L-serine to be produced from glyoxylate pools. The mutant strain 8D with isocitrate lyase overexpression represented a 30-fold improvement in succinate concentration and a 43-fold improvement in succinate yield on biomass, with only a 2.8-fold decrease in the specific growth rate compared to the reference strain [32].

Final Remarks

At present, some succinate production processes using A. succinogenes and A. succiniciproducens are studied under cultivation environments absolutely free of oxygen.
Anaerobic fermentation is preferred because of its lower capital and operational costs compared to aerobic fermentation. However, the applicability of these strains in industrial processes is limited to some extent by their maximum possible succinate yield (1 mol mol−1 glucose). Also, the scarcity of genetic tools may seriously limit the metabolic reconstruction of these strains. E. coli is the ideal host of choice for the future, due to the in-depth knowledge of this species gained over the past decades, its high possible succinate yield (1.72 mol mol−1 glucose), and the general acceptance of its usage in industrial processes. Other metabolically engineered overproducers, like C. glutamicum ΔldhA-pCRA717, which can produce 146 g L−1 succinate in a cell-recycling fed-batch culture, deserve more attention in the future.

In order to improve succinate production by metabolic engineering approaches, various strategies have been investigated and applied in different hosts with the acquired ability to produce succinate. Multiple studies have been dedicated to overcoming the current limitations of the process, such as introduction of an ATP-dependent glucose transport system and knockout of the LDH-, ADHE-, or ACKA-encoding genes, to eliminate by-product formation, to favourably change ATP formation, to restrict the accumulation of pyruvate, and so forth. Considerable progress has been made in seeking an economical and robust biotechnological production of succinate. However, much work remains to be done in order to achieve the desired process efficiency. A better understanding of the metabolic background of both native succinate producers and heterologous hosts is essential for further efforts in directed metabolic engineering. Improvements in the internal redox balance, efficient overexpression of the required enzyme activities, and modification of by-product formation are expected to lead to higher product concentrations and yields.

Table 3: Coproduction of succinate with other products (strain | strategy | products | substrate | reference):
- ... | ... | ... | Glycerol and fatty acid | [49]
- K. pneumoniae LDH526 | Deletion of LDH | 102.1 g L−1 1,3-propanediol, 13.8 g L−1 succinate | Glycerol | [50]
- K. pneumoniae CICC 10011 | Enhanced CO2 level in the medium | 77.1 g L−1 2,3-butanediol, 28.7 g L−1 succinate | Glucose | [51]
- A. succinogenes ATCC 55618 | Fractional treatment and separate utilization for succinate production | 64 g L−1 succinate, glucoamylase | Wheat | [52]

After the great success of succinate production from glucose, it is time to develop an efficient process from raw glycerol and biomass (such as lignocellulosic substrates or agro-industrial waste products). Being more reduced than glucose, glycerol can be converted to succinate while maintaining the redox balance; thus, the use of glycerol is more favourable for the succinate yield. Hydrolysate from biomass is a mixture of sugars containing mainly glucose and xylose. When glucose and xylose are present at the same time, xylose consumption generally does not start until glucose is depleted. This microbial preference for glucose is caused by the regulatory mechanism named carbon catabolite repression. A metabolic engineering approach to eliminate glucose repression of xylose utilization is very important, so that glucose and xylose can be consumed simultaneously to produce succinate [53][54][55][56][57][58][59][60]. Furthermore, to increase the competitiveness of biological succinate, strategies can be efficiently utilized for the coproduction of succinate and other high-value-added products, such as propanediol and succinate, isoamyl acetate and succinate, and polyhydroxybutyrate and succinate (Table 3).
This type of fermentation, producing two products of commercial interest in the same fermentation process, might be considered a promising biological production process, as it decreases the production cost by sharing the recovery and operating costs.

Conclusion

Metabolic engineering focuses on the improvement of microbial metabolic capabilities via improving existing pathways and/or introducing new pathways. The metabolic engineering approach has been widely applied to improve biological succinate production. As a consequence, there has been significant progress in the optimization of succinate-producing machineries, the elimination of biochemical reactions competing with succinate production, and the incorporation of nonnative metabolic pathways leading to succinate production. However, the performance of most succinate producers in terms of concentration, productivity, yield, and industrial robustness is still not satisfactory. More studies, such as extension of the substrate range, combination of the appropriate genes from homologous and heterologous hosts, and integrated production of succinate with other high-value-added products, are in progress.
7,403.2
2013-04-18T00:00:00.000
[ "Engineering", "Chemistry" ]
About Projections of Solutions for Fuzzy Differential Equations In this paper we propose the concept of fuzzy projections on subspaces of F(R^n), obtained from Zadeh's extension of the canonical projections in R^n, and we study some of the main properties of such projections. Furthermore, we review some properties of fuzzy projections of solutions of fuzzy differential equations. As we will see, the concept of fuzzy projection can be interesting for the graphical representation of fuzzy solutions.

Introduction

Consider a set A ⊂ R^n. Denote by F(A) the set formed by the fuzzy subsets of A whose supports are compact in A. Some properties of metrics for F(A) can be found in [1]. If B is a subset of A, we will use the notation χ_B to indicate the membership function of the crisp fuzzy set associated with B.

Consider the autonomous equation defined by

dx/dt = f(x),   (1)

where f : A ⊂ R^n → R^n is a sufficiently smooth function. For each x ∈ A, denote by φ_t(x) the deterministic solution of (1) with initial condition x. Here we are assuming that the solution is defined for all t ∈ R_+. The function φ_t : A → A will be called the deterministic flow.

To consider initial conditions with inaccuracies modeled by fuzzy sets [2], consider Zadeh's extension of φ_t, that is, the application φ̂_t : F(A) → F(A), which takes the fuzzy set x₀ ∈ F(A) to the fuzzy set φ̂_t(x₀). In the context of this paper, we call the application φ̂_t the fuzzy flow. Given x₀ ∈ F(A), we say that φ̂_t(x₀) is a fuzzy solution of (1) whose initial condition is the fuzzy set x₀.

The conditions for the existence of fuzzy equilibrium points and the nature of the stability of such points were first presented in [2]. The concepts of stability and asymptotic stability for fuzzy equilibrium points are similar to those for equilibrium points of deterministic solutions, and stability conditions for fuzzy equilibrium points can be found in [2]. Conditions for the existence of periodic fuzzy solutions and the stability of such solutions can be found in [3].

In this paper, we propose the concept of fuzzy projections on subspaces of F(R^n), obtained from Zadeh's extension of the canonical projections in R^n, and study some of the main properties of such projections. Furthermore, we review some properties of fuzzy projections of solutions of fuzzy differential equations. As we will see, the concept of fuzzy projection can be interesting for the graphical representation of fuzzy solutions.

Projections in Fuzzy Metric Spaces

We restrict our analysis to the set F(A) whose elements are fuzzy subsets of A with compact and nonempty α-levels in A. The fuzzy subsets in F(A) will be denoted by bold lowercase letters to differentiate them from the elements of A. So x ∈ F(A) if and only if [x]^α is a compact and nonempty subset of A for all α ∈ [0, 1].

We can define a structure of metric space on F(A) through the Hausdorff metric for compact subsets of A. Let K(A) be the set formed by the nonempty compact subsets of the metric space (A, d). Given two sets X, Y in K(A), the distance between them can be defined by

dist(X, Y) = sup_{x ∈ X} inf_{y ∈ Y} d(x, y).   (2)

The distance between sets defined above is a pseudometric for K(A), since dist(X, Y) = 0 if and only if X ⊆ Y, the sets not necessarily being equal. However, the Hausdorff distance between X, Y ∈ K(A), defined by

d_H(X, Y) = max{dist(X, Y), dist(Y, X)},   (3)

is a metric for all of K(A), so that (K(A), d_H) is a metric space. It is also worth noting that (A, d) is a complete metric space, so (K(A), d_H) is also a complete metric space [4].
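A minimal numerical sketch of the directed distance (2) and the Hausdorff metric (3), evaluated on finite point samples standing in for compact sets; the intervals below are illustrative choices:

```python
# Sketch: directed distance dist(X, Y) = sup_x inf_y d(x, y) and the
# Hausdorff metric d_H(X, Y) = max(dist(X, Y), dist(Y, X)), evaluated
# on finite point samples standing in for compact subsets of R^n.
import numpy as np

def directed_dist(X: np.ndarray, Y: np.ndarray) -> float:
    # pairwise Euclidean distances; rows of X and Y are points in R^n
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return D.min(axis=1).max()   # sup over X of inf over Y

def hausdorff(X: np.ndarray, Y: np.ndarray) -> float:
    return max(directed_dist(X, Y), directed_dist(Y, X))

# Example: X = [0, 1] sampled, Y = [0, 2] sampled. dist(X, Y) = 0 since
# X ⊆ Y (the pseudometric degeneracy noted above), but d_H(X, Y) = 1.
X = np.linspace(0.0, 1.0, 101)[:, None]
Y = np.linspace(0.0, 2.0, 201)[:, None]
print(directed_dist(X, Y))  # ~0.0
print(hausdorff(X, Y))      # ~1.0
```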
Through the Hausdorff metric d_H, we can define a metric for all of F(A); here we denote it by d_∞. Given two points u, k ∈ F(A), the distance between u and k is defined by

d_∞(u, k) = sup_{α ∈ [0,1]} d_H([u]^α, [k]^α).   (4)

It is not difficult to show that the distance defined above satisfies the properties of a metric, and thus (F(A), d_∞) is a metric space. Nguyen's theorem provides an important link between the α-levels of the image of a fuzzy subset and the image of its α-levels by a function. According to [5], if A ⊆ R^n, B ⊆ R^m, and f : A → B is continuous, then Zadeh's extension f̂ : F(A) → F(B) is well defined and

[f̂(u)]^α = f([u]^α)   (5)

holds for all α ∈ [0, 1] and u ∈ F(A).

Fuzzy Projections. Consider the application π : R^{n+m} → R^{n+m} given by π(u, v) = (u, 0). Provided that R^n can be characterized as a subset of R^{n+m} by identifying it with the subset R^n × {0}, the application π can be seen as the projection of R^{n+m} onto the set R^n. For this reason, we say that π is the projection into R^n of the point (u, v) ∈ R^{n+m}. Notice that a point (u, v) is in the image of π if and only if v = 0; furthermore, π(u, 0) = (u, 0) for all u ∈ R^n. Thus, given a point z ∈ F(R^{n+m}), with membership function z : R^{n+m} → [0, 1], the image P(z), obtained by Zadeh's extension of the projection π, has the membership function

P(z)(u, v) = sup_{w ∈ R^m} z(u, w) if v = 0, and P(z)(u, v) = 0 otherwise.

The application P : F(R^{n+m}) → F(R^n), obtained by Zadeh's extension of π, which to each z ∈ F(R^{n+m}) associates the point P(z) ∈ F(R^n), can be seen as a projection of F(R^{n+m}) onto F(R^n), as the latter can be identified with the subset F(R^n) × {0}. Based on this, we can define the fuzzy projection of z ∈ F(R^{n+m}) on F(R^n) as the point x ∈ F(R^n) with membership function

x(u) = sup_{v ∈ R^m} z(u, v).   (8)

We also consider the function π' : R^{n+m} → R^m that to each (u, v) ∈ R^{n+m} associates the point π'(u, v) = v ∈ R^m. In this case, the image of a point z ∈ F(R^{n+m}), with membership function z : R^{n+m} → [0, 1], is a point y ∈ F(R^m) with membership function

y(v) = sup_{u ∈ R^n} z(u, v),   (9)

which we call the fuzzy projection of z on F(R^m). Thus the corresponding application P' : F(R^{n+m}) → F(R^m) can also be viewed as a fuzzy projection. Here are some examples.

Example 1. Let z ∈ F(R^{n+m}) be defined from a ∈ F(R^n) and b ∈ F(R^m) by the membership function z(u, v) = min{a(u), b(v)}. The image of z by the application P, in this case, has membership function

x(u) = sup_{v ∈ R^m} min{a(u), b(v)}.

Since min{a(u), b(v)} ≤ a(u), the supremum is at most a(u); and, as b ∈ F(R^m), there is v ∈ R^m such that b(v) = 1. So the fuzzy projection x of z on F(R^n) has the membership function x(u) = a(u). In Figure 1, the membership functions of z ∈ F(R²), defined from a and b ∈ F(R), and of its fuzzy projection on F(R) can be seen. With a similar argument, we can show that b ∈ F(R^m) is the fuzzy projection of z on F(R^m).

We can also define x = (a, b) ∈ F(R^{n+m}) through a t-norm Δ, that is, z(u, v) = Δ(a(u), b(v)). The projection of z on F(R^n) then has membership function sup_{v} Δ(a(u), b(v)), and the projection on F(R^m) has membership function sup_{u} Δ(a(u), b(v)). Similarly, we can show that the fuzzy projections of z = (a, b) on F(R^n) and F(R^m) for any t-norm Δ are, respectively, a and b. First, for any t-norm Δ, we have Δ(a(u), b(v)) ≤ min{a(u), b(v)}, so sup_{v} Δ(a(u), b(v)) ≤ a(u). But the supremum is attained if we take v such that b(v) = 1. Then the projection of z = (a, b) on F(R^n) has membership function a(u) for any t-norm Δ.

Example 2. Consider z ∈ F(R²) determined by a given membership function; for this case, we have the fuzzy projections x and y on F(R) determined accordingly. In Figure 2 we can see the membership functions of z and x, respectively.

Proposition 3. Let x = P(x̂) and y = P(ŷ), with x̂ and ŷ ∈ F(R^{n+m}). The distance between the fuzzy projections x and y is always bounded by the distance between x̂ and ŷ: d_∞(x, y) ≤ d_∞(x̂, ŷ).

Proof.
In fact, for all α ∈ [0, 1] we have [x]^α = π([x̂]^α) and [y]^α = π([ŷ]^α); since the projection π does not increase distances between points, d_H(π([x̂]^α), π([ŷ]^α)) ≤ d_H([x̂]^α, [ŷ]^α), and taking the supremum over α ∈ [0, 1] proves the claim.

The fuzzy projection p ∈ F(R^n) of a point p̂ ∈ F(R^{n+m}) satisfies another important property of projections. Namely, the projection p is the point that minimizes the distance between the point p̂ ∈ F(R^{n+m}) and the set F(R^n), the latter set being considered as a subset of F(R^{n+m}).

Proposition 4. The fuzzy projection p of p̂ satisfies d_∞(p̂, q) ≥ d_∞(p̂, p) for all q ∈ F(R^n).

Proof. First, let us note the abuse of notation in the statement: the term d_∞(p̂, z) only makes sense because we can see F(R^n) as a subset of F(R^{n+m}). Therefore, we have the corresponding inequality between the α-levels. Thus, we can conclude that, for all q ∈ F(R^n), d_∞(p̂, q) ≥ d_∞(p̂, p), which proves the assertion.

We can also define fuzzy projections of z ∈ F(A × B) on F(A) and F(B), where A ⊂ R^n and B ⊂ R^m. In this case, the supremum in the membership functions (8) and (9) is taken over the sets B and A, respectively, and the metric properties shown above remain valid.

We can also consider the projection π_i : R^n → R of a point u = (u₁, u₂, ..., u_n) ∈ R^n onto the i-th coordinate axis; that is, π_i(u) = u_i. As shown before, Zadeh's extension of the projection defines the application π̂_i : F(R^n) → F(R), which we call the i-th fuzzy projection of F(R^n) on F(R). Thus, given a point x ∈ F(R^n), the i-th fuzzy projection of x on F(R) is a point x_i with membership function given by

x_i(s) = sup{ x(u) : u ∈ R^n, u_i = s }.

Again, if x = (a₁, a₂, ..., a_n) is defined by the fuzzy Cartesian product, then the i-th fuzzy projection of x ∈ F(R^n) on F(R) is the point a_i. For simplicity, consider x ∈ F(R³) defined by x(u₁, u₂, u₃) = Δ(a₁(u₁), Δ(a₂(u₂), a₃(u₃))). By the properties of the t-norm, it follows that this value is at most a₂(u₂) for all u₁, u₂, u₃ ∈ R. Thus, the second fuzzy projection of x on F(R) satisfies x₂(u₂) ≤ a₂(u₂); taking u₁ and u₃ such that a₁(u₁) = a₃(u₃) = 1, equality is attained in the supremum, and hence x₂ = a₂. Induction proves the general case in which x ∈ F(R^n).

Through expression (8), we can determine the α-levels of the fuzzy projection x ∈ F(R^n) of a point z ∈ F(R^{n+m}). Indeed, if x(u) ≥ α, then there is v ∈ R^m such that z(u, v) ≥ α, so that (u, v) ∈ [z]^α. The converse is also true, because if z(u, v) ≥ α, then by (8) x(u) ≥ α. Thus, we conclude that

[x]^α = π([z]^α).

Since the application π is continuous, we can use the equality (5) to show that the i-th fuzzy projection x_i ∈ F(R) of x ∈ F(R^n) has α-levels [x_i]^α = π_i([x]^α).

Projection of Fuzzy Solutions

Let φ_t^{(i)} : A → R be the projection of the deterministic flow onto the i-th coordinate axis; that is, φ_t^{(i)}(x) is the i-th component of the solution φ_t(x), or, equivalently, φ_t^{(i)}(x) is the i-th solution component of equation (1). By applying Zadeh's extension to φ_t^{(i)}, we have the application φ̂_t^{(i)} : F(A) → F(R) that to each x₀ ∈ F(A) associates the image φ̂_t^{(i)}(x₀) ∈ F(R). As in the deterministic case, we show that the application φ̂_t^{(i)} is the i-th fuzzy projection of the fuzzy flow φ̂_t : F(A) → F(A) on F(R).

We showed in [3] that if the equilibrium point of the deterministic flow φ_t : A → A depends on the initial condition x ∈ A, then the equilibrium point of the fuzzy flow φ̂_t : F(A) → F(A) is obtained by Zadeh's extension of the map x ↦ x̄(x). Let x̄^{(i)}(x) be the i-th coordinate of the equilibrium point x̄(x). Similarly, we can prove that the i-th projection of the fuzzy equilibrium point x̄ = x̄(x₀) ∈ F(A) is the point x̄_i ∈ F(R) obtained from Zadeh's extension of x̄^{(i)} : A → R. More briefly, for x₀ ∈ F(A), the i-th fuzzy projection of the fuzzy equilibrium point x̄ is x̄_i.

Consider just a few examples of the results presented previously. Figure 3 shows the time evolution of the fuzzy projections of φ̂_t(x₀) on the coordinate axes. We take the initial condition x₀ defined by its membership function, and the solution of the model is defined by the corresponding component functions.

Figure 3: Time course of φ̂_t^{(1)}(x₀) and φ̂_t^{(2)}(x₀), respectively.
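A small numerical sketch of the projection (8) and of the α-level identity [x]^α = π([z]^α) on a discretized fuzzy set; the grid and the particular membership surface (a min-combination of two triangular fuzzy numbers) are illustrative choices:

```python
# Sketch: fuzzy projection by Zadeh's extension on a grid. The 2-D
# membership function below (min t-norm of two triangular numbers) is
# an illustrative choice; the projection is the sup over the dropped
# coordinate, and its alpha-cuts equal the projected alpha-cuts.
import numpy as np

u = np.linspace(-2, 2, 401)
v = np.linspace(-2, 2, 401)
U, V = np.meshgrid(u, v, indexing="ij")

def tri(t):
    return np.clip(1 - np.abs(t), 0, 1)  # triangular number at 0

Z = np.minimum(tri(U), tri(V - 0.5))     # z(u, v) via the min t-norm

x_proj = Z.max(axis=1)                   # x(u) = sup_v z(u, v)

alpha = 0.5
cut_from_proj = u[x_proj >= alpha]             # [x]^alpha directly
cut_from_z = np.unique(U[Z >= alpha])          # pi([z]^alpha)
print(np.allclose(cut_from_proj, cut_from_z))  # True on this grid
```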
According to what is discussed in [3], for all x₀ ∈ F(R²₊), the fuzzy solution φ̂_t converges to the fuzzy equilibrium point x̄ = x̄(x₀). According to the equality (46), the projections of the equilibrium point x̄ on the coordinate axes are obtained by Zadeh's extension of the components of x̄; that is, the fuzzy projections are, respectively, x̄₁ = χ_{0} and x̄₂, whose membership function is given in (52). By Proposition 5, the fuzzy projections on F(R) of the fuzzy solution φ̂_t(x₀) of the model are obtained by Zadeh's extension of the components φ_t^{(1)} and φ_t^{(2)}.

To illustrate, suppose the force of infection is 0.01, and take the initial condition x₀ ∈ F(R²₊) defined by its membership function. Figure 4 shows the evolution of the applications φ̂_t^{(1)}(x₀) and φ̂_t^{(2)}(x₀) in time. Note that φ̂_t^{(1)}(x₀) converges to x̄₁ = χ_{0}, whereas φ̂_t^{(2)}(x₀) converges to x̄₂ with the membership function given by (52). We also consider the case in which the number of individuals in the population is known, say N. In this case, the two variables are related by the equality S + I = N. Under this assumption, the deterministic solution converges to the equilibrium point (0, N), and therefore the fuzzy solution converges to the fuzzy equilibrium point χ_{(0,N)}. In this case, the projections φ̂_t^{(1)}(x₀) and φ̂_t^{(2)}(x₀) converge to χ_{0} and χ_{N}, respectively. In Figure 5, we plot the projections of the fuzzy solution φ̂_t(x₀) for the initial condition with N = 20 and x₀ given by a fuzzy set.

The graphical representation of the fuzzy projections in this work is established as follows: given α ∈ [0, 1], the region in the plane bounded by the α-level of the projection φ̂_t^{(i)}(x₀), over time, is filled with a shade of gray. If α = 0, then the region is filled with white, whereas if α = 1, then the region is filled with black. Thus, the larger the degree of membership of a point, the darker its color.

So, for all t ∈ R₊, the claimed equality holds, which proves the assertion. The proof of the proposition can also be made through the α-levels. In fact, we must show that φ̂_t(y₀) = P(ψ̂_t(y₀)) for all y₀ ∈ F(A × Λ) and t ∈ R₊. Using the continuity of the applications π and ψ_t, we have the equality of the corresponding α-levels for all α ∈ [0, 1]. The previous equality concludes the proof of the proposition.

In contrast to [6,7], when the equation depends on parameters, as in (56), the fuzzy solution proposed by Buckley and Feuring in [8] is obtained by Zadeh's extension of the deterministic flow φ_t(x, p). This way, Proposition 8 ensures that the solution of Buckley and Feuring is the fuzzy projection of the fuzzy solution proposed by [6,7]. Note that subjective parameters in (56) contribute to an increase in uncertainty. Fixing a parameter p ∈ Λ and given a fuzzy initial condition x₀, the α-levels of the fuzzy flow generated by (56) are the corresponding image sets. On the other hand, if the α-levels of p ∈ F(Λ) contain p, then, by Proposition 8, the corresponding inclusions between α-levels hold.

Example 9. Consider the case where the parameter λ in the equation is a fuzzy parameter. In the previous equation, the solution, in terms of the initial condition x and the parameter λ, is given by x(t) = λ + (x − λ)e^{−t}, and thus the two-dimensional flow ψ_t : R² → R², for the case in which the parameter is incorporated into the initial condition, is given by

ψ_t(x, λ) = (λ + (x − λ)e^{−t}, λ).

For any initial condition y₀ ∈ F(R²), we show that ψ̂_t converges to the fuzzy equilibrium point ȳ, which is given by Zadeh's extension of the map η : R² → R², η(x, λ) = (λ, λ). That is, the equilibrium point ȳ has the corresponding membership function, and we have d_∞(ψ̂_t(y₀), ȳ) → 0 as t → ∞. In Figure 6, we have the graphical representation of the fuzzy solution ψ̂_t(y₀) and its fuzzy projection φ̂_t(y₀).
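Example 9 can be simulated directly on α-levels. The sketch below assumes the flow reconstructed above, ψ_t(x, λ) = (λ + (x − λ)e^{−t}, λ), together with triangular fuzzy data for x₀ and λ; both choices are ours, for illustration only.

```python
# Sketch: alpha-level evolution of the fuzzy solution for the flow
# psi_t(x, lam) = (lam + (x - lam) * exp(-t), lam), with the fuzzy
# parameter incorporated into the initial condition as in Example 9.
# Since the flow is linear in both arguments, each alpha-cut of the
# x-component is spanned by the images of the cut corners (a
# consequence of identity (5)).
import math

def tri_cut(center, spread, alpha):
    """alpha-cut [lo, hi] of a triangular fuzzy number."""
    return (center - (1 - alpha) * spread, center + (1 - alpha) * spread)

def flow_cut(x_cut, lam_cut, t):
    e = math.exp(-t)
    corners = [lam + (x - lam) * e for x in x_cut for lam in lam_cut]
    return (min(corners), max(corners))

for t in (0.0, 1.0, 3.0, 10.0):
    cut = flow_cut(tri_cut(20.0, 5.0, 0.5), tri_cut(10.0, 2.0, 0.5), t)
    print(f"t={t:5.1f}  alpha=0.5 cut of x-component: "
          f"[{cut[0]:.3f}, {cut[1]:.3f}]")
# As t grows, the cut collapses onto the 0.5-cut of lam, i.e. the
# solution converges to the fuzzy equilibrium, matching the text.
```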
Conclusions In this paper, we define the concept of fuzzy projections and study some of their main properties, in addition to establishing some results on projections of fuzzy differential equations. As we have seen, different concepts of fuzzy solutions of differential equations are related by fuzzy projections. Importantly, by means of fuzzy projections, we can analyze the evolution of fuzzy solutions over time. Figure 1: Membership functions of z and a, respectively. Figure 4: Time evolution of the fuzzy projection φ(x) on the coordinate axes, respectively. Figure 5: Time evolution of the fuzzy projection φ(x) on the coordinate axes, respectively.

3,980.2
2013-06-17T00:00:00.000
[ "Mathematics" ]
Methodological approach to assessing the efficiency of the use of digital technologies in the activities of customs authorities . In conditions of limited time, which is dictated by the modern rhythm of life, digital technologies are becoming one of the main tools for optimization and rationalization of all spheres of human activity. The use of modern digital technologies in the customs system is primarily aimed at increasing the efficiency of its activities. Properties such as quality, speed, transparency, generated with the help of information customs technologies, create prerequisites for stimulating Russia’s foreign economic activity and developing the national economy. However, at present, the Federal Customs Service has a number of problems that impede the development of digitalization of customs activities, due to external and internal factors, as well as contradictions in some of the tasks facing the customs authorities. Therefore, the task of analyzing the efficiency of the use of digital technologies in the activities of customs authorities becomes urgent. The authors of the article proposed a methodological approach to assessing the efficiency of the use of digital technologies in the activities of customs authorities, which is based on assessing the quality of electronic customs services. Introduction Currently, an important condition for the use of digital technologies in the activities of customs authorities is the organization of a clear sequence of legislatively enshrined actions that consider the variety of factors in the digital development of this organization. The relevance of the scientific article is evidenced by the unfinished Federal Target Program "Electronic Russia" for 2002-2010, approved by the decree of the Government of the Russian Federation of January 28, 2002, the purpose of which was the total informatization of the activities of state bodies [1] and the unfinished program "Information Society" on 2011-2020 years [2]. To further assess the quality of these programs in the activities of customs authorities, the authors proposed a methodology for assessing the efficiency of the use of digitalization, based on assessing the quality of electronic customs services. Methods The management issues in customs authorities are considered in the works of N However, in the works of the listed authors, there are no methodological developments for organizing the management of the activities of state bodies, including the customs service, using digital technologies. The problem lies in the contradiction between the importance of using digital technologies in the activities of the Federal Customs Service of Russia, and the lack of a methodological approach to solving issues related to the digitalization of state activities. The theoretical and practical significance of the identified problems and their insufficient elaboration determined the goals and objectives of the study. As a basis for the research, two theoretical and methodological approaches were used: humanitarian and technocratic. When using a humanitarian approach, we consider information technology to be an important component of human life, which is important both for organizing the activities of customs authorities and for the social sphere associated with foreign economic activity. Within the framework of the technocratic approach, we consider information technologies as a means of increasing the labor productivity of tax authorities and limit their use to the spheres of production and management. 
Results One of the components of the management of customs authorities is quality control of the provision of electronic public services, which is a continuous process of managerial influences on customs officials, ensuring their purposeful behavior under changing internal and external conditions [4] (Fig. 1). The purpose of quality management of customs services is to achieve unification of the actions of all customs officials in the implementation of activities to meet the needs of participants in foreign economic activity with a guarantee of ensuring the economic security of the state and high quality of the provided customs services. The solution to these problems requires the complexity of the introduced evaluation criteria. However, the monitoring of the quality of electronic customs services currently being implemented in the customs authorities does not give a full picture of the situation and, accordingly, the ability to quickly and timely respond to emerging problems. To develop the most efficient tool for assessing the quality of the provision of electronic customs services in order to identify shortcomings and further enhance it, it is necessary to rely on three main blocks (Fig. 2). Let us consider each block in more detail, highlighting the main criteria for assessing the quality of customs services in the framework of the use of digital technologies. Customs services provided electronically can be assessed using standard analysis and valuation methods. The method of questioning (survey) users of the services of customs authorities in electronic form. Herewith, within the framework of the general mechanism of indicators of the use of digital technologies in the activities of customs authorities, a possible survey of not only third-party organizations and participants whose activities are related to customs, but also conducting surveys among customs officials. In the Federal Customs Service of Russia, a similar method is used annually in the form of a questionnaire, the results of which are presented on the official website. The method of questioning in the customs authorities is based on a survey of legal entities and individuals. It should be noted that a separate questionnaire on the provision of customs services in electronic form is not provided. Using the described method, it is possible to assess the impact of digital technologies on the quality of public services provided by customs authorities using the described method only on the basis of comparing the number of users of electronic services and an overall assessment of the degree of satisfaction, as well as the total time of providing customs services. Therefore, it becomes necessary to supplement the questionnaire with such criteria as satisfaction with the receipt of customs services via the Internet, the speed of the portal of public services or the departmental website, comments on the interface of the departmental website (including when searching for the necessary information, instructions, etc.). In addition, a feedback can be a survey of customs officials who directly respond to requests from users of public services regarding the speed and correctness of the software, as well as the convenience of transmitting the response to the request (provision of services) in electronic form. 
Another widespread assessment method is the method based on the calculation of indices, which currently has no implementation in assessing the use of digital technologies in the activities of customs authorities, including the quality of electronic customs services. The index that can be applied in the framework of a systematic assessment of the quality of customs services is the Customer Satisfaction Index (CSI) 5]. The basis for calculating CSI is the correlation of the main assessed criteria of the service to the degree of satisfaction of the consumer of the service. The degree of satisfaction of participants in foreign economic activity with electronic customs services, e.g., can be calculated as the average time for the provision of electronic customs services to the expected time for the provision of electronic customs services. In this case, part of the data for such a calculation will be obtained by the survey method. With regard to the mechanism of indicators of the activities of customs authorities, based on the digitalization of the main processes, the index method will allow, in general, to assess the degree of the expected result (expected indicator) with the actual one. Among the generally recognized principles and methods of assessment, the main one in developing a methodology for assessing the quality of customs management and, in particular, state services of customs authorities, should be noted the international standard ISO 9000 series (ISO 9004: 2009 Managing for the sustained success of an organization -A quality management approach (IDT)) [5]. However, the requirements for the services of customs authorities in electronic form are not enshrined in legislation. Normally, only general provisions for the provision of public services in electronic form, as well as for the organization of the provision of such services, are defined, which is included in the next block of components of the mechanism for assessing the quality of electronic customs services. The second assessment block is formed on the basis of the requirements established by the regulatory legal acts of the Russian Federation as part of the development of the use of digital technologies in the field of public services. Pursuant to the provisions of the Decree of the Government of the Russian Federation of 03/26/2016 No. 236 "On requirements for the provision of electronic state and municipal services", it is possible to form the following requirements for the provision of services of customs authorities in electronic form, which generally affects their quality \ [6] : providing information on the procedure and timing of the provision of customs services in electronic form; ensuring the possibility of forming an electronic request for the provision of electronic customs services (technical componentavailability, serviceability); ensuring the acceptance and registration by the customs authority of an electronic request and electronic documents necessary for the provision of electronic customs (technical component -exclusion of failures/errors in the transfer of information); ensuring payment for public services and payment of customs duties levied pursuant to the legislation; ensuring the receipt of customs services in electronic form; providing information on the progress of the request for the provision of electronic customs services, etc. 
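As an illustration of the index method discussed above, the following sketch computes a simple importance-weighted Customer Satisfaction Index together with the time-based satisfaction ratio mentioned in the text; the criteria, weights, and scores are hypothetical and are not drawn from Federal Customs Service data.

```python
# Hypothetical CSI sketch for electronic customs services: an importance-
# weighted average of survey satisfaction scores, plus the ratio of expected
# to actual service time suggested in the text. All numbers are placeholders.

def csi(criteria):
    """criteria: dict name -> (importance weight, mean satisfaction score 0-10)."""
    total_weight = sum(w for w, _ in criteria.values())
    return sum(w * s for w, s in criteria.values()) / total_weight

def time_satisfaction(actual_hours, expected_hours):
    """Satisfaction with service time: expected / actual, capped at 1."""
    return min(1.0, expected_hours / actual_hours)

survey = {
    "ease of submitting an electronic request": (0.30, 7.8),
    "portal / departmental website speed":      (0.25, 6.4),
    "clarity of instructions and interface":    (0.20, 7.1),
    "progress information on the request":      (0.25, 6.9),
}

print(f"CSI = {csi(survey):.2f} out of 10")
print(f"time satisfaction = {time_satisfaction(actual_hours=36, expected_hours=24):.2f}")
```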
Considering the requirements for the customs authorities in terms of the provision of public services in electronic form, it is possible not only to determine the estimated boundaries, but also the composition of actions of the customs authorities in the provision of such services. In this regard, such requirements should be developed and considered, including in the mechanism of indicators for the use of information technologies in the activities of customs authorities. The third block of a comprehensive assessment of the quality of customs services is the established indicators within the framework of strategic planning of the activities of customs authorities. For instance, as an indicator of the development of the system of electronic public services, one can use the share of foreign economic activity participants who positively assess the quality of these services in the total number of respondents. Herewith, when assessing the quality of electronic customs services, it is advisable to consider other indicators of the use of digital technologies: the use of technology for remote release of goods, automation of customs control processes, acceleration of customs operations when customs declaring goods in electronic form; ensuring the transparency of customs operations; reduction of time and number of documents, enhancement of information and technical support of customs activities. Correlation of the degree of achievement of the indicators set by the target indicators for improving customs regulation and information and technical support will allow identifying the strengths and weaknesses of the development of the customs system and, accordingly, identifying the problems of managing the quality of electronic customs services in the current operating conditions (external and internal factors of development) [7]. Thus, the assessment of the quality of customs services in electronic form should be based on a comprehensive step-by-step analysis of the efficiency of the use of information technologies in the activities of customs authorities, considering the factors that influence the results achieved (Fig. 3). The process of assessing the quality of electronic customs services indicates the relationship of the given parameters, returning the results and forecast values to the original supporting elements of the assessment. An important step in assessing the quality of digitalization of the customs system, including the provision of electronic services, is to perform control and measuring measures to identify inconsistencies with the indicators established by law and enshrined in the development strategy of the Federal Customs Service of Russia [8]. The proposed scheme for assessing the quality of customs services in electronic form can be interpreted in terms of the methodology for analyzing the efficiency of the use of information technologies in the activities of customs authorities (Fig. 4). The elements of the methodology for analyzing the efficiency of customs activities using information customs technologies will be: a system of factors (macro-and micro) information and technical development of customs authorities; fundamental blocks for analyzing and assessing the efficiency of activities in the context of digitalization, consisting of the selected analysis methods, legislative requirements for customs activities and established development indicators, which underlie the comprehensive detailed analysis. 
Discussion of the results In the formation and functioning of the system for managing the activities of the Federal Customs Service in the context of digitalization, it is necessary to consider many elements that affect its development. Therefore, within the framework of the proposed methodology for assessing the efficiency of the use of digital technologies in the activities of customs authorities, it is assumed that the following elements will be included in the management structure of this process: the subject of management (management of customs authorities), the object of management (the process of providing customs services), requirements for the processes of providing customs services, their assessment quality, determination of directions for adjusting the ongoing processes [9] In view of its versatility and complexity, the proposed approach to analysis and assessment should be used to create an algorithm for the use of information technologies in managing the main processes of customs activities [10,11]. The implementation of the methods and recommendations listed in the study is impossible without consolidating the efforts of the member states of the Eurasian Economic Union aimed at creating and implementing a unified highly efficient information support for the processes of foreign economic activity. The results of such transformations in the Federal Customs Service of Russia should be an increase in multilateral awareness, enhancement of the regulatory framework, cost reduction, standardization and unification of customs operations, enhancement of the quality and efficiency of customs authorities. Conclusions In the current conditions of problems, contradictions and, at the same time, potential and opportunities, there is a need to develop tools in the form of an algorithm of legislatively enshrined actions to introduce digital technologies into the activities of customs authorities in order to enhance management efficiency. The basis of such an algorithm should be a comprehensive assessment and analysis of both achieved and predicted results. Practical recommendations of the study can be used when developing a new strategy for the development of customs authorities, creating new or revising existing indicators of its activities.
3,339
2021-01-01T00:00:00.000
[ "Economics" ]
Feeding ecology of a nocturnal invasive alien lizard species , Hemidactylus mabouia Moreau de Jonnès , 1818 ( Gekkonidae ) , living in an outcrop rocky area in southeastern Brazil We studied in fieldwork, the feeding ecology of a Hemidactylus mabouia population from southeastern Brazil throughout one year in a region with marked climatic seasonality. A sampling of availability of arthropods in the environment was carried out, which evidenced that the availability of food resources influenced the composition of the diet of H. mabouia. There were no seasonal differences on diet composition, which may be due to the relatively constant availability on prey throughout the year. In general, this population can be classified as generalist and opportunistic regarding diet. There was a high food niche overlap among juveniles and adults, although juvenile lizards tend to eat higer number of prey (but in lower volume) when compared to adult lizards. The ability to exploit a wide array of prey in an efficient way, maintaining a positive energetic balance, may be a factor determining the efficiency of this exotic species to occupy invaded areas. Introduction Invasive alien (or exotic) species constitute a great problem for natural environments and for native species and it has been suggested that after habitat fragmentation, they are the second most important factor causing the loss of biodiversity and extinctions in most areas worldwide (Vitousek et al., 1997;Mooney and Hoobs, 2000;Sutherst, 2000;McNeely et al., 2001).The spread of invasive alien species is creating complex and farreaching challenges that threaten both the natural biological world and the well being of citizens (McNeely et al., 2001).Presently, many countries are working together in a "Global strategy on Invasive Alien Species" de-signed to cope with the problem caused by these species (McNeely et al., 2001).One limiting factor preventing better understanding of the nature of the problem caused locally and the subsequent management of invasive species, is the lack of information on aspects of the ecology of those species and on their interaction with sympatric native species, especially in the field. The gecko Hemidactylus mabouia is a broadly distributed species in the tropics which has invaded the New World after accidental introductions from its Old World native range in historic times, probably via slave ships coming from Africa during European coloniza-tion of the Americas (Ávila-Pires, 1995, Federico andCacivio, 2000;Fuenmayor et al., 2005).This species has proven to be a very successful invader species in the Americas and presently is largely distributed in southern North America, as well as in Central and South America (Fuenmayor et al., 2005).Despite its successful colonization, little information regarding aspects of its ecology exists and, especially for the New World, ecological information obtained from non-urban populations is considerably limited, which prevents a better understanding of its impact as an invader on native species.At Valinhos, in São Paulo State, in southeastern Brazil, a population of H. mabouia live in nature in an outcrop rock area dominated by grassland, sharing food resources with local native species such as Tropidurus itambere (Van Sluys, 1993) and Mabuya frenata (Vrcibradic and Rocha, 1998). 
In this paper we investigate in the field the feeding ecology of a non-urban population of Hemidactylus mabouia, a species known for its generalist feeding habits (cf.Vitt, 1995;Zamprogno and Teixeira, 1998;Rocha et al., 2002).The population studied lives in a non-urban area (Valinhos) with marked climatic seasonality, in Southeastern Brazil.We specifically addressed the following questions: i) as expected for an invader species, does this population of H. mabouia have a broad diet?ii) is there a seasonal variation in the types of consumed prey considering the seasonal environment of Valinhos?iii) is the rate of prey consumption a function of relative availability of prey in the environment?iv) does the diet composition of adult and juvenile lizards differ in terms of the consumption of the commonest food items?and v) to what extent is the diet composition of this invader similar to that of the local native species T. itambere and M. frenata for which diet information is available?Also, regarding the positive energetic balance of this population, we would expect a considerable amount of lizards to have an empty stomach, according to the assumption that geckonid lizards, mainly nocturnal ones, tend to evidence a high rate of individuals with an empty stomach, as pointed by Huey et al. (2001). Material and Methods Fieldwork was carried out from April 2002 to March 2003 in a grassland area located within a farm (Fazenda Manga) in Valinhos municipality (22° 56' S and 46° 55' W; elevation ca.700 m), São Paulo State, southeastern Brazil.The general area, which is mostly used for pasture, has abundant granite boulders surrounded by grassy and shrubby vegetation (Van Sluys, 1993;Vrcibradic and Rocha, 1998).The rainy season in Valinhos extends from October to March and the dry season from April to September, and the mean annual temperature (± sd) and total annual rainfall are 20.7 ± 2.2 °C and 1379 mm, respectively (Van Sluys et al., 1994).During the period of this study, rainfall totalized 230 mm in the dry season and 1047 mm in the wet season (Figure 1) [all climatic data were obtained from the Centre for Research in Agriculture (CEPAGRI) of the Universidade Estadual de Campinas]. Lizards were collected with a noose or by hand.Immediately after capture, each lizard was transferred to a plastic sack containing cotton embedded in ether, in order to anesthetize and euthanaze them.In the laboratory, we measured the snout-vent length (SVL), head width (HD), head length (HL) and mouth length (ML) of each individual with a caliper (to the nearest 0.01 mm) prior to fixation with 10% formalin solution and storage in 70% alcohol solution.To evaluate the difference in body size between adult males and adult females, we used the Student-t test (Zar, 1999).The lizards were later dissected and their stomachs were removed for posterior analyses of their contents under a stereomicroscope.For analyses, we only considered the items present in the stomachs.The stomach contents were identified to the taxonomic level of Order, except for Hymenoptera, for which we separated ants (Formicidae) from the rest of the group.The contents that could not be identified were pooled into a "non-identified" category.The volume (in mm 3 ) of each food item was estimated by the formula of the volume of an "ellipsoid": (1) where L = length and W = width.The length and the width of each prey was measured with a caliper (to the nearest 0.01 mm). 
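The ellipsoid-volume formula labelled (1) did not survive extraction. Assuming the version commonly used in lizard dietary studies, V = (4/3)·π·(L/2)·(W/2)², the short sketch below shows the calculation; the example measurements are invented.

```python
import math

# Prey volume from length (L) and width (W), treating the item as an ellipsoid:
# V = (4/3) * pi * (L/2) * (W/2)**2. This specific form is an assumption, since
# the original equation (1) is missing. Lengths in mm give volumes in mm^3.

def prey_volume(length_mm, width_mm):
    return (4.0 / 3.0) * math.pi * (length_mm / 2.0) * (width_mm / 2.0) ** 2

print(f"{prey_volume(10.0, 3.0):.1f} mm^3")   # a 10.0 x 3.0 mm prey item, ~47.1 mm^3
```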
We estimated the Importance Value Index (IVI) to each prey category (Gadsden and Palacios-Orona, 1997).The IVI index took into consideration together the proportions of the number, volume and frequency of occurrence of each category of prey. The food niche breadth was estimated by the Shannon Diversity Index, H', (Krebs, 1989).Additionally, we estimated the standardized diversity index (H P ) by sharing the H' by H' MAX (maximum value of H') (Krebs, 1989).The standardized index range is from 0 to 1 and allows a more efficient comparison of sexual and ontogenetic variation of dietary niche breadth. To estimate the relative availability of potential prey in the field (and to obtain a reference collection of prey -cf.Van Sluys, 1991), we sampled arthropods in the environment.The samples were carried out only during the time of lizard activity, and in the same microhabitat (granite rocky surfaces) used by the lizards in the area.To determine the available arthropods, we used an entomological suction glass provided with a hose of 9.0 mm diameter.The diameter of the hose was similar to the maximum aperture diameter of the lizard's mouth, which limited the maximum volume of the arthropods sampled to a size nearest to maximum size of the prey found in the lizard stomachs.Each arthropod sampling session lasted five minutes and during this period the largest possible area of the surface of the rock was sampled.The relative arthropod abundance was estimated by the number of individuals of each species, divided by the total number of arthropods sampled at the study area. To evaluate the seasonal, intersexual and ontogenetic differences in proportions of prey categories (in terms of number and volume) consumed by lizards, we used the Kolmogorov-Smirnov Test (Zar, 1999). We used Analysis of Variance (One-Way ANOVA) to test for intersexual and ontogenetic differences in mean number and mean volume of prey consumed by H. mabouia (excepting for ontogenetic differences in mean volume of prey, in which case we performed a Mann-Whitney test, due to the fact that these data do not fit a normal distribution) To evaluate the relationship between the proportion of food items consumed by the lizards and their proportion in the environment, we performed a spearman rank correlation (Zar, 1999).The same test was applied to evaluate the relationship between lizard size (SVL) and the mean size of the five largest prey (size and volume).On each stomach we performed a Spearman Rank Correlation (Zar, 1999). We used Pianka's measure of niche overlap (Pianka, 1973) to determine the diet similarity between adult and juvenile lizards and between males and females: where P 2i and P 1i are the rate of consumption of prey type i by sexes1 and 2 respectively.We compared the observed overlap value against a null model (1000 interactions) generated by the algorithm of randomization R3 (Lawlor, 1980) using the software ECOSIM 7.0 (Gotelli and Entsminger, 2001).This same methodology was used to compare the overlap between dietary items for another two sympatric lizards, T. itambere and M. frenata, for which some diet data are available in the literature (Van Sluys, 1993;Vrcibradic and Rocha, 1998). 
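The expression for Pianka's overlap is garbled above. Assuming the standard form O12 = sum(p1i * p2i) / sqrt(sum(p1i^2) * sum(p2i^2)), the sketch below computes it together with the Shannon index H' and the standardized breadth H'/H'max; the prey counts are invented for illustration only.

```python
import numpy as np

# Diet indices used in the analysis: Shannon diversity H', standardized niche
# breadth H'/H'max, and Pianka's symmetric niche overlap between two samples.

def shannon(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def standardized_breadth(counts):
    # H'max = ln(k) for k prey categories, so the standardized index lies in [0, 1].
    return shannon(counts) / np.log(len(counts))

def pianka_overlap(counts_1, counts_2):
    p1 = np.asarray(counts_1, float) / np.sum(counts_1)
    p2 = np.asarray(counts_2, float) / np.sum(counts_2)
    return np.sum(p1 * p2) / np.sqrt(np.sum(p1**2) * np.sum(p2**2))

adults    = [30, 12, 8, 5, 3, 2]     # hypothetical prey counts per category
juveniles = [25, 15, 10, 2, 1, 1]

print(f"H' adults = {shannon(adults):.2f}, H'p = {standardized_breadth(adults):.2f}")
print(f"Pianka O12 = {pianka_overlap(adults, juveniles):.3f}")
```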
The diet composition of adult males and females did not differ in terms of number of (Kolmogorov-Smirnov; D MAX = 0.100; p > 0.05) nor in volume of prey consumed (D MAX = 0.100; p > 0.05).Further, there was no signifi-cant difference in the diet composition between adult lizards (pooled males and females) and juveniles in terms of number (D MAX = 0.100; p > 0.05) and volume (D MAX = 0.333; p > 0.05) of prey consumed. Considering the arthropods sampled in the environment, the more representative groups were ants, termites, hemipterans and orthopterans (Figure 2) with a diversity value of H' = 2.31 (H' MAX = 0.74).The proportion of each arthropod taxon sampled in the environment did not differ between dry and wet seasons (Kolmogorov-Smirnov; DMAX = 0.217; p > 0.05). The diet composition of H. mabouia was positively related to the relative availability of arthropods in the environment, both in terms of number (Spearman Rank Additionally, the frequency distribution of prey types found in the stomachs did not differ from that of potential prey sampled in the environment (Kolmogorov-Smirnov; D MAX = 0.231; p > 0.05). Concerning the food niche overlap between H. mabouia and two other sympatric lizards, the overlap between H. mabouia and M. frenata was 34.0% (in terms of number of prey) and in terms of volume of prey, the overlap was 32.87% (RA3, p < 0.05).With respect to H. mabouia and T. itambere, the food niche overlap in terms of volume of prey was 38.1% (RA3, p < 0.05).The data to number of prey consumed by T. itambere were not available in Van Sluys (1993).The overlap between two diurnal lizards, M. frenata and T. itambere was the higher (O 12 = O 21 = 0.472, in terms of volume, RA3, p < 0.05) concerning these three sympatric species. Discussion In general, our data indicated that the population of Hemidactylus mabouia from Valinhos is generalist and opportunistic regarding its diet.The diet of this population consisted essentially of several arthropods and was similar to that of other Brazilian H. mabouia populations studied (Vitt, 1995;Zamprogno and Teixeira, 1998;Rocha et al., 2002). The diet composition did not show any seasonal changes, which can be related to a relative constancy of prey availability throughout the year in the area, combined with the opportunistic foraging behavior of this lizard.Similarly to our study, Vrcibradic et al. (1998) during a study in the same area also, did not find a significant variation in abundance of arthropods throughout the year. The data showed that the composition of the diet did not vary between adult males and females, not even in ontogenetic terms.This may result from the fact that adult and juvenile lizards, which are opportunistic predators and are active at the same time, also use the same microhabitat (surface of rocks) in Valinhos, being potentially exposed to a similar array of prey.Also, there were no significant size differences (neither in morphologic differences of the mouth) between adult males and females of H. 
mabouia from Valinhos, which may have contributed to the high trophic niche overlap found between the sexes.The slightly higher value of diversity (H'p) of prey in adults compared to juveniles and the greater food niche overlap between adults compared to juveniles may be due to the inclusion of some few prey types by adults in their diet, which are not available to juveniles, due to their mouth size restrictions relative to adults.It is known that, for most lizard species, adults tend to consume similar food items utilized by juvenile lizards, just adding new items to their diets as they grow (Pough et al., 1998). The data show a general tendency of juveniles to consume a higher number of prey, but less voluminous when compared to adult lizards.This must result from the constraints to consume some larger prey due to their volume, which is imposed by morphological limitation in mouth size (width and breadth) of juvenile lizards.As smaller lizards are limited by mouth size and have to consume comparatively smaller prey when compared to adult lizards, they must invest in the consumption of a higher number of prey in order to maintain a favorable energetic balance.For the tropical tropidurid lizard Liolaemus lutzae, the number of prey in the stomach decreases with body size (Rocha, 1989).To be energetically profitable to a lizard, the prey should contain more energy than that dispended by the predator in searching, subjugating and swallowing them (Rocha, 1989). The energetic balance of a particular lizard population can be estimated by the proportion of individuals having empty stomachs (Huey et al., 2001).Nocturnal lizard species tend to show higher proportions of individuals with empty stomachs (24.1%) when compared to diurnal ones (10.5%;Huey et al., 2001).This pattern remains when diurnal (7.2%) and nocturnal geckonids (21.2%) are compared (Huey et al., 2001).The H. mabouia population from Valinhos presented a relatively small proportion of individuals with empty stomachs (4.8%).This proportion was similar to that found in another H. mabouia population in Espírito Santo State (4.6%; Zamprogno and Teixeira, 1998), and considerably lower than that reported for the cogeneric H. turcicus (13.5%) (Saenz, 1996), and also generally lower than those values recorded for nocturnal geckonids by Huey et al. (2001). The proportion of empty stomachs found for H. mabouia from Valinhos when compared to those from other nocturnal geckonids (see Huey et al., 2001) is suggestive that the population of this study may be in a better positive energetic balance than other geckonids.This may result from the relative constancy in the availability of prey throughout the year in the area and the generalist habits of the species, together with the fact that H. mabouia is the only nocturnal lizard species in the area.It is possible that these characteristics, which tend to result in a relatively favorable energetic balance, as found in the present study and in that of Zamprogno and Teixeira (1998), may contribute in an important way to the colonization success of this exotic species. Interspecific competition can be inferred when the overlapping of resources used by two species living in al-lopatry are higher than the overlapping of resources when the species are living in simpatry (Colwell and Futuyma, 1971).The dietary niche breadth of H. 
mabouia living in simpatry with another nocturnal gekkonid (Phyllopezus pollicaris) in the Caatinga (a semi-arid shrubby physiognomy) of northeast Brazil (Vitt, 1995) seems to be lower (based on number and proportion of category of consumed prey) than the dietary niche breadth found in this population of H. mabouia of Valinhos (H'p = 0.76; H' = 2.62).The values of niche overlapping found by Vitt (1995) for that two species of geckonid (O 12 = 60.8%) were also inferior to dietary overlap values between males and females (O 12 = 79.0%) in the population of H. mabouia studied here.This may be an indication that, even considering the high similarity in diet composition between males and females, the effect of intersexual competition in this species seems to be low. The overlap in diet niche between H. mabouia and two other sympatric lizards were lower than intraspecific overlap in H. mabouia.These results could be due to a temporal separation in time activity between nocturnal (H.mabouia) and diurnal species (T.itambere and M. frenata) and by differential preferences in microhabitat use (Van Sluys, 1991;Vrcibradic and Rocha, 1998).We can accept that this invader geckonid has been partitioning some food resources with sympatric native species. The diet composition of nocturnal lizards associated to human buildings could differ (in terms of category of prey consumed) between populations that inhabit urban environments and those living in natural ones (Hódar and Pleguezuelos, 1999).The diet composition of a Mourish gecko population (Tarentola mauritanica) living in nature, was compared with that of a population living in an urban environment (Hódar and Pleguezuelos, 1999).Their results indicated that the diet composition of the two populations differed basically by the presence of winged groups (such as lepidopterans and dipterans), which presented in higher number and frequency in the diets of lizards from urban areas.Similarly, the most important food items in the diet of H. mabouia populations living in natural environments (Vitt, 1995;Zamprogno and Teixeira, 1998; present study) were spiders, orthopterans, homopterans and eruciforms larvae, whereas in an urban population of H. mabouia from Campinas city, close to Valinhos, the most important items in diet were winged insect groups such as dipterans and winged hymenopterans (Ariedi-Jr et al., 2001), which probably are attracted by the artificial illumination of residences. We conclude that the generalist and opportunistic feeding habits of the H. mabouia population from Valinhos, together with the relative constancy in the local abundance of prey among seasons, result in a positive energetic balance of the studied population. Figure 1 . Figure 1.a) Number of prey found in the stomachs of adults and juveniles of Hemidactylus mabouia sampled at Valinhos, São Paulo State, between April 2002 and March 2003.Arrows indicate the mean number of prey found in the stomachs of adult (2.6 prey) and juvenile (2.7 prey) (Mann-Whitney; U = 7420.5;p < 0.05; n = 275).;and b) Volume of prey found in the stomachs of adults and juveniles of Hemidactylus mabouia sampled at Valinhos, São Paulo State, between April 2002 and March 2003.Arrows indicate the mean volume of prey found in the stomachs of adults (147.6 mm 3 ) and juveniles (40.9 mm 3 ) (Mann-Whitney; U = 4668.0;p < 0.001; n = 275). Table 1 . 
General composition of the diet of Hemidactylus mabouia (n = 291) expressed in terms of number (N), frequency (F), volume (V, in mm³) and importance value index (IVI) of each prey category. Adult and juvenile lizards were collected between April 2002 and March 2003 at Valinhos, São Paulo State.
4,454.8
2007-08-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Association of norepinephrine transporter methylation with in vivo NET expression and hyperactivity–impulsivity symptoms in ADHD measured with PET Attention deficit hyperactivity disorder (ADHD) is a common neurodevelopmental disorder with a robust genetic influence. The norepinephrine transporter (NET) is of particular interest as it is one of the main targets in treatment of the disorder. As ADHD is a complex and polygenetic condition, the possible regulation by epigenetic processes has received increased attention. We sought to determine possible differences in NET promoter DNA methylation between patients with ADHD and healthy controls. DNA methylation levels in the promoter region of the NET were determined in 23 adult patients with ADHD and 23 healthy controls. A subgroup of 18 patients with ADHD and 18 healthy controls underwent positron emission tomography (PET) with the radioligand (S,S)-[18F]FMeNER-D2 to quantify the NET in several brain areas in vivo. Analyses revealed significant differences in NET methylation levels at several cytosine–phosphate–guanine (CpG) sites between groups. A defined segment of the NET promoter (“region 1”) was hypermethylated in patients in comparison with controls. In ADHD patients, a negative correlation between methylation of a CpG site in this region and NET distribution in the thalamus, locus coeruleus, and the raphe nuclei was detected. Furthermore, methylation of several sites in region 1 was negatively associated with the severity of hyperactivity–impulsivity symptoms. Our results point to an epigenetic dysregulation in ADHD, possibly due to a compensatory mechanisms or additional factors involved in transcriptional processing. Introduction Presented by symptoms of hyperactivity, inattention, and impulsivity, attention deficit hyperactivity disorder is (ADHD) one of the most frequent neurodevelopmental disorders in children that persists into adulthood in ∼30% of the cases. While the exact underlying neurobiology of ADHD remains elusive, there is a general consensus that genetics contribute significantly to the etiology of the disorder, with an estimated heritability factor of 0.77 [1]. With many genes being investigated and only a few risk genes having been identified, the complex mechanism of the disorder is elucidated and suggests a role for gene-environment interactions [2]. Among the several genes investigated in ADHD the SLC6A2 gene which encodes for the norepinephrine transporter (NET) is included. It regulates norepinephrine homeostasis and is responsible for the reuptake of norepinephrine (and dopamine in prefrontal regions) into the presynaptic neuron [3]. It is implicated in ADHD as common medication such as methylphenidate (MPH) target the dopamine and norepinephrine transporters [4]. Several single nucleotide polymorphisms (SNPs) within the NET gene have been investigated in ADHD and some have been associated with the disorder and related behavioural phenotypes [5][6][7]. We previously did not detect differences in the NET binding potential between patients and controls [8], while on the other hand we observed genotypic differences in the NET binding potential in the thalamus and cerebellum between adults with ADHD compared with healthy controls (HC). Furthermore, we detected an association between hyperactivity-impulsivity symptom scores and cerebellar NET binding that was genotype dependent in adult ADHD patients [9]. Therefore, epigenetic mechanisms are potentially involved in the pathophysiology of ADHD. 
In recent years, the importance of DNA methylation has received increased attention as a possible modulator in psychiatric disorders in addition to the influence of genetic polymorphisms. DNA methylation is an epigenetic mechanism in which a methyl group is added to cytosine in cytosine-phosphate-guanine sites (CpG). This process can directly affect the activity and function of a gene without altering the DNA sequence: the methylation of these sites interferes with the binding of transcription factors and on the other hand of methyl-binding proteins can repress gene expression [2,10]. A few investigations have examined the role of DNA methylation in ADHD. The study by van Mil et al. detected lower DNA methylation profiles of several genes at birth to be correlated with ADHD symptoms at the age of 6 years [11]. Another study investigated DNA methylation levels of dopamine transporter (SLC6A3) with methylphenidate response in children with ADHD. They report a negative correlation between methylation levels and response to treatment, namely oppositional and hyperactive-impulsive symptoms, indicating that lower levels of methylation were associated with greater symptom improvement [12]. We know of only one study that investigated the SLC6A2 methylation in ADHD. In that study, the authors examined an abundance of genes, including the SLC6A2 in boys with ADHD and found that SLC6A2 methylation was associated with Cue-P3 task which is related to the posterior attention network [13]. Given our previous findings on the genetic influence on NET availability and behavioral symptoms, we sought to extend and complement our previous investigation in order to gain more insight by establishing whether any potential influence or interaction of epigenetic factors is present. With that we sought to assess if interaction of polymorphisms and DNA methylation potentially affect behavior and brain function. Our first aim was to test whether there is a difference in DNA methylation levels of CpG sites in the NET promoter between patients with ADHD and HC. Secondly, effects of candidate SNPs on the DNA methylation levels were explored. Thirdly, we assessed any potential associations between behavioural symptoms and NET methylation levels. Finally, we tested whether observed differences in methylation profiles translate to differential expression levels of the NET measured by PET. Methods In total 23 adult ADHD patients (age ± SD: 32.2 ± 10.9, 16 males) and 23 HC (age ± SD: 30.9 ± 10.6, 16 males) of which data have been published previously participated in the study [8,14]. Subgroup analysis for association with NET binding potential (BP ND ) and for testing of potential influence of SNPs included 18 adult patients with ADHD (age ± SD: 30.3 ± 10.5, 11 males) and 18 HC (age ± SD: 29.9 ± 10.5, 11 males). Subjects were recruited through the ADHD outpatient clinic at the Department of Psychiatry and Psychotherapy, Medical University of Vienna and via advertisement as previously published [8,14]. Patients had been free from any psychopharmacological treatment at least 6 months prior to study inclusion. Subjects underwent physical examinations and were tested for current substance use with a urine test. Inclusion criteria demanded for patients to have history of symptoms in childhood and a current diagnosis of ADHD. 
Subjects were interviewed using the Conners' Adult ADHD Diagnostic Interview for DSM-IV (CAADID, Conners, 1999), Conners' Adult ADHD Rating Scale Investigater-Screen Version (CAARS-Inv:SV), Conners' Adult ADHD Rating Scale: Observer-Screen Version (CAARS-O:SV), and the Conners' Adult ADHD Rating Scale: The Self-report Screening Version (CAARS-S:SV). Subjects were excluded if they had any comorbid DSM-IV Axis I and II disorder as determined by the Structural Clinical Interview for DSM-IV. Written consent was aquired from all participants and they were financially reimbursed for their participation. The Ethics Committee of the Medical University of Vienna approved this study. Selection of single nucleotide polymorphisms and genotyping Four SNPs were included based on our previous publication: rs28386840, rs2242446, rs40615, and rs15334 [9]. Procedures were performed as previously described [9]. In short, 9 ml of blood from each subject was collected in EDTA blood tubes. Isolation of DNA was done using the QiaAmp DNA blood maxi kit (Qiagen, Hilden, Germany). Genotyping was performed using the iPLEX assay on the MassARRAY MALDI-TOF mass spectrometer. Allele specific extension products were selected and genotypes assigned by Typer 3.4 Software (Sequenom, San Diego, CA, USA). Quality criteria (of individual call rate >80%, SNP call rate >99%, and identity of genotyped CEU trios (Coriell Institute for Medical research, Camden, NJ) with HapMap database >99%) were applied and met. Detailed protocol of the DNA methylation design and profiling using EpiTYPER is described by Suchiman et al. [18]. In short, around 100 ng of genomic DNA was converted into bisulfite using EZ-96 DNA methylation kit, Shallow-Well Format (ZYMO Research). This was followed by PCR amplification. The primers used for PCR amplification of regions are listed in Supplemental Table 1. Step down PCR reaction using 20 ng of bisulfite converted DNA was performed as by protocol starting with 15 min at 95°C, followed by 4 cycles at 20 s at 95°C, 30 s at 65°C, and 1 min at 72°C, thereafter 4 cycles: 95°C for 20 s, 58°C for 30 s, and 70°C for 1 min. Last, 38 cycles as described: 20 s at 95°C, 30 s at (AT)°C for 3 min, and at 72°for 1 min. The final extension was carried out at 72°C for 3 min and cooled down to 4°C. To check for the generation of PCR products, selected samples were run on a 1.5% Agarose gel. Following dephosphorylation of unincorporated dNTPs, PCR products were cleaved into smaller fragments using the MassCleave reaction at 37°C for 3 h (Sequenom). After removal of excess ions, 15-20 nl of each sample were spotted onto a SpectroCHIP II-G384 and analyzed with the Epityper 1.2 (Agena Bioscience). Quality control included the following criteria: sample call rate >50%, CpG call rate >85%, and duplicate values with stdev <0.1). In order to confirm and increase our call rates, a new measurement was done using the same method. With that we were able to confirm our previous results as well as increase our call rates with the addition of 13 CpG sites. All sites not fulfilling the criteria were excluded from further analysis. The resulting data from the mass spectrometer was preprocessed using the EpiTYPER Analyser. A period between number annotated at the same CpG illustrates that the sites occur within the same fragment. Average methylation was calculated for each of the three regions investigated. 
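A hedged sketch of the quality-control filtering and per-region averaging described above is given below, applied to a hypothetical table of EpiTYPER methylation values; the thresholds follow the criteria stated in the text, while the sample, CpG, and region names are placeholders.

```python
import pandas as pd

# Quality control and per-region averaging for a table of methylation values
# (rows = subjects, columns = CpG units, values in [0, 1], NaN = failed call).
# Thresholds mirror the criteria above: sample call rate > 50%, CpG call rate > 85%.

def qc_and_average(df, cpg_to_region, sample_call=0.50, cpg_call=0.85):
    df = df.loc[df.notna().mean(axis=1) > sample_call]          # drop poorly measured subjects
    df = df.loc[:, df.notna().mean(axis=0) > cpg_call]          # drop poorly measured CpG units
    regions = pd.Series({c: cpg_to_region[c] for c in df.columns})
    return df.T.groupby(regions).mean().T                       # average methylation per region

# Placeholder example: CpG and region labels are hypothetical, not the study's.
demo = pd.DataFrame(
    {"CpG_1": [0.42, 0.51, None], "CpG_4": [0.35, 0.44, 0.39], "CpG_20": [0.05, 0.07, 0.06]},
    index=["subj_A", "subj_B", "subj_C"],
)
mapping = {"CpG_1": "region 1", "CpG_4": "region 1", "CpG_20": "region 2"}
print(qc_and_average(demo, mapping))
```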
Positron emission tomography (PET) Subjects underwent PET (General Electric Medial Systems, Milwaukee, WI, USA) scans at the Department of Biomedical and Image-guided Therapy, Division of Nuclear Medicine at the Medical University of Vienna applying the tracer (S,S)-[ 18 F]FMeNER-D 2 [19]. Detailed information regarding the scans have been described previously [8]. A retractable 68 Ge rod source for tissue attenuation correction was performed prior to the dynamic emission scan, during a 5-min transmission scan and acquired in 3D mode. The acquisition of data started at 120 min after a bolus i.v. injection of 4.7 MBq/kg body weight (ADHD patients: 393 ± 95 MBq, HC: 384 ± 61 MBq; p > 0.05, t-test) of (S,S)- The mean value of the specific radioactivity of (S,S)-[ 18 F]FMeNER-D 2 was 537 ± 383 GBq/ μmol (ADHD patients) and 473 ± 218 GBq/μmol (HC), (p > 0.05, t-test). Series of six consecutive time frames each lasting 10 min in an interval of 120-180 min after tracer bolus application was performed to measure radioactivity in the brain. The collected data was reorganized in volumes consisting of 35 transaxial sections (128 × 128 matrix) using an iterative filtered back projection algorithm (FORE-ITER) with a spatial resolution of 4.36 mm full width at halfmaximum 1 cm next to the center of the field of view. Magnetic resonance (MR) images from subjects taken on a 3 Tesla Philips scanner (Achieva) using a 3D T1 FFE weighted sequence, yielding 0.88 mm slice thickness and in plane resolution of 0.8 × 0.8 mm were used for coregistration [8]. Data preprocessing and quantification of norepinephrine transporter Information on data preprocessing and the quantification of the NET is described in detail elsewhere [8]. In short, individual time frames of the dynamic PET scan were readjusted to the mean of frames with no head motion, determined by visual inspection. The readjusted images were then coregistered to each subjects MRI scan using a mutual information algorithm in SPM8 (Wellcome Trust Centre for Neuroimaging, London, UK: http://www.fil.ion. ucl.ac.uk/spm/). The caudate is considered devoid of NET [20] and was therefore used as the reference region for the parametric images of NET BP ND . The caudate was manually delineated on individual MRIs using PMOD image analysis software, version 3.1 (PMOD Technologies Ltd, Zurich, Switzerland, www.pmod.com). NET quantification was calculated according to Arakawa et al. [21]. BP ND was calculated as the ratio between the area under the timeactivity curve of the target region and the area under the time-activity curve for the reference region minus 1. An integration interval of 120-180 min was applied. The caudate was manually delineated on individual MRIs using PMOD image analysis software, version 3.1 (PMOD Technologies Ltd, Zurich, Switzerland, www.pmod.com). The developed transformation matrices were applied to the coregistered parametric images and then warped into MNI standard space. Regions of interest (ROIs) The selection of brain ROIs was based on regions containing high expression of the NET [20,22] as well as target regions in behavioral control [23]. Those regions included the thalamus, locus coeruleus, putamen, cerebellum, and the raphe nuclei. The ROI NET BP ND was extracted from the Hammer Maximum Probability Atlas (Hammers, et al. 2003) and through manual delineation on the MNI T1 single-participant brain. The (S,S)-[ 18 F]FMeNER-D 2 radioligand introduces a potential bone spill over and hence cortical regions were excluded from the analysis. 
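The ratio quantification of BP_ND described above can be illustrated with a short sketch that integrates hypothetical time-activity curves over the 120-180 min interval with the trapezoidal rule; the frame times and activity values are invented, and the caudate serves as the reference region as in the text.

```python
import numpy as np

# BP_ND = AUC(target) / AUC(reference) - 1, with areas under the time-activity
# curves taken over the 120-180 min integration interval. All values below are
# hypothetical and only illustrate the arithmetic of the ratio method.

frame_mid_times = np.array([125., 135., 145., 155., 165., 175.])   # min
thalamus_tac    = np.array([5.1, 4.9, 4.8, 4.6, 4.5, 4.3])         # kBq/ml (invented)
caudate_tac     = np.array([3.4, 3.3, 3.2, 3.1, 3.0, 2.9])         # kBq/ml (invented)

def auc(y, t):
    """Trapezoidal area under the curve."""
    y, t = np.asarray(y, float), np.asarray(t, float)
    return np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(t))

def bp_nd(target_tac, reference_tac, times):
    """BP_ND = AUC(target) / AUC(reference) - 1 over the given frames."""
    return auc(target_tac, times) / auc(reference_tac, times) - 1.0

print(f"thalamic BP_ND = {bp_nd(thalamus_tac, caudate_tac, frame_mid_times):.2f}")
```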
So far, the cause of the observed spillover remains unresolved. In in vitro experiments, defluorination and subsequent binding to bone could not be confirmed. Possibly, some other metabolic degradation route is responsible for this phenomenon [24]. Statistical analysis Descriptive parameters were computed and NET methylation levels were assessed for normality using the Shapiro-Wilk test. In case of deviation from normality, Mann-Whitney was computed to test for differences between study groups. Effects of group (ADHD patients vs HC) on methylation levels were tested using linear mixed model using the average mean of methylation levels from each region, as well as individual CpG sites methylation values as the dependent variables. Potential confounding factors, such as previous medication status, age, and sex were accounted for and excluded if rendered insignificant. The model tested for main effects and any possible interactions between group and CpG sites on methylation. If rendered significant, post hoc analysis included Mann-Whitney tests and t-test in case of normality. Effects of SNPs (homozygous major vs minor allele) and group (ADHD vs HC) on binding potential and behaviour (see Supplement Page 2, Supplemental Table 2, Supplemental Figs. 2, 3), as well as methylation levels were also tested for using genotype (major vs minor allele), group (ADHD vs HC) as fixed factors and binding potential and methylation levels as the dependent variables. Potential association of NET BP ND and methylation levels were examined using a linear mixed model with a stepwise backward elimination procedure. The association of behavioral scales and methylation levels were investigated using Pearson correlation in patients only. Further regression analysis tested the combined effects of genotypes and methylation levels on binding potential and behavioral scales. SPSS version 22.0 (IBM Corp. Released 2013. IBM SPSS Statistics for Windows, Armonk, NY: IBM Corp) was used for the analyses. The significance level was set at p < 0.05 and corrected for multiple comparisons using the Benjamini-Hochberg correction [25]. Demographic characteristics Demographic information of subjects is provided in Table 1. Demographics for the subgroup analysis is provided in Table 2. No difference of either age or sex was detected between groups. DNA methylation of the promoter of SLC6A2 The rates of methylation levels across regions are comparable to previous studies on SLC6A2 DNA methylation [16,17]. Toward the 5′ end in promoter region 1, CpG sites were hypermethylated in all subjects, while toward the 3′ end the sites were hypomethylated or marginally methylated. Across CpG sites in promoter region 1 the individual methylation ranged from 9 to 59%. Promoter region 2 is a particularly dense region in terms of CpG sites, several sites were omitted from analysis as they did not fulfil the quality control criteria. In region 2, the individual methylation ranged between 2 and 11% and between 2 and 10% in region 3. Table 3). Effect of SNPs on methylation No effects of any of the investigated SNPs were found on either the averaged mean of methylation levels nor on individual CpG site methylation. Association of NET methylation with NET BP ND Linear mixed model analysis revealed a main effect of CpG site 4 in region 1 (F 19.99 = 68.14, p < 0.001) as well as an interaction effect of group and CpG site 4 (F 11.30 = 101.85, p < 0.001). 
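For the multiple-comparison step, the following sketch implements the Benjamini-Hochberg step-up procedure on placeholder p-values; it is a generic illustration of the correction cited above, not the exact SPSS routine used in the study.

```python
import numpy as np

# Benjamini-Hochberg step-up procedure: reject the hypotheses with the k
# smallest p-values, where k is the largest index i with p_(i) <= i * alpha / m.

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean array marking which hypotheses are rejected at FDR alpha."""
    p = np.asarray(p_values, float)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])
        rejected[order[: k + 1]] = True
    return rejected

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]))
```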
Post hoc analysis showed negative correlation between CpG 4 site in patients with ADHD in the following Table 4 and Supplemental Figs. 4-7). One potential influential outlier was detected in the locus coeruleus in the patient group. When logtransforming the data, the correlation coefficient increased slightly to r = −0.556, p = 0.01 (Spearman correlation r = −0.581, p = 0.01). Removal of outlier resulted in a coefficient of r = −0.776, p < 0.001. However, no associations between NET methylation and NET BP ND were observed in HC. Combined analysis of SNPs and methylation on NET BP ND and behavior Lastly, combined effects of genotypes and methylation levels on NET BP ND on one hand and behavioral scales on the other hand were accounted for. No effects withstanding corrections for multiple testing of any of the SNPs investigated in combination with methylation levels on NET BP ND or on behavioral symptoms were found. Discussion In this study, we present the DNA methylation profile of the SLC6A2 gene in patients with ADHD and HC. Our results suggest the differential NET methylation in ADHD to be a An asterisk indicates the statistically significant regions promoter region specific. Hypermethylation was detected toward the 5′ end of the promoter in patients with ADHD compared with controls, while this effect reversed toward the 3′ end. Negative association was detected between hyperactivity-impulsivity symptom scores with NET methylation levels for several CpG sites. In a subgroup analysis, we demonstrate for the first time a negative correlation between methylation of a single CpG site with in vivo NET expression in several brain regions in patients only. In promoter region 1, hypermethylation at the 5′ end was detected in patients in comparison with controls indicating decreased transcriptional activity of the NET. Several potential mechanisms may come at play here. The family of DNA methyltransferases (DNMTs) enzymes are involved in the transfer of methyl groups to DNA. DNMTs may recruit histone deacetylase and histone methylase resulting in transcriptional repression. Secondly, DNA methylation can directly decrease expression by preventing transcriptional factors from binding to the DNA. Thirdly, DNA methylation can repress transcriptional elongation caused by reduced RNA polymerase II occupancy and chromatin accessibility over the gene body. Lastly, methyl-CpGbinding proteins (MBPs) can identify methylated DNA and recruit corepressors in order to silence the transcription and alter the surrounding chromatin [26,27]. Of the MBPs, the methyl-CpG-binding protein 2 (MeCP2) is perhaps the most studied and a key transcriptional regulator associated with transcriptional repression. It binds to methylated DNA and recruits other factors that alter the chromatin structure [28]. NET is hypothesized to be repressed in human disorders where DNA hypermethylation has been demonstrated using peripheral whole blood such as in panic disorder and cardiovascular disease [29,30]. In the study of Esler et al., it was evident that MeCP2 binds to the methylated promoter region of the NET in panic disorder patients [30]. Another investigation found MeCP2 expression levels to be significantly decreased in boys with ADHD [31]. It is therefore likely that the MeCP2 binds distinctively to methylated regions of the NET in ADHD patients. Furthermore, it can be assumed that MeCP2 binds distinctively to hypermethylated regions of the NET promoter resulting in extended repression of NET expression in ADHD patients. 
MeCP2 is, however, not exclusively bound to methylated DNA: it has previously been shown to bind hypomethylated sites in the promoter region of the NET as well [15,16]. For certain sites in promoter regions 1 and 3 we found the effect to be reversed: patients had lower methylation levels in comparison with controls, which could potentially be explained by the multifunctional role of MeCP2 and its ability to bind hypomethylated regions. This remains speculative, however, and requires further research to unravel the underlying mechanism.

Interestingly, we detected a negative association between methylation of a single CpG and NET expression in several brain regions of interest. Higher methylation levels were associated with lower in vivo expression of the NET in the thalamus, locus coeruleus, and raphe nuclei. This finding supports previous evidence of the molecular effect of DNA methylation on expression [27,32]. Strikingly, this was only observed in patients and not in HC. In addition to the potential effects of the aforementioned epigenetic factors, other factors may come into play here. As SLC6A2 has many cis-regulatory elements in its promoter region, it is possible that they behave distinctively in ADHD. A binding element for the transcription factor nuclear factor kappa B (NF-κB) overlaps that particular CpG site, possibly affecting transcription in patients [33]. The exact mechanism of action remains speculative; however, a previous study has shown that inhibition of NF-κB significantly upregulates the NET [34]. Although NF-κB is a key regulator of the inflammatory response, it has also been shown to be involved in synaptic plasticity, memory, stress, addiction, and locomotor activity [35]. The NET is well established for its role in memory and stress [3] and is also implicated in modulating synaptic plasticity [36]. It is plausible that transcription at this particular site differentially affects patients with ADHD depending on the amount of exposure to environmental factors. Studies have implicated oxidative stress [37,38], stress, smoking, alcohol, and other pre- and perinatal risk factors in ADHD [39,40]. Future studies should therefore consider various other risk factors in order to obtain a clearer picture of the underlying pathophysiology.

We found several sites within promoter region 1 to be negatively associated with symptoms of hyperactivity and impulsivity; more precisely, lower methylation levels were associated with increased symptom severity. Decreased methylation levels may reflect higher transporter expression, resulting in increased uptake of extracellular norepinephrine. This is of particular importance as norepinephrine modulates multiple cognitive processes, including inhibitory control, that are often impaired in ADHD. Furthermore, common medications for ADHD such as MPH and atomoxetine significantly improve clinical symptoms such as hyperactivity by blocking uptake through the NET and increasing NE levels [41-43]. Our results are in line with our previous study, where we found hyperactivity-impulsivity scores to be genotype dependent: patients carrying the major allele of rs40615 and rs15534 had higher scores and higher NET availability [9]. Our results are also transferable to studies on the dopamine transporter (DAT1), as medications for ADHD target these two systems by increasing levels of dopamine and norepinephrine.
One study found a correlation between the DAT1 gene and hyperactivity-impulsivity symptom responses following MPH treatment: less methylation was associated with greater MPH response [12]. Another study detected a negative association between DAT1 methylation and hyperactivity scores [44].

We can only speculate about the differential association found between certain CpG sites and either symptomatology or in vivo expression. Firstly, as the NET has several regulatory elements and transcription factor binding sites within the gene, it is possible that they behave in distinct ways with different consequences for behavior or brain function. Secondly, the effects may be too small to detect given the sample size. A third option is the influence of polymorphisms located on the gene, possibly affecting or interacting with epigenetic mechanisms and resulting in a certain phenotype. We failed to demonstrate any effect of SNPs on methylation levels, and no associations between methylation levels and NET binding were genotype dependent, suggesting that the epigenetic effect is stronger than, and independent of, genotypic variation. We cannot, however, rule out potential effects of genetic variation on methylation levels, as the sample size in the subgroup analysis is quite small. Moreover, no combined effects of methylation levels and genotypes on brain binding potentials or on behavioral scales were found, further emphasizing the need for future testing in larger samples. Furthermore, we only investigated a handful of SNPs that we had previously shown to have a genotype-dependent effect on NET binding [9].

We must acknowledge several limitations of our study. Firstly, as with many neuroimaging studies, the sample size is quite small; replications in larger samples are therefore warranted. Secondly, we can only estimate DNA methylation of the NET from whole blood as a proxy for the brain, and DNA methylation tends to be tissue specific [45]. We cannot draw definite conclusions about methylation patterns in the brain, although using peripheral blood is considered feasible, as several studies have shown correlations between peripheral markers and the brain [46]. Last, although we performed a new analysis, confirmed our previous results, and successfully increased our call rates, we were unable to repeat the analysis using a different method. Further studies using different methods, such as pyrosequencing, are necessary to validate our results. On the other hand, the pattern of methylation observed within regions is in line with the study by Bayles et al. [17].

Regardless of these limitations, we provide new insights into the role of epigenetic mechanisms underlying NET imbalance in ADHD. We demonstrate for the first time differential DNA methylation levels in SLC6A2 between patients with ADHD and HC. Differential methylation in patients may be due to transcription factors behaving in a distinct manner in ADHD. Higher site-specific methylation at CpG 4 seems to predict in vivo availability in a region-specific manner and lends support to altered transcriptional control in ADHD. We also show an epigenetic effect of DNA methylation on behavioral control, namely hyperactivity-impulsivity symptoms, for several sites. While these results look promising, future studies are required, with larger sample sizes and genetic variants covering the whole region of the NET gene.
Furthermore, although not addressed in this study, future research should also include patients currently undergoing pharmacotherapy, as it may affect DNA methylation [47,48].
Development of the late glacial Baltic basin and the succession of vegetation cover as revealed at Palaeolake Haljala, northern Estonia

The 4.5 m thick Haljala sequence in North Estonia was studied to provide information on palaeoenvironmental changes between 13 800 and 11 300 cal yr BP. The late glacial environmental history of North Estonia was reconstructed using an AMS-dated pollen record, sediment composition, plant macrofossils, and ostracods. The obtained data show environmental fluctuations that are linked to the climate shifts of the Last Termination in the North Atlantic region. Decreases in the arboreal pollen accumulation rate around 13 700-13 600 and 13 300-13 100 cal yr BP point to short-lived climatic deterioration within the Allerød Interstadial and have been correlated with the coolings of the Greenland Interstadial GI-1c and GI-1b events, respectively. Between 13 100 and 12 850 cal yr BP the pollen accumulation rates of trees, shrubs, and herbs as well as the organic matter content increased, indicating short-term climate amelioration and the establishment of pine-birch woods. This change has been correlated with the GI-1a event. Climate deterioration during the Younger Dryas (GS-1) was inferred from the reduction of tree pollen and the flourishing of cold-tolerant taxa, such as Artemisia, Chenopodiaceae, and Cyperaceae. The new data show that the ice cover of the Pandivere Upland started to disappear as early as about 13 800 cal yr BP.

INTRODUCTION
The Pandivere ice-marginal formations were shaped by the ice streams of the retreating Scandinavian ice sheet. The formation and configuration of the proglacial lake systems that developed in front of the decaying ice margin were controlled by geology, local topography, and the dynamics of the ice sheet, as well as by climatic fluctuations. The late glacial history of Estonia has been investigated for a long time, yet palaeogeographical and palaeoenvironmental reconstructions of the area at the closing stage of the last glaciation have been hampered by the lack of a chronology. To fill this gap, we revisited the Haljala site and performed multi-proxy studies of sediment cores.

The Haljala overgrown lake was first examined by R. Männil and R. Pirrus, who studied the Holocene lacustrine lime distribution and palynostratigraphy (Männil 1961; Männil & Pirrus 1963). The pollen diagrams by Pirrus (Männil & Pirrus 1963; Pirrus 1965; Pirrus & Sarv 1968) provided a general picture of the vegetation succession and climatic history during the late glacial. On the basis of pollen composition she differentiated the Allerød and the Younger Dryas sediments. However, the palynological data suffered from simplified taxonomic composition (mostly tree pollen counts were included), sparse resolution between pollen samples, limited pollen sums, absence of pollen concentration data and, most importantly, lack of radiocarbon dates, which hindered detailed correlation of the environmental changes.

The palaeolake at Haljala was an ancient lagoon of the Baltic Ice Lake (BIL), which was isolated from the BIL during the proglacial Lake Kemba (A₂) phase (Saarse et al. 2007). The former palaeolake, today a bog, which has been drained by numerous ditches and mainly reclaimed and transformed into pasture, is located in a depression above a 20 m deep buried valley. The buried valley is filled with till, clayey silt, lacustrine lime, and peat. The distribution of lacustrine lime roughly outlines the ancient lake, about 4.6 km long and 200 m wide, dammed up in the north by a spit that formed during the proglacial lake A₂ phase (Saarse et al. 2007).

The key objective of the present study was to determine the chronology of the late glacial vegetation succession, as well as environmental and climatic changes, and to refine the timing of ice recession in North Estonia. For this purpose we used AMS ¹⁴C dating, lithological composition, pollen stratigraphy, ostracods, and macrofossil remains of plants, mosses, and algae.

MATERIAL AND METHODS
The coring site in Palaeolake Haljala (59°25′27″N, 26°17′42″E), at an elevation of 67.4 m a.s.l., is located about 90 km east of Tallinn, near the crossing of the Rakvere-Võsu and Tallinn-Narva highways (Fig. 1), just where the road crosses the main draining trench. Multiple 1 m long sediment cores were taken from the central part of the ancient lake with a Belarus peat sampler in 2006 and 2007. One centimetre thick sub-samples for loss-on-ignition (LOI) analyses were taken continuously. Bulk samples were dried at 105 °C overnight and burnt at 525 and 900 °C to calculate moisture, organic matter (OM), carbonate, and minerogenic compounds. Loss-on-ignition analyses were performed on all cores and served as the basis for the correlation of cores. The grain size distribution of clayey deposits untreated by chemicals was determined in the Institute of Ecology at Tallinn University with a Fritsch Analysette 22 laser particle size analyser.

Sub-samples for pollen analyses (2 cm³) were taken at 5 cm intervals and prepared according to Berglund & Ralska-Jasiewiczowa (1986). In addition, minerogenic samples were treated with concentrated hydrofluoric acid. Lycopodium tablets were added to calculate pollen concentration (Stockmarr 1971). Up to 350 terrestrial pollen grains were counted. Pollen identification followed Moore et al. (1991). Both the percentage and pollen accumulation rate (PAR) diagrams were constructed using the TILIA and TGView programs (Grimm 2000). The zonation of pollen data is based on constrained cluster analysis by the sum of squares (CONISS) method. Palynological richness was estimated by rarefaction analysis (Birks & Line 1992) using the PSIMPOLL 4.10 program (Bennett 1994, 1998). All identified terrestrial pollen taxa were included and standardized to the lowest pollen sum.

Macrofossils were extracted by soaking 5 cm thick silt samples in water and sieving through a 0.25 mm mesh. Thirteen samples were prepared for fossil seed and moss fragment analysis. The samples were treated according to the method proposed by Birks (2001). Ostracod subfossils were picked out of five sediment samples (sample size ~25 cm³) under a binocular microscope using a fine wet brush. Species identification and ecological preferences are based on the monograph by Meisch (2000). Ostracod shells and valves were photographed with the scanning electron microscope at the Natural History Museum, London, UK.

Accelerator mass spectrometry (AMS) ¹⁴C dating was performed in the Poznan and Uppsala radiocarbon laboratories, and the dates were calibrated on the basis of the IntCal04 calibration curve (Reimer et al. 2004).

Sediment lithostratigraphy and grain size distribution
On the basis of grain size composition the sediment sequence was subdivided into five lithological units (Fig. 2; Table 1). The base of the studied sequence consists of fine-grained grey silt overlain by alternating grey clayey and fine silt, and ends with greenish-grey calcareous sandy silt (Fig. 2). The grain size and LOI results revealed the homogeneous nature of the silt, with the clay fraction fluctuating within 7-22%, silt within 72-85%, and sand within 5-8%. The mineral matter content was between 90% and 95%. The content of OM in the bottom clayey silt was low, around 1-2%, except between core depths of 340-460 cm and 210-225 cm, where it reached 4% and 7.3%, respectively (Fig. 2). The transition from silt to calcareous silt was sharp, with carbonates reaching 45% and mineral matter decreasing to 50%. The sediment accumulation rate, ca 3 mm yr⁻¹ between 13 600 and 12 940 cal yr BP, indicates rapid influx of minerogenic sediments (Fig. 2). After that the sedimentation rate decreased to roughly 0.9 mm yr⁻¹.

Chronology
Altogether eight levels in the Haljala sediment sequence were dated by the AMS ¹⁴C technique (Table 2). Three dates on terrestrial macrofossils (Haljala 2, 4, and 7) gave ages consistent with sediment depth and pollen stratigraphy. However, at an earlier stage of the investigations, when only small sediment samples were available, finds of terrestrial plant remains were scarce; therefore unidentified pieces of organic debris were sent in for AMS dating. Three ¹⁴C dates (Haljala 1, 3, and 5) provided too young ages, perhaps due to contamination by root penetration from the surface or some other reason. Two dates (Haljala 6 and 8) were apparently too old in comparison with the pollen stratigraphy and the deglaciation chronology (Kalm 2006). Such discrepancy is not fully understood, but it seems that the selection of the dating material is crucial. Contamination during coring seems not to be an issue, as the obtained cores displayed well-preserved lamination (Fig. 3). Controversial dates were excluded from the age-depth model, which currently is based on three dates and covers a time span of ca 13 800-11 300 cal yr BP. Due to the scarcity of radiocarbon dates the sedimentation rate estimates should be considered a first approximation, and the pollen accumulation diagram should be interpreted accordingly. In addition, we used in the current study the event stratigraphic units of the Greenland Ice Core Chronology 2005 (GICC05) and their ages based on dates assigned to the corresponding boundaries suggested by Lowe et al. (2008), originally put forward by Björck et al. (1998). All these dates are calibrated ages that correspond to AD 2000: GI-1d, Older Dryas (14 100-13 950 cal yr BP); GI-1c, Allerød warmer period (13 950-13 300 cal yr BP); GI-1b, Allerød colder period (13 300-13 100 cal yr BP); GI-1a, Allerød warmer period (13 100-12 900 cal yr BP); GS-1, Younger Dryas (12 900-11 700 cal yr BP).

Palynostratigraphy
A total of 67 samples including 59 terrestrial taxa were analysed. Seven local pollen assemblage zones (LPAZ) were distinguished based on statistical evaluations of the pollen spectra.
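A minimal numerical sketch of such a three-point age-depth model is given below. The depth-age pairs are illustrative placeholders chosen only so that the two quoted sedimentation rates (ca 3 and 0.9 mm yr⁻¹) come out; the accepted dates themselves are listed in Table 2.

```python
# Linear age-depth sketch between three accepted calibrated dates.
# Depth/age pairs are illustrative placeholders, not Table 2 values.
import numpy as np

depth_cm = np.array([460.0, 262.0, 120.0])          # dated levels, cm
age_cal_bp = np.array([13600.0, 12940.0, 11360.0])  # calibrated ages

def age_at(depth):
    """Interpolate calendar age (cal yr BP) at a given core depth (cm)."""
    return np.interp(depth, depth_cm[::-1], age_cal_bp[::-1])

# Sedimentation rate between dated levels, in mm/yr:
rate_mm_yr = np.abs(np.diff(depth_cm)) * 10 / np.abs(np.diff(age_cal_bp))
print(rate_mm_yr)   # -> [3.0, ~0.9] with the placeholder pairs above
```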
Juniperus was present constantly, Salix sporadically. The percentages of Alnus (4-12%), Picea (8-17%), Ulmus, Tilia, Populus, Fraxinus, and Corylus pollen (Quercetum mixtum, QM, up to 7%) were rather high, indicating redeposition, obviously from the Eemian deposits, whose outcrops are located on the islands of Prangli and Uhtju in the Gulf of Finland. The NAP content was around 25%, with Cyperaceae, Poaceae, Artemisia, and Chenopodiaceae dominating. Betula nana was present in low values (1-4%). The concentration of tree and shrub pollen fluctuated between 6000 and 15 000 grains cm⁻³, whereas herb values were more than three times smaller (1900-5300 grains cm⁻³). An increase in Pinus pollen, together with a decrease in Betula, Picea, Juniperus, and Salix, marked the upper boundary of the LPAZ. Such a pollen assemblage is characteristic of the late glacial, obviously of the beginning of the Allerød chronozone (Mangerud et al. 1974) and of GI-1c (13 950-13 300 cal yr BP; Lowe et al. 2008).

LPAZ Ha-3 (408-333 cm, ca 13 600-13 300 cal yr BP, clayey silt)
This LPAZ was characterized by irregular Betula and Pinus pollen curves; Picea and Alnus were present in almost equal values (Fig. 4). Total AP frequency was about 70%. Salix and Juniperus were more frequent than in the previous LPAZ. At the peak of Pinus pollen at 355 cm, Betula nana and Salix pollen were missing and Betula had decreased. The proportion of Artemisia decreased and that of Chenopodiaceae increased. The concentration of tree pollen increased and fluctuated between 6000 and 19 800 grains cm⁻³, that of shrubs between 150 and 1000 grains cm⁻³, and that of herbs between 2900 and 7100 grains cm⁻³.

LPAZ (ca 13 100-12 800 cal yr BP, upper part of fine-grained silt and lower part of clayey silt)
This LPAZ was defined by changes in the AP/NAP ratio. Herb pollen percentages had increased, especially those of Artemisia, Poaceae, Cyperaceae, and Chenopodiaceae, while tree pollen percentages had decreased, above all those of Pinus and Picea (Fig. 4). Juniperus and Artemisia were the main taxa to profit from the decline in AP. They colonized fresh areas around Haljala, which emerged after the isolation of the basin from the large proglacial lake between 13 000 and 12 900 cal yr BP. The concentration of tree pollen was high but irregular, reaching up to 22 800 grains cm⁻³ but declining to 11 800 grains cm⁻³ at 230 cm. The PARs of trees (3700-6900 grains cm⁻² yr⁻¹), shrubs (350-450 grains cm⁻² yr⁻¹), and herbs (3500-4800 grains cm⁻² yr⁻¹) reached their maxima at the LPAZ lower limit around 13 100 cal yr BP and decreased sharply at the LPAZ upper limit (Fig. 5). Pollen concentration remained high up to 12 500 cal yr BP. This LPAZ roughly coincides with GI-1a (13 100-12 900 cal yr BP).

LPAZ (clayey silt)
This LPAZ was marked by the abundance of herb pollen, reaching 65% (Fig. 4). Cyperaceae and Artemisia had their maxima, 25% and 23%, respectively. Pinus pollen percentages decreased, but the frequency of Picea remained at the previous level. Shrubs, especially Juniperus, were abundantly present. The values of the redeposited QM species had also decreased, particularly those of Corylus. While NAP concentration increased up to 22 300 grains cm⁻³, AP concentration decreased (7300-17 000 grains cm⁻³), reflecting cooling in the Younger Dryas. The amounts of Sphagnum and Pediastrum boryanum spores increased. The concentration of shrub pollen also reached its maximum (600-4400 grains cm⁻³).
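The PAR values quoted in these zone descriptions follow directly from pollen concentration and the sediment accumulation rate; a one-line illustration, with numbers taken from the ranges above purely as an example:

```python
# PAR (grains cm^-2 yr^-1) = concentration (grains cm^-3) * accumulation (cm yr^-1)
concentration = 19_800   # e.g., peak tree pollen concentration in Ha-3
sed_rate = 0.3           # 3 mm/yr, the pre-12 940 cal yr BP estimate
par = concentration * sed_rate
print(par)               # ~5940, within the quoted tree PAR maxima
```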
LPAZ (calcareous sandy silt)
Near the lower limit of the LPAZ, tree pollen percentages were the lowest throughout the studied sequence, but shrubs (Betula nana, Juniperus, and Salix) reached their maxima (Fig. 4). The share of Artemisia diminished, while Poaceae and Urtica, on the other hand, reached their maxima. At the top of the LPAZ, Betula pollen increased sharply to 80% and herb pollen decreased from 50% to 20%. Poaceae, Cyperaceae, Artemisia, and Chenopodiaceae suffered especially; constituents of the thermophilous Quercetum mixtum trees and Picea disappeared from the pollen spectra. The concentration of tree pollen fluctuated between 4000 and 11 500 grains cm⁻³, amounting to 61 500 grains cm⁻³ in the topmost sample. The PARs of all taxa increased upwards: tree pollen from 1000 to 2300 grains cm⁻² yr⁻¹, shrubs from 200 to 1200 grains cm⁻² yr⁻¹, and herbs from 1100 to 3300 grains cm⁻² yr⁻¹ (Fig. 5). This LPAZ represents the transition between the Younger Dryas and the Preboreal (Mangerud et al. 1974).

Plant macroremains
As the silty sediments of Haljala were very poor in macroremains, almost the entire sequence was sieve-washed and macroscopic plant remains were picked out and identified under the microscope to find suitable material for radiocarbon dating. The content of the thirteen samples prepared specifically for plant macrofossil analysis is presented in Table 3. The small number of samples and their non-contiguous placement within the core do not allow firm conclusions about the vegetation succession in and around the palaeolake of Haljala to be drawn from the plant macrofossil analysis alone. As the results of plant macrofossil analysis mainly reflect species from the local vegetation, it is not surprising that most of the seeds and vegetative parts belonged to aquatic species: Ranunculus sect. Batrachium, Characeae, Equisetum sp., Potamogeton sp., and different aquatic mosses (Drepanocladus sp., Scorpidium sp.). Arctic species, which are characteristic of late glacial vegetation, were represented by Dryas octopetala, Salix polaris, and Betula nana. Dryas octopetala, a dwarf shrub indicating Arctic climate conditions, is evenly distributed over the analysed sequence, being present in six samples of thirteen. Salix polaris, which nowadays grows in northern areas, occurs in one sample in the lower part of the sediment core. One Salix sp. leaf was left undetermined to species level because of poor preservation. The cold-tolerant shrub Betula nana is found rarely in the Haljala sequence. At the same time, B. nana remains (seeds, leaves, buds, catkin scales) were present in different late glacial sequences in Europe (e.g. Wohlfarth et al. 2002) and in Estonia (L. Amon, unpublished data). This may be due to the small sample size in the case of Haljala, but the spread, growth, and fructification of B. nana may also have been limited by the lack of suitable ecological conditions. While browsing plant material for radiocarbon dating or identification for plant macrofossil analysis, pyritized and poorly preserved material was observed in several samples. Pyritization is a complex process in palaeobotany (Grimes et al. 2001), linked to the decomposition of organic matter in an anoxic and reducing (water) environment (Yansa 1998). Organic material decaying in a suitable environment is affected by sulphate-reducing bacteria, which mediate the formation of pyrite aggregates (Tovey & Yim 2002).

Deglaciation and palaeogeography
The new evidence from Haljala suggests that the area started to deglaciate about 13 800 cal yr BP, thus about 500 years earlier than previously thought (Vassiljev et al. 2005; Saarse et al. 2007). According to Demidov et al. (2006), the decay of the ice margin since the Allerød turned from areal downwasting to frontal-type deglaciation; the ice sheet melted quickly due to the ameliorated climate, and large proglacial lakes appeared in front of the ice margin. This seems to be valid for Haljala as well, because the sequence reveals deposits that accumulated both in a large proglacial lake and in an isolated lake. Simulation of water level surfaces showed that during the Baltic Ice Lake stage A₁ the water level near Haljala was about 86 m, during A₂ 69 m, Baltic Ice Lake I (BIL I) 60 m, BIL II 57 m, and BIL III 54 m a.s.l. (Saarse et al. 2003, 2007; Vassiljev et al. 2005). During the A₂ stage, about 13 000-12 900 cal yr BP, a spit formed between the Tatruse and Vanamõisa bedrock hillocks at an elevation of 69-70 m a.s.l., which isolated the elongated narrow lagoon in the Haljala depression (Fig. 1). The isolation contact at 210-220 cm is marked by increased moisture, OM, and carbonate contents and a decreased sediment accumulation rate (Fig. 2). The Haljala lagoon finally separated between 13 000 and 12 900 cal yr BP, forming a coastal lake where clayey silt was deposited. At the Younger Dryas/Holocene transition, silt deposition was gradually replaced by calcareous sandy silt and lacustrine lime.

Palaeoenvironmental events
Based on palaeobotanical, lithological, and chronological data, five main environmental stages have been recognized. These stages coincide rather well with the event stratigraphy proposed by Lowe et al. (2008).

13 800-13 300 cal yr BP
The samples from the basal part of the Haljala sequence between 500 and 330 cm consist of fine-grained and clayey silt, incorporate pollen zones Ha 1-3, and delineate a time span of 13 800-13 300 cal yr BP. These sediments were deposited in a large proglacial lake in the middle of the Allerød-Bølling warming event (GI-1c). The relatively warm climate is supported by rather high OM accumulation, peaking around 13 400 cal yr BP (Fig. 2). The stable representation of Pediastrum algae up to 13 100 cal yr BP suggests that the water and climatic conditions were rather favourable during this late glacial period. Pollen spectra display high and uniform tree pollen percentages (around 60-70%), with a noticeable admixture of Picea, Alnus, and Corylus (Fig. 4). These taxa were not constituents of the flora at that time. Sediment deposition in a proglacial lake with a vast pollen source area and low local pollen production was the main reason why the pollen spectra abundantly include redeposited pollen grains (Picea, Alnus, Corylus, QM, etc.) typical of interglacial deposits. Around 435 cm depth (13 700-13 600 cal yr BP), corresponding to Ha-2, a considerable decline in the concentration and palynological richness of pollen is recorded (Fig. 5), which might represent a short-term cooling within GI-1c. The mentioned decline/cooling could be a result of an ice margin standstill not far from Haljala. The ¹⁰Be exposure ages in the area are somewhat contradictory, but still show an ice-free northern Estonia since 13 600 ¹⁰Be yr BP (Rinterknecht et al. 2006). An OSL date of sand from the Pikassaare kame field (21 km west of Haljala) gave a similar age (13 700 yr BP; Raukas & Stankowski 2005); however, the authors considered kame deposits unpromising material for studying ice recession phenomena. Within this time interval seven sediment samples were analysed for plant macrofossils (Table 3). Characeae oospores are present in all samples. Other aquatic species found are Ranunculus sect. Batrachium and Potamogeton sp. Ranunculus sect. Batrachium is a pioneering species often found in late glacial sediments (Birks 2000). The lowermost samples contained seeds indicating a mild climate (Betula humilis, Urtica dioica, and Scirpus sylvaticus), suggesting amelioration of the climate in GI-1c. However, two common arctic species are also present (Dryas octopetala and Salix polaris).

Three samples were studied for ostracods. Benthic freshwater ostracods migrate through passive transport, e.g. by wind, drifting vegetation, flowing waters, birds, mammals, fishes, amphibians, and invertebrate animals (Meisch 2000). The ostracod assemblages at the depth intervals of 450-460 cm and 440-450 cm are typical of shallow lakes and refer to vegetation of the littoral zone, e.g. Limnocythere inopinata (Fig. 6A) and Cyclocypris ovum (Fig. 6E, F). Limnocythere inopinata preferred a sandy substrate (Scharf 1998). The occurrence of Eucypris cf. virens (Fig. 6I) may point to a temporary water body, because the species prefers grassy pools that dry up in summer (Meisch 2000). Climate amelioration in the Allerød was favourable for littoral species. There is no record of mixed ostracod assemblages of species populating deep lake and shore areas; the subfossil material shows littoral derivation. In Scharf (1998) the existence of littoral species in a deep lake is explained by limnetic sediment transport into the deeper parts of the lake during storms in the circulation periods. The high sedimentation rate caused the preservation of carapaces of both adult and juvenile specimens. Ostracods became trapped among sediment particles rapidly, which diminishes the opportunity for separation of valves. Anoxic conditions prevented microbial organisms from disarticulating ostracod valves after death (De Deckker 2002). The composition of the ostracod assemblage changed sharply at a depth of 390-400 cm, where Cytherissa lacustris (Fig. 6C, D), which prefers cool, deep, oxygenated lakes, appeared (Meisch 2000).

A decline in the concentration, accumulation rate, and palynological richness of pollen also indicates an environmental change around 13 700-13 600 cal yr BP (Ha-2; Figs 5, 7). This change seems to be caused by the emergence of Tatruse Island, which isolated a narrow sound in the Haljala ancient valley (Fig. 1) and restricted sediment influx from the large proglacial lake.

Mineral sediments, coupled with low tree pollen concentration (less than 20 000, commonly 6000-8000 grains cm⁻³) and low palynological richness, indicate sparse vegetation (Fig. 4). Xerophilous steppe and tundra assemblages were dominated by Betula. Pinus was distributed only in favourable habitats characteristic of the open woodland tundra. Still, Pinus macrofossils were not found; its pollen may derive from older sediments or may have been transported over long distances by winds. The pollen assemblage zones (Ha 1-3) at Haljala are quite similar to those of Visusti (central Estonia): total AP content about 70%, NAP about 30%, pine pollen dominating over birch, and a notable percentage of thermophilous taxa (Pirrus & Raukas 1996). At Visusti such a pollen composition was correlated with the Older Dryas, at Haljala with the Allerød chronozone GI-1c. In the westernmost part of European Russia, cold arid conditions and treeless vegetation have been reconstructed for the time period of 14 000-12 000 cal yr BP (Subetto et al. 2002; Wohlfarth et al. 2002, 2006, 2007). In southern Lithuania, macrofossils of Betula, Pinus, and Picea were found before 13 700 cal yr BP (Stančikaite et al. 2008). In northern Estonia birch has been present since 13 800 cal yr BP, as indicated by Betula macrofossils and a PAR around 1000 grains cm⁻² yr⁻¹ (Fig. 7), which, according to Hicks (2001), is the threshold value for the presence of birch forest. Yet we must bear in mind that in the late glacial environment, where the vegetation is sparse and soils and melting ice are the source of inwash of older sediments, and hence of pollen, to the lakes, sediment focusing and artificially high PARs can be observed (Seppä & Hicks 2006), which may distort our estimates of the existence of Betula woods in the early Allerød in northern Estonia.

13 300-13 100 cal yr BP
During this short time span, fine-grained silt was deposited with lower OM values than before (Figs 2, 7). In the pollen composition (LPAZ Ha-4), Pinus and Picea percentages reached close to their maxima, up to 41% and 11%, respectively. High percentages of secondary pollen, such as Picea, Alnus, Corylus, and Ulmus, together with a low primary pollen accumulation rate and OM content, point to a cold climate and can be correlated with the GI-1b cooling. The interpretation of the elevated Picea curve is complicated, as the high pollen percentage (11%) and accumulation rate of Picea (300-400 pollen grains cm⁻² yr⁻¹) between 13 300 and 13 100 cal yr BP could suggest the presence of Picea.

Finds of Picea wood and high pollen percentages (28%) in the Allerød clayey deposits of Kunda (Thomson 1934, p. 103) support the presence of spruce in North Estonia already during the Allerød. The average pollen composition (Betula 18.5%, Pinus 56%, Picea 20%, Alnus 5.5%) of the Kunda clayey deposits is also similar to that of the Haljala clayey deposits at a depth of 280-320 cm, where Betula fluctuates within 16-24%, Pinus within 25-41%, Picea within 6-11%, and Alnus within 4-9%. However, considering the size of the sedimentation basin at Haljala and the large pollen source area, redeposition of Picea pollen from older sediments, first of all from Eemian deposits, cannot be ruled out. As a whole, pollen concentration and accumulation rate values are low for all taxa before 13 100 cal yr BP (Fig. 5), which could be explained by the high sedimentation rate of mineral matter (3 mm yr⁻¹). On the basis of the pollen composition, open woodland tundra, mostly with birch, pine (?), spruce (?), juniper, and willow, spread in the Haljala basin.

The ostracod assemblage with Limnocytherina sanctipatricii (Fig. 6B) and Cytherissa lacustris (Fig. 6C) at the depth interval of 270-260 cm indicates a cold and deep oligotrophic freshwater lake. The highest densities of C. lacustris occur in oligotrophic lakes at depths between 12 and 40 m (Meisch 2000), and the favourable temperature for the species is below 18 °C (Geiger 1993). At that time (about 13 100-13 000 cal yr BP) the Haljala basin was a narrow sound in the regressive proglacial lake A₂ (Fig. 1).

13 100-12 850 cal yr BP
Sediments of this interval are represented by fine-grained silt in which OM reached its late glacial maximum (7.3%) at 12 950 cal yr BP (215 cm). At about 13 000-12 900 cal yr BP the Haljala sedimentation basin was isolated from the large proglacial water body (A₂, Kemba). After that, sediment transport, mineral matter influx, and consequently the sedimentation rate decreased considerably (to 0.9 mm yr⁻¹). These phenomena could be one reason why pollen accumulation values sharply increased (Figs 5, 7).
A decrease in Pediastrum algae might be connected with water level lowering at about 13 100-13 000 cal yr BP. The isolation age of Palaeolake Haljala dates proglacial Lake Kemba to around 13 100 cal yr BP, which agrees with an earlier estimate of 13 150 cal yr BP by Vassiljev et al. (2005), but is somewhat older than the date (12 800 cal yr BP) proposed by Rosentau et al. (2007). Two ¹⁰Be dates from boulders 20 and 30 km west and northwest of Haljala correspond to the approximate level of proglacial Lake Kemba, 12 480 ± 920 (EST-8) and 12 520 ± 890 (EST-11; Rinterknecht et al. 2006); however, these boulders lie at a somewhat lower level and accordingly have younger ages. This suggests 'not later than' ages for the Kemba ice lake rather than for the Palivere end moraine these dates were meant for.

Three samples from the period of 13 100-12 850 cal yr BP were analysed for plant and moss macrofossils. Aquatic and mire plant remains were prevalent: abundant Equisetum sp. stems and aquatic mosses (Drepanocladus sp., Calliergon giganteum) were common, together with several Carex sp. leaves and rootlets. These finds point to increased growth of moss and mire plants within and in the vicinity of the water body, indirectly suggesting its isolation from a larger proglacial lake. Other features indicative of a water environment are abundant Characeae oospores, Ranunculus sect. Batrachium seeds, and remains of two limnic animals (Daphnia and Plumatella). Daphnia and its ephippia represent the open water component of the zooplankton and, together with other cladocerans, have been used in several cases as a palaeoecological (Hoffmann 2003; Feurdean & Bennike 2004) and palaeoclimatological tool (Duigan & Birks 2000). Trichoptera remains were recorded but not identified. The moss flora contains, besides aquatic species (Drepanocladus sp., Scorpidium sp., Calliergon giganteum), also terrestrial species (Tomenthypnum nitens, Rhizomnium punctatum). Tomenthypnum presently grows in fens; Rhizomnium may occur in shoreline areas but also on forest floors or trees (Ingerpuu & Vellak 1998). In the lower part of the interval the number of Dryas octopetala leaves reaches a maximum, suggesting a more severe local climate.

At a depth of 235-225 cm the ostracod assemblage shows a still high water level and cool conditions with sparse aquatic vegetation, because Cyclocypris cf. laevis (Fig. 6G, H) does not tolerate much vegetation (Meisch 2000). This is in good accord with the pollen record, where aquatics are absent (Fig. 4).

The PAR values of Betula, Pinus, and Picea are high, respectively 1100-2700, 1600-3900, and 160-370 grains cm⁻² yr⁻¹ (Figs 5, 7). A rather dense forest could be suggested, referring to the threshold values proposed by Hicks (2001). This is contradicted by the high PAR values of light-demanding Juniperus, Artemisia, Chenopodiaceae, and Cyperaceae. On the other hand, these species obviously colonized the new area that emerged from the proglacial lake waters. Taking into account the possible sediment focusing mentioned earlier, we may rephrase the vegetation assemblage as an open pine-birch woodland with sparse spruce (?) stands, with shrubs and herbs. An early immigration of Picea seems rather likely in the light of the presence of Picea in the Scandinavian Mountains already at 13 000-12 900 cal yr BP (Kullman 2008) and the macrofossil finds from Kunda (Thomson 1934). As Picea macroremains have not been found in the Haljala sequence, the presence of Picea at the end of the Allerød remains an open question.
12 850-11 500 cal yr BP
The Allerød was succeeded by the Younger Dryas cold event, caused by a large reduction in the Atlantic thermohaline circulation (Alley 2000; McManus et al. 2004) and/or by Arctic freshwater forcing (Teller et al. 2002; Tarasov & Peltier 2005). The boundary between the Allerød and Younger Dryas at 12 850 cal yr BP (present study) is defined by the increasing content of herb pollen (Artemisia, Poaceae, Cyperaceae, and Chenopodiaceae) and the decreasing frequency of tree pollen (Fig. 4, Ha-5-7). Deposits of the Younger Dryas stadial (GS-1) are represented by grey and greenish-grey silt with fine-grained sand interlayers and remains of Bryales mosses, corresponding to the Artemisia-Betula nana pollen zone defined by Pirrus & Raukas (1996). The accumulation rate of tree and shrub pollen was very low, even lower than in the early Allerød (Figs 5, 7). The percentage of NAP surpassed that of AP as herbs spread in the newly emerged area. Climate cooling resulted in a significant reduction of sub-Arctic woodlands and their replacement by grass-shrub tundra without birch and other trees, as the PAR values of the latter stay below 500 grains cm⁻² yr⁻¹, thus below the forest threshold (Hicks 2001). The closest macrofossil evidence of Picea at that time is from the Valdai Highlands and from close to the Ural Mountains, around 12 000 cal yr BP (Väliranta et al. 2006; Wohlfarth et al. 2007).

11 500-11 300 cal yr BP
A great change in the vegetation community occurred with the start of the Holocene warming. The increased representation of all forest taxa, first of all the percentage of Betula, suggests a new expansion of the forest, characteristic of the beginning of the Holocene. The PAR of birch still remained rather low (Fig. 7), and shrubs seemed to flourish, with Salix, Betula nana, and Juniperus having their maximum PARs (Fig. 5). The vegetation response to the ameliorated climate, especially reforestation, was somewhat delayed, as was also the case on the Karelian Isthmus (Subetto et al. 2002; Wohlfarth et al. 2002). The delay was caused by the cold water of the Baltic Sea and a possible increase in anticyclonic circulation due to the presence of remnants of the Scandinavian ice sheet.

Fig. 7. Summary environmental evidence from the Haljala sediment core in relation to the NGRIP δ¹⁸O curve (Lowe et al. 2008). The scale on the left gives δ¹⁸O values in ‰ and that on the right % values for LOI data and 10⁻³ PAR values for Betula. The first appearance of birch is also denoted.
Table 1. Grain size distribution of sediments in the Haljala sequence.
Table 2. AMS ¹⁴C dates of the Haljala sequence, calibrated according to Stuiver et al. (2005). Dates used for the age-depth curve reconstruction of the Haljala sediment record are marked with an asterisk. n.a., not available; a dash denotes that calibration is not possible. Radiocarbon laboratory codes: Poz, Poznan; Ua, Uppsala; TA, Tartu; Hela, Helsinki; Tln, Tallinn.
Table 4. Distribution of ostracod species in the Haljala core.
Spectral absorption control of femtosecond laser-treated metals and application in solar-thermal devices

Direct femtosecond (fs) laser processing is a maskless fabrication technique that can effectively modify the optical, electrical, mechanical, and tribological properties of materials for a wide range of potential applications. However, the eventual implementation of fs-laser-treated surfaces in actual devices remains challenging because it is difficult to precisely control the surface properties. Previous studies of the morphological control of fs-laser-processed surfaces mostly focused on enhancing the uniformity of periodic microstructures. Here, guided by the plasmon hybridisation model, we control the morphology of surface nanostructures to obtain more control over spectral light absorption. We experimentally demonstrate spectral control of a variety of metals [copper (Cu), aluminium (Al), steel and tungsten (W)], resulting in the creation of broadband light absorbers and selective solar absorbers (SSAs). For the first time, we demonstrate that fs-laser-produced surfaces can be used as high-temperature SSAs. We show that a tungsten selective solar absorber (W-SSA) exhibits excellent performance as a high-temperature solar receiver. When integrated into a solar thermoelectric generation (TEG) device, W-SSA provides a 130% increase in solar TEG efficiency compared to untreated W, which is commonly used as an intrinsic selective light absorber.

Introduction
Direct femtosecond (fs) laser processing is a cost-effective, maskless and scalable fabrication technique that has been widely used to effectively modify the optical, electrical, mechanical and tribological properties of materials [1-3]. The random structures created by fs-laser pulses exhibit desirable features for material functionalisation, e.g., perfect light absorption, superhydrophobicity and superhydrophilicity, with many potential applications in the biomedical, environmental, and energy fields [1-8]. However, the ability to design and engineer these properties is very limited as they originate from random structures with random sizes, geometries, and densities. To provide more control over laser-induced surface structuring, past studies have mainly focused on structural regularity, mostly on the microscale. Of particular interest, one-dimensional (1D) femtosecond laser-induced periodic surface structures (fs-LIPSSs) with subwavelength periodicity, which can be realised on a wide range of materials, have attracted considerable attention [1,2]. Different techniques have been employed to address the spatial uniformity of fs-LIPSSs, for instance, positive and negative feedback mechanisms on titanium substrates [9], chemical-assisted fs-laser treatment [10], high-uniformity fs-LIPSS creation via temporally delayed multi-pulse irradiation [11,12], a non-ablative fs-laser structuring technique [13], or self-organisation from metal-assisted pattern formation on glass [14]. While these studies showed the ability to produce more regular surface features, they did not demonstrate the ability to design the surface properties, which is crucial at the device engineering and application level. In this work, we control the morphology of surface nanostructures guided by the plasmon resonance size effect and the plasmon hybridisation model [15-18], which allows us to control the optical properties of fs-laser-treated metals.
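As a qualitative illustration of the size and hybridisation arguments developed in the next section, the sketch below evaluates a quasi-static coupled-dipole toy model in Python. It uses generic Drude parameters rather than tungsten's actual (non-Drude-like) optical constants, and a simple head-to-tail two-sphere coupling term, so it reproduces only the trend (the bonding resonance redshifting as the interparticle distance shrinks), not the FDTD results of Fig. 1.

```python
# Toy quasi-static model of the bonding-mode redshift: two identical
# spheres coupled head-to-tail along the polarisation axis. The Drude
# parameters (hbar*wp = 9 eV, hbar*gamma = 0.1 eV) are generic stand-ins
# for a real metal; W itself needs tabulated optical constants.
import numpy as np

hbar_wp, hbar_g = 9.0, 0.1                        # eV
E = np.linspace(1.0, 8.0, 2000)                   # photon energy, eV
eps = 1 - hbar_wp**2 / (E**2 + 1j * hbar_g * E)   # Drude permittivity

r = 1.0                                           # sphere radius (arbitrary units)
alpha_single = r**3 * (eps - 1) / (eps + 2)       # quasi-static polarisability

for d in (4.0, 3.0, 2.5):                         # centre-to-centre distance
    # each sphere feels the other's near field: 2p/d^3 for axial dipoles
    alpha_pair = alpha_single / (1 - 2 * alpha_single / d**3)
    peak = E[np.argmax(alpha_pair.imag)]          # absorption ~ Im(alpha)
    print(f"d = {d}: bonding resonance at {peak:.2f} eV")
```

Running this shows the resonance dropping below the isolated-sphere value of wp/sqrt(3) as d decreases, i.e., the redshift that the paper attributes to denser, larger nanostructures.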
We show that the optical absorption of fs-laser-treated surfaces can be altered by tuning the size and density of randomly distributed nanostructures, by considering the convolution of the hybridised plasmonic modes of the random structures. Since the particle size and density can be controlled by modifying the fs-laser processing parameters, we use the hybridisation model to guide our fabrication of fs-laser-treated light absorbers. As an application, we create selective solar absorbers (SSAs) and broadband absorbers (BBAs) on Al, Cu, steel, and W. We show that for solar-thermal applications, the W-based SSA has the highest solar absorption efficiency. We compare the performance of untreated W (commonly used as a solar absorber) and fs-laser-treated W as receivers for a solar thermoelectric generator and show that W-SSA provides a 130% increase in the thermoelectric generation efficiency.

Numerical analyses of hybridised metallic surface nanostructures
The goal of this work is to provide fabrication design tools to control the absorption spectral range of fs-laser-treated metallic substrates. For metallic surface structures with random sizes and distribution, the absorption spectral range depends on two main effects. First, the absorption of a nanostructure broadens and redshifts as a function of the nanoparticle size when the particle size exceeds the quasi-static approximation limit [19,20]. In addition, as the size and density of nanoparticles increase, their individual resonances hybridise due to the so-called plasmon hybridisation effect. The plasmon hybridisation model draws an analogy between surface plasmon coupling and the hybridisation of atomic orbitals [15-18], where coupled metallic (plasmonic) nanoantennas create bonding and antibonding modes that occur at lower and higher frequencies, respectively, compared to the original plasmon resonance [18]. The antibonding, high-energy mode is a dark mode and cannot easily be excited directly by an external field [18]. The bonding mode is a bright mode that occurs at a lower energy and thus redshifts the measured plasmon resonance [18]. For an ensemble of metallic nanostructures, increasing the size and density of the nanostructures decreases the interparticle distance, hence increasing the hybridisation likelihood and strength [21] (see Fig. 1a, b). Consequently, increasing the size and density of randomly distributed plasmonic nanostructures significantly broadens the convoluted plasmon resonances and redshifts the overall plasmon resonance [17].

Figure 1c shows the finite-difference time-domain calculation of the absorption for an individual W nanoparticle. Clearly, the plasmon resonance peak redshifts as the particle's radius increases. Figure 1d shows the calculated absorption for four hybridised nanoparticles with a fixed radius of 100 nm. As the gap distance between the nanoparticles decreases, the plasmon resonance significantly redshifts. The resulting absorption spectrum of randomly distributed nanoparticles is a convolution of all the absorption bands corresponding to the different plasmon bonding states [17,21]. Because the plasmon resonance of most metals lies in the near-UV region (300-500 nm range), it is possible to create a plasmon-based selective solar absorber that covers the solar spectrum (~300-2500 nm) if we control the average interparticle distance through the relative size and density of the plasmonic nanostructures [17] (see Supplementary information, Figs. S1 and S2, for more details on the absorption from hybridised W nanoparticles).

Fig. 1 Controlling the absorption spectral range of fs-laser-treated metals. a Using a low pulse number and lower fluence, the created surface nanostructures are smaller in size and less dense than the surface nanostructures formed b using a high pulse number and higher fluence. Under optical excitation, larger and denser particles hybridise, and their near-fields couple, leading to a shift in the plasmon resonance. c The calculated absorption for a single W nanoparticle as a function of the particle size, highlighting the relevance of the size effect to the broadness of the measured plasmon resonance. d The calculated absorption of four hybridised W nanoparticles with a radius of 100 nm as a function of the gap distance between the nanoparticles. A smaller gap leads to stronger hybridisation and redshifts the plasmon resonance peak to longer wavelengths. Both the size and hybridisation effects broaden the measured resonance for randomly distributed nano-surface structures.

For fs-laser-treated surfaces, we obtain a distribution of nanoparticle sizes. According to our analyses shown in Fig. 1c, d, increasing the size and density of the nanoparticles will broaden the absorption spectral range due to the excitation of higher order resonances and hybridisation between the excited resonances. Importantly, the size and density of randomly distributed surface structures depend on the number of pulses and the laser fluence [1,22]. For a low pulse number and low fluence, the processed surfaces form nanostructures with lower density and smaller size, similar to the surface morphology shown in Fig. 1a. However, at a high pulse number and/or high fluence, the size and density of the formed nanostructures increase, similar to the surface morphology shown in Fig. 1b. Consequently, guided by plasmon absorption and hybridisation theory, we can intelligently adjust the fs-laser fabrication parameters to control the hybridisation strength of the surface nanostructures and, as a result, the absorption spectral range.

Creation of selective and broadband light absorbers with fs-laser ablation
Control over the absorption spectral range of surfaces is of major importance for a wide range of applications, such as selective solar absorbers, thermal emitters, structural colouring [23,24], water condensation [25] and daytime and nighttime radiative cooling [25]. In particular, a solar-thermal energy absorber operating at high temperature should be an SSA: since the main cooling mechanism is thermal radiation, the light absorber temperature is set by the radiative balance α_solar C I_solar = ε_IR σ (T_absorber⁴ − T_amb⁴), i.e., T_absorber = [T_amb⁴ + α_solar C I_solar/(ε_IR σ)]^(1/4), where T_absorber and T_amb are the absorber and ambient temperatures, respectively, α_solar is the receiver's solar absorptance, I_solar is the incident solar radiation, C is the solar concentration, ε_IR is the blackbody emittance at the operating temperature, and σ is the Stefan-Boltzmann constant [26]. Accordingly, an ideal solar light absorber has nearly 100% absorptance within the solar spectrum and negligible thermal emittance within the blackbody radiation spectral range at mid-to-high temperatures (100-500 °C), i.e., it is an SSA. SSAs can thus maximise the temperature of solar absorbers and increase the efficiency of a heat engine driven by solar radiation.
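A quick way to see why selectivity matters at high temperature is to solve the radiative balance above for the equilibrium temperature. A minimal sketch follows; convection is neglected, and the α/ε pairs are merely illustrative of SSA-like and BBA-like surfaces.

```python
# Steady-state receiver temperature from alpha*C*I = eps*sigma*(T^4 - T_amb^4);
# radiative losses only, illustrative absorptance/emittance values.
sigma = 5.670e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
I_solar, C, T_amb = 1000.0, 1.0, 300.0

def t_absorber(alpha_solar, eps_ir):
    """Equilibrium absorber temperature (K) under the stated balance."""
    return (T_amb**4 + alpha_solar * C * I_solar / (eps_ir * sigma)) ** 0.25

print(t_absorber(0.92, 0.18))    # SSA-like: ~560 K
print(t_absorber(0.97, 0.90))    # BBA-like: ~406 K
```

Despite its slightly lower solar absorptance, the selective surface equilibrates far hotter because its thermal emission is suppressed.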
In past studies, several materials and structures have been introduced as SSAs, including intrinsic absorbers, e.g., tungsten [26,27], semiconductor-metal tandems [28,29], multilayer metal-dielectric interference stacks [30], metal-dielectric composites [31], surface-textured metals [32,33], photonic crystals [34], and metamaterial and plasmonic light absorbers [35,36]. Although many of these systems demonstrated high design flexibility for light absorption [26,37], their stability at the elevated temperatures necessary for solar-thermal energy conversion applications is highly questionable. In our study, we fabricate an SSA through controlled fs-laser processing. More importantly, this is the first demonstration of a fs-laser-treated surface that is stable for high-temperature operation. We also demonstrate a high-temperature solar-thermoelectric generation (STEG) device with our SSA.

To create SSAs, we want a relatively narrow-band absorber that selectively absorbs the solar spectrum (300-2500 nm). To do so, we need to limit the hybridisation between surface nanostructures by creating smaller nanostructures at lower density, using a lower laser fluence at higher scanning speeds, which effectively reduces the pulse number. In our experiment, lower fluence levels (0.3-0.5 J cm⁻²) and higher scanning speeds (1-1.5 mm s⁻¹) are used to produce SSAs on Al, steel, Cu, and W. Conversely, to create broadband absorbers, we want to increase the hybridisation between surface nanostructures by producing larger nanostructures at higher density. To do so, we used higher fluence levels (1.5-3 J cm⁻²) and lower scanning speeds (0.1-0.5 mm s⁻¹) for Al-BBA, Steel-BBA, Cu-BBA and W-BBA.

The spectral absorption/emission of fs-laser-processed Al (Fig. 2a), steel (Fig. 2b) and Cu (Fig. 2c) over the visible, NIR and mid-IR ranges is shown. For each metal, we used two sets of laser processing parameters to create a BBA (blue lines) and an SSA (red lines). The fluence used for Al-BBA was 1.5 J cm⁻², and 3 J cm⁻² was used for Cu-BBA and Steel-BBA. The scanning speed used for Al-BBA was 0.5 mm s⁻¹, and 0.1 mm s⁻¹ was used for Cu-BBA and Steel-BBA. The scanning speed used for Al-SSA was 1.5 mm s⁻¹, and 1 mm s⁻¹ was used for Cu-SSA and Steel-SSA. The fluence for Al-SSA was 0.30 J cm⁻², and 0.50 J cm⁻² was used for Cu-SSA and Steel-SSA.

Figure 2d, e show scanning electron microscopy (SEM) images of Cu-SSA and Cu-BBA, respectively, which clearly validate the hybridisation model predictions. The insets of Fig. 2d, e show low-magnification views of Cu-SSA and Cu-BBA. The average particle size of Cu-SSA is ~0.058 µm, while the average particle size of Cu-BBA is ~1.13 µm. The structure dimensions were determined using ImageJ by considering a 10 × 10 µm² region from an SEM image (see Materials and Methods for details). Using ImageJ, we obtained a histogram of the size distribution of the formed random structures and extracted the average particle size, as shown in Fig. 2f, g for Cu-SSA and Cu-BBA, respectively. We note here that we are only interested in the particle dimensions in the horizontal plane and not the out-of-plane dimension, since the resonances are excited with normally incident light with horizontal polarisation.

To quantify the performance of the fabricated light absorbers, we first analyse the absorptivity and emissivity of the different metals.
The spectrally averaged absorptivity (ᾱ) of the selective surface is given by [37]:

ᾱ = ∫ α(λ) (dI/dλ) dλ / ∫ (dI/dλ) dλ

The emissivity (ε̄) is given by:

ε̄ = ∫ ε(λ) B(λ, T) dλ / ∫ B(λ, T) dλ, with B(λ, T) = (2πhc²/λ⁵) / [exp(hc/λkT) − 1]

where I is the solar intensity, λ is the wavelength, ε(λ) is the spectral emissivity of the selective absorber/emitter, dI/dλ is the spectral light intensity, corresponding to the air-mass (AM) 1.5 solar spectrum, h is Planck's constant, c is the speed of light, k is the Boltzmann constant, and T is the absorber temperature, here taken as 200 °C. The solar absorber efficiency η_abs at a given operating temperature T_abs and solar concentration C, assuming only radiative losses, is given by:

η_abs = ᾱ − ε̄ σ T_abs⁴ / (C I_solar)

where I_solar is the solar intensity, ~1000 W m⁻². The calculated solar absorptance and emissivity for the BBA and SSA absorbers at T = 200 °C, as well as the corresponding η_abs in percentage for C = 1, are shown in Table 1. Although the SSAs consistently have lower ᾱ, they have higher η_abs. One of the by-products of fs-laser processing of metals is the formation of metal oxides, which limits the possibility of obtaining a low ε̄, even after optimising the laser processing parameters. While it is possible to control the absorption spectral range of Al, a strong absorption peak is produced at wavelengths >8 µm due to the phonon-polariton absorption of the formed Al₂O₃. Stainless steel, on the other hand, forms broad absorption with limited control over its spectral range, and it is difficult to create an SSA using steel. This is likely due to the nonzero extinction coefficient of iron oxides (Fe₃O₄ and Fe₂O₃) over the entire visible, NIR, and IR ranges (see Supplementary information, Fig. S3). Only Cu produces a positive η_abs; i.e., it is the only metal capable of reaching temperatures higher than the operating temperature (T = 200 °C) by only absorbing incident solar radiation at C = 1, if cooling is solely due to thermal radiation. Cu can form oxides that have negligible absorption over the wavelength ranges of interest [38]. However, Cu is not a proper choice for high-temperature applications due to its low oxidation temperature [39] and relatively low melting point (~1000 °C), which could be reduced even further considering that the absorption in fs-laser-treated metals is due to nanostructures [40] that have a lower melting point than their bulk counterparts [41].

W was selected as the best candidate for a high-temperature SSA since it has a high melting point (3422 °C) and high mechanical strength [42]. We show control over the absorption spectral range by controlling the hybridisation of the formed W nanostructures. Figure 3a shows the measured spectral absorption/emission of broadband (black line) and selective (blue line) fs-laser-treated W (Table 1). The W-SSA solar absorptance is ~0.92 and its emissivity is ~0.18, with η_abs ≈ 45% at T = 200 °C (Table 1). The results shown in Fig. 3a represent two cases of a highly selective solar absorber and an ultrabroadband light absorber. In the supplementary materials, we show that as we vary the laser fluence, we can tune the structure sizes, which tunes the absorption/emission properties of the W surface (Fig. S4), in agreement with the plasmon hybridisation model based on the particle distribution shown in Fig. S5. Figure 3b shows SEM images of fs-laser-treated W produced with a fluence of 0.30 J cm⁻² and a scanning speed of 1 mm s⁻¹. The formed periodic line gratings are nanostructure-covered fs-LIPSSs.
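For concreteness, the figures of merit defined earlier in this section can be evaluated numerically as follows. The spectra in this sketch are idealised placeholders (a step-like SSA profile, and a 5778 K blackbody standing in for the AM1.5 weighting), not the measured curves of Figs. 2 and 3.

```python
# Numerical sketch of the spectrally averaged absorptance/emittance and
# eta_abs defined above; all spectra here are idealised placeholders.
import numpy as np
from scipy.integrate import trapezoid

h, c, k, sigma = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8

lam = np.linspace(0.3e-6, 20e-6, 20000)        # wavelength grid, m
spec = np.where(lam < 2.5e-6, 0.95, 0.15)      # step-like SSA: alpha(lam) = eps(lam)

def planck(lam, T):
    """Blackbody spectral emissive power, W m^-3."""
    return (2 * np.pi * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

solar_weight = planck(lam, 5778.0)             # crude stand-in for AM1.5
alpha_avg = trapezoid(spec * solar_weight, lam) / trapezoid(solar_weight, lam)

T = 473.15                                     # 200 C
eps_avg = trapezoid(spec * planck(lam, T), lam) / trapezoid(planck(lam, T), lam)

def eta_abs(alpha_avg, eps_avg, T, C=1.0, I_solar=1000.0):
    """Absorber efficiency with radiative losses only, as in the text."""
    return alpha_avg - eps_avg * sigma * T**4 / (C * I_solar)

print(alpha_avg, eps_avg, eta_abs(alpha_avg, eps_avg, T))
```

With a measured α(λ) in place of the step profile, this reproduces the kind of ᾱ, ε̄ and η_abs entries tabulated in Table 1.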
The formed nanostructure-covered fs-LIPSSs are frequently reported on a wide range of materials and are formed in a direction perpendicular to the polarisation of the incident beam with a period on the order of the incident wavelength 1,2 . It is important to note that although fs-LIPSSs can act as a 1D grating that excites surface waves, the measured absorption is not due to fs-LIPSSs, but is due to the nanostructures formed on top and in-between fs-LIPSSs. This is because light absorption via exciting surface plasmons (SPPs) with a grating is limited to the grating diffraction orders that satisfy the phase matching condition of the SPPs. For the formed grating, the period is~530 nm, and the phase matching condition is not satisfied for W at any wavelength, as we show in the supplementary information, Fig. S6. Figure 3c, d show SEM images of W-BBA created at a fluence of 3 J cm −2 and a scanning speed of 0.1 mm s −1 , where clusters of large surface structures are formed, which lead to a broader absorption spectrum according to the hybridisation model. High-temperature operation of fs-laser-treated SSA Although durable and selective, W and other metals can react and form oxides and nitrides at high temperatures if they are placed in an ambient environment. In our experiments, we noticed that the fs-laser-induced light absorption disappears when we heat the metallic light absorber. This is because the surface plasmonic nanostructures, responsible for the observed absorption, turn into oxides at higher temperatures, as shown in Fig. 4a. Since the absorption is due to the excitation of plasmonic resonances, converting a metal to an oxide destroys the surface absorption. To ensure that W survives high temperatures, we deposit a 200-nm-thick TiO 2 thin film by electron beam evaporation (~0.1 nm s −1 ). By adding a protective oxide layer, the surface nanostructures are no longer exposed to oxygen and retain their metallic properties even at elevated temperatures, as shown schematically in Fig. 4a. We choose TiO 2 , as opposed to Al 2 O 3 or SiO 2 , as it has low spectral emission for wavelengths <10 µm, i.e., away from the blackbody radiation wavelength range of interest. To test the performance of TiO 2 -coated W at high temperatures, we annealed TiO 2 -coated W and uncoated W-SSA at 500°C for ten minutes. As shown in Fig. 4b, the TiO 2 -coated W-SSA survived the annealing process, while the uncoated W-SSA lost its absorption properties. By performing energy-dispersive spectroscopy (EDS) on uncoated W before (Fig. 4c) and after (Fig. 4d) annealing, we can see that an oxide peak appears post-annealing. A cross-sectional SEM image of W-SSA shows the deposited TiO 2 film (see Supplementary information, Fig. S7). We note that the disappearance of the strong light absorption associated with the surface structures indicates that light trapping effects do not contribute significantly to the observed light absorption. This is because after annealing, the formed structures persist (see Supplementary information, Fig. S8), while the measured absorption is similar to that of an untreated surface. Because the absorption is due to plasmonic resonances, the absorption disappears when the metallic surface structures oxidise. 
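Returning to the grating argument made earlier in this section (the ~530 nm fs-LIPSS period and the SPP phase-matching condition), a small numerical test of k_SPP(λ) = m·2π/Λ at normal incidence can be organised as sketched below. The Drude parameters standing in for the optical constants of W are placeholders, not tabulated data, so the snippet only shows how such a check is set up and says nothing about the actual outcome reported in Fig. S6.

```python
import numpy as np

PERIOD = 530e-9   # fs-LIPSS grating period reported in the text [m]
C0 = 2.998e8      # speed of light [m/s]

def eps_metal(lam):
    """Placeholder Drude-like permittivity (assumed values, NOT tabulated W data)."""
    wp, gamma = 9.0e15, 7.0e13          # assumed plasma frequency and damping [rad/s]
    w = 2 * np.pi * C0 / lam
    return 1.0 - wp**2 / (w**2 + 1j * gamma * w)

def k_spp(lam, eps_d=1.0):
    """SPP wavevector at a metal/dielectric interface."""
    em = eps_metal(lam)
    return np.real(2 * np.pi / lam * np.sqrt(em * eps_d / (em + eps_d)))

# Compare the SPP wavevector with the first-order grating momentum at normal incidence
k_grating = 2 * np.pi / PERIOD
for lam_nm in (500, 800, 1500, 2500):
    lam = lam_nm * 1e-9
    mismatch = (k_spp(lam) - k_grating) / k_grating
    print(f"{lam_nm:5d} nm: relative mismatch of the m = 1 order = {mismatch:+.3f}")
```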
Solar thermoelectric generation using fs-laser-treated W To present a practical device application of our plasmon hybridisation-based fabrication method, we tested the efficiency of three thermoelectric generators (TEGs) whose solar thermal receivers consisted of W-SSA, W-BBA and untreated W. We note that W is used as an SSA since it has good absorption within the solar spectrum and low emissivity in the blackbody radiation range. As shown schematically in Fig. 5a, to obtain high temperatures, we used an optical concentration of C = 40. The maximum temperatures reached by the untreated W, W-BBA, and W-SSA were 118 °C, 155 °C and 178 °C, respectively. We note that due to the HVAC system in the lab, the forced convection current leads to a high convection coefficient, which can reach up to 200 W m−2 K−1. Figure 5b shows the measured STEG output power vs. the current for the three systems. The maximum output power of W-SSA is ~130% and ~15% higher than those of untreated W and W-BBA, respectively. Discussion In conclusion, we used the plasmon hybridisation model to explain the absorption spectral range of fs-laser-treated metals. We were able to control the surface morphology by adjusting the fs-laser parameters to create selective solar light absorbers and broadband light absorbers on different metals. We showed that metals with oxides that do not absorb significantly in the blackbody radiation wavelength range, e.g., Cu and W, are excellent candidates for selective solar absorption. We further investigated the performance of the fs-laser-treated SSAs at high temperatures and showed that W provides the best performance. To have fs-laser-treated W operate as a high-temperature SSA in ambient environments, the W surface must be sealed with a dielectric thin film. We chose TiO2 as it has limited absorption within the blackbody radiation wavelength range of interest. W absorbers coated with TiO2 were shown to withstand annealing at 500 °C without any noticeable degradation in their absorption properties. We showed that the maximum power output is obtained from a thermoelectric generator using W-SSA as the solar receiver, which improved the output power by ~130% compared to using untreated W as the solar receiver. For the first time, we produce fs-laser-treated surfaces that can act as high-temperature absorbers for enhanced thermoelectric generation efficiency. Laser fabrication setup The laser used in our experiments is a Ti:sapphire fs-laser amplifier, which delivers horizontally polarised pulse trains at a repetition rate of 1 kHz with a central wavelength of λ = 800 nm and a pulse duration of τ = 30 fs. The maximum pulse energy delivered by the laser system is 7 mJ. The laser was focused by a lens with a focal length of 20 cm and was incident on the sample at normal incidence. Bulk circular disks of W, Al, Cu and stainless steel (obtained from Alfa-Aesar with 99.99% purity) were used as target materials due to their high mechanical strength at high temperatures. The samples were mounted on a computerised XYZ precision stage and translated at different speeds for the broadband and selective absorbers. Simulations The simulations were performed using commercially available finite-difference time-domain software from Lumerical®. For all simulations, we calculated the absorption inside W nanoparticles as a function of the particle size (Fig. 1c) or the gap distance (Fig. 1d, Figs. S1 and S2).
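Returning to the STEG measurements above (C = 40, convection coefficient up to 200 W m−2 K−1, W-SSA figures of merit ᾱ ≈ 0.92 and ε̄ ≈ 0.18), the reported receiver temperatures can be loosely rationalised with a zero-dimensional energy balance. The sketch below is a rough illustration under stated assumptions, i.e. radiative and convective losses only, equal absorbing and emitting areas and no conduction into the TEG; it is not the thermal model used in this work.

```python
import numpy as np
from scipy.optimize import brentq

SIGMA = 5.670e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]

def receiver_temperature(alpha_bar, eps_bar, C=40.0, I_solar=1000.0,
                         h_conv=200.0, T_amb=293.15):
    """Solve alpha*C*I = eps*sigma*(T^4 - T_amb^4) + h_conv*(T - T_amb) for T.
    A 0-D balance: ignores conduction into the TEG and any area ratios."""
    absorbed = alpha_bar * C * I_solar
    def residual(T):
        return absorbed - eps_bar * SIGMA * (T**4 - T_amb**4) - h_conv * (T - T_amb)
    return brentq(residual, T_amb, 3000.0)

# Illustrative values: W-SSA from the text, and an ideal blackbody for comparison
for label, a, e in (("W-SSA", 0.92, 0.18), ("ideal blackbody", 1.0, 1.0)):
    T = receiver_temperature(a, e)
    print(f"{label:>16s}: predicted steady-state temperature ~ {T - 273.15:.0f} C")
```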
Determination of particle radii from SEM images In our work, we are interested in resonances occurring due to the excitation of dipoles oscillating parallel to the surface, i.e., the x-y plane. To obtain the dimensions in the x-y plane of the nanoparticles, we used ImageJ, which is conventionally used to measure particle dimensions. We first select an evenly illuminated region from the SEM surface image and then apply a bandpass filter to flatten the image. Afterwards, we apply a threshold to the image by adjusting the background and outline the particles. After obtaining the area distribution of the outlined particles, we can directly calculate the particle radius distribution. Physical vapour deposition of TiO 2 To create an SSA that operates at high temperature, a 200 nm TiO 2 film is deposited by an electron beam evaporator on the processed metals. As discussed above, the TiO 2 film acts as a protective layer against oxidation of the surface nanostructures, which destroys the absorption properties of the ablated surfaces. TiO 2 pellets were purchased from Kurt J. Lesker® with 99.9% purity and evaporated via electron beam evaporation at 0.1 nm sec −1 . Annealing To check the stability of the solar absorber, we annealed the samples in an oven at 500°C for ten minutes. Spectral and surface characterisation To characterise the spectral reflectance/scattering, we measured the total hemispherical optical reflection of the samples using an ultraviolet-visible (UV) PerkinElmer Lambda 900 spectrophotometer and Fourier transform infrared spectroscopy (FTIR), Bruker IFS 66/S FTIR spectrometer, each equipped with an integrating sphere. The surface morphology was analysed by SEM, and EDS was performed to study the presence of oxides and nitrides. Ion-beam milling was utilised for the crosssectional view of the oxide layer. TEG measurements Commercial Bi-Te-based TEGs with a size of 18 × 21 mm were purchased from TEGpro tm (module of TE-MOD-1W2V-21S). Bi-Te-based TEG power modules can operate at temperatures as high as 230°C continuously. A solar simulator with an AM 1.5 airmass filter was used. A plano-convex lens with a 250 mm focal length and 150 mm diameter was mounted at the output port of the solar simulator to enhance the optical concentration. Power was measured using a Keithley-2400 source metre by using an open circuit voltage and sweeping the voltage down to 0 while measuring the current. The receiver temperature was monitored using a thermocouple.
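For readers who prefer a scriptable alternative, the ImageJ workflow described above (threshold, outline, area-to-radius conversion) can be mimicked with scikit-image as sketched below. The Otsu threshold, the synthetic test image and the function names are our own assumptions; the published particle sizes were obtained with ImageJ as stated.

```python
import numpy as np
from skimage import filters, measure, morphology

def particle_radii(image, pixel_size_um):
    """Threshold a grayscale SEM crop, label the particles and convert each
    particle area to an in-plane equivalent radius (assuming circular footprints)."""
    binary = image > filters.threshold_otsu(image)          # stands in for manual thresholding
    binary = morphology.remove_small_objects(binary, min_size=5)
    labels = measure.label(binary)
    areas_px = np.array([r.area for r in measure.regionprops(labels)])
    areas_um2 = areas_px * pixel_size_um**2
    return np.sqrt(areas_um2 / np.pi)

# Synthetic stand-in for a 10 x 10 um^2 SEM crop (random blobs), only to exercise the code
rng = np.random.default_rng(0)
img = filters.gaussian(rng.random((512, 512)), sigma=4)
radii = particle_radii(img, pixel_size_um=10.0 / 512)
print(f"{radii.size} particles, mean equivalent radius {radii.mean():.3f} um")
```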
5,702.2
2020-02-04T00:00:00.000
[ "Materials Science" ]
On Classical r-Matrix for the Kowalevski Gyrostat on so(4) We present the trigonometric Lax matrix and the classical r-matrix for the Kowalevski gyrostat on the so(4) algebra by using the auxiliary matrix algebras so(3,2) or sp(4). Introduction The classical r-matrix structure is an important tool for investigating integrable systems. It encodes the Hamiltonian structure of the Lax equation, provides the involution of the integrals of motion and gives a natural framework for quantizing integrable systems. The aim of this paper is severalfold. First, we present formulae for the classical r-matrices of the Kowalevski gyrostat on the Lie algebra so(4), derived in the framework of the Hamiltonian reduction. In the process we obtain a new form of its 5 × 5 Lax matrix and discuss the properties of the r-matrices. Finally, we get the 4 × 4 Lax matrix on the auxiliary sp(4) algebra. Recall that the Kowalevski top is the third integrable case of motion of a rigid body rotating in a constant homogeneous field [5]. This is an integrable system on the orbits of the Euclidean Lie algebra e(3) with one integral of motion quadratic and one quartic in the angular momenta. The Kowalevski top can be generalized in several directions: we can change either the initial phase space or the form of the Hamilton function. In this paper we consider the Kowalevski gyrostat with the Hamiltonian (1) on a generic orbit of the so(4) Lie algebra with the Poisson brackets

$$\{J_i, J_j\} = \varepsilon_{ijk} J_k, \qquad \{J_i, y_j\} = \varepsilon_{ijk} y_k, \qquad \{y_i, y_j\} = \kappa^{2}\, \varepsilon_{ijk} J_k, \qquad (2)$$

where ε_ijk is the totally skew-symmetric tensor and κ ∈ C (see [6] for references). Fixing the values a and b of the Casimir functions one gets a four-dimensional orbit of so(4), which is a reduced phase space for the deformed Kowalevski top. Because the physical quantities y, J in (1) should be real, κ² must be real too, and the algebra (2) is reduced to its two real forms so(4, R) or so(3, 1, R) for positive and negative κ², respectively, and to e(3) for κ = 0. The Hamilton function (1) is fixed up to canonical transformations. For instance, the brackets (2) are invariant with respect to the scale transformation y_i → cy_i and κ → cκ, which allows one to include the scaling parameter c in the Hamiltonian, i.e. to replace y_1 by cy_1. Some other transformations are discussed in [6]. Below we identify the Lie algebra g with its dual g* by means of an invariant inner product; the notation g* is used both for the dual Lie algebra and for the corresponding Poisson manifold. The Kowalevski gyrostat: some known results The Lax matrices for the Kowalevski gyrostat were found in [9] and [6] for κ = 0 and κ ≠ 0, respectively. The corresponding classical r-matrices have been constructed in [7] and [10]. In these papers different definitions of the classical r-matrix [8,2] were used, which we briefly discuss below. The Lax matrices By definition the Lax matrices L and M satisfy the Lax equation with respect to the evolution determined by the Hamiltonian H. Usually the matrices L and M take values in some auxiliary algebra g (or in its representation), whereas the entries of L and M are functions on the phase space of a given integrable system depending on a spectral parameter λ. For κ = 0 the Lax matrices for the Kowalevski gyrostat on the e(3) algebra were found by Reyman and Semenov-Tian-Shansky [9]; we denote them L_0(λ) and M_0(λ). These matrices belong to the twisted loop algebra g_λ based on the auxiliary Lie algebra g = so(3,2) in the fundamental representation. We have to underline that the phase space of the Kowalevski gyrostat and the auxiliary space of these Lax matrices are essentially different.
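Before moving on, a quick symbolic check of the brackets (2) can be done with sympy: extending them to polynomials by the Leibniz rule, one verifies that (y, y) + κ²(J, J) and (y, J) Poisson-commute with all generators, i.e. that they are two Casimir functions whose values a and b fix the four-dimensional orbit. The explicit component form used below is our reading of Eq. (2), inferred from the stated reductions to so(4, R), so(3, 1, R) and e(3).

```python
import sympy as sp

kappa = sp.symbols('kappa')
J = list(sp.symbols('J1 J2 J3'))
y = list(sp.symbols('y1 y2 y3'))
gens = J + y

def eps(i, j, k):
    return sp.LeviCivita(i, j, k)

# Assumed kappa-deformed brackets behind Eq. (2); they reduce to e(3) at kappa = 0:
# {J_i, J_j} = eps_ijk J_k,  {J_i, y_j} = eps_ijk y_k,  {y_i, y_j} = kappa**2 eps_ijk J_k
table = {}
for i in range(3):
    for j in range(3):
        jj = sum(eps(i + 1, j + 1, k + 1) * J[k] for k in range(3))
        jy = sum(eps(i + 1, j + 1, k + 1) * y[k] for k in range(3))
        table[(J[i], J[j])] = jj
        table[(J[i], y[j])] = jy
        table[(y[j], J[i])] = -jy          # antisymmetry of the bracket
        table[(y[i], y[j])] = kappa**2 * jj

def bracket(f, g):
    """Poisson bracket on polynomials, extended from `table` by the Leibniz rule."""
    return sp.simplify(sum(sp.diff(f, a) * sp.diff(g, b) * table[(a, b)]
                           for a in gens for b in gens))

# Candidate Casimir functions: their values fix a four-dimensional orbit
C1 = sum(t**2 for t in y) + kappa**2 * sum(t**2 for t in J)
C2 = sum(J[i] * y[i] for i in range(3))
print(all(bracket(C, g) == 0 for C in (C1, C2) for g in gens))   # expected: True
```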
Recall that the auxiliary Lie algebra so(3,2) may be defined as the set of all 5 × 5 matrices X satisfying

$$X^{T} J + J X = 0, \qquad J = \mathrm{diag}\,(1, 1, 1, -1, -1),$$

where T stands for matrix transposition. The Cartan involution on g = so(3,2) is given by σ(X) = JXJ, and g = f + p is the corresponding Cartan decomposition, where f = so(3) ⊕ so(2) is the maximal compact subalgebra of so(3,2). The pairing between g and g* is given by an invariant inner product that is positive definite on f. We extend the involution σ to the loop algebra g_λ by setting (σX)(λ) = σ(X(−λ)). By definition, the twisted loop algebra g_λ consists of matrices X(λ) such that σ(X(−λ)) = X(λ). The pairing between g_λ and g*_λ is induced by the same inner product. At κ ≠ 0 the Lax matrices for the Kowalevski gyrostat on so(4) were originally found in [6] as a deformation of the matrices L_0(λ) and M_0(λ). The algebraic nature of the matrix L(λ) (10) appears to be mysterious, because the diagonal matrix Y_c does not belong to the fundamental representation of the auxiliary so(3,2) algebra; hence the matrices (10) do not fit into the Reyman-Semenov-Tian-Shansky scheme [8]. In the next section we prove that the Lax matrix L(λ) at κ ≠ 0 is a trigonometric deformation of the rational Lax matrix L_0(λ) on the same auxiliary space. Classical r-matrix: operator notations The classical r-matrix is a linear operator r ∈ End g that determines a second Lie bracket on g by the rule

$$[X, Y]_r = \tfrac{1}{2}\bigl([rX, Y] + [X, rY]\bigr).$$

The operator r is a classical r-matrix for a given integrable system if the corresponding equations of motion with respect to the r-bracket have the Lax form (4), with the second Lax matrix M determined by r. In the most common cases r is a skew-symmetric operator such that

$$r = P_{+} - P_{-}, \qquad (11)$$

where P_± are the projection operators onto complementary subalgebras g_± of g. In this case there exists a complete classification theory; all details may be found in the book [8] and references therein. Marshall [7] has shown that the Lax matrices (5) for the Kowalevski gyrostat on e(3) may be obtained by direct application of this r-matrix approach. Let us introduce the standard decomposition of any element X ∈ g_λ,

$$X(\lambda) = X_{+}(\lambda) + X_{0} + X_{-}(\lambda), \qquad (12)$$

where X_+(λ) is a Taylor series in λ, X_0 is independent of λ and X_−(λ) is a series in λ^{−1}. If P_± and P_0 are the projection operators onto g_λ parallel to the complementary subalgebras (12), then the operator (13) built from these projections defines the second Lie structure on g_λ. According to [7], the r-matrix (13) is the classical r-matrix for the Kowalevski gyrostat. In the standard case (11) the operator ̺ is the identity; for the Kowalevski gyrostat, however, ̺ is a difference of projectors in the base algebra g = so(3,2) (see details in [7]). Classical r-matrix: tensor notations Another definition of the classical r-matrix is more familiar in the inverse scattering method [8,1,2]. According to [1], the commutativity of the spectral invariants of the matrix L(λ) is equivalent to the existence of a classical r-matrix r_12(λ, µ) such that the Poisson brackets between the entries of L(λ) may be rewritten in the commutator form

$$\bigl\{\, L(\lambda) \otimes \mathrm{Id},\; \mathrm{Id} \otimes L(\mu) \,\bigr\} = \bigl[\, r_{12}(\lambda, \mu),\, L(\lambda) \otimes \mathrm{Id} \,\bigr] - \bigl[\, r_{21}(\mu, \lambda),\, \mathrm{Id} \otimes L(\mu) \,\bigr], \qquad (14)$$

where r_21(λ, µ) = Π r_12(µ, λ) Π and Π is the permutation operator, Π X ⊗ Y = Y ⊗ X Π for any numerical matrices X, Y. For a given Lax matrix L(λ), r-matrices are far from being uniquely defined; the possible ambiguities are discussed in [8,1,2]. If the Lax matrix takes values in some Lie algebra g (or in its representation), the r-matrix takes values in g × g or its corresponding representation. The matrices r_12, r_21 may be identified with kernels of the operators r ∈ End g and r* ∈ End g*, respectively, using the pairing between g and g* (see discussion in [8]).
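As a concrete aside on the auxiliary algebra introduced at the beginning of this passage, the defining relation XᵀJ + JX = 0 and the Cartan decomposition g = f + p are easy to exercise numerically. The sketch below builds a random element of so(3,2) from its block structure, applies the involution σ(X) = JXJ and splits it into its f and p parts; the block parametrisation is our own illustration and is not taken from [8] or [9].

```python
import numpy as np

J = np.diag([1.0, 1.0, 1.0, -1.0, -1.0])

def in_so32(X, tol=1e-12):
    """Membership test for so(3,2): X^T J + J X = 0."""
    return np.allclose(X.T @ J + J @ X, 0.0, atol=tol)

def cartan_split(X):
    """Cartan involution sigma(X) = J X J; f = fixed part, p = anti-fixed part."""
    sX = J @ X @ J
    return (X + sX) / 2.0, (X - sX) / 2.0

def random_so32(rng):
    """Random element: skew-symmetric diagonal blocks, symmetric off-diagonal coupling."""
    A = rng.standard_normal((3, 3)); A = A - A.T        # so(3) block
    B = rng.standard_normal((2, 2)); B = B - B.T        # so(2) block
    C = rng.standard_normal((3, 2))                     # non-compact part
    X = np.zeros((5, 5))
    X[:3, :3], X[3:, 3:] = A, B
    X[:3, 3:], X[3:, :3] = C, C.T
    return X

rng = np.random.default_rng(1)
X = random_so32(rng)
f, p = cartan_split(X)
print(in_so32(X), in_so32(f), in_so32(p))   # all True
print(np.allclose(f[:3, 3:], 0.0))          # f is block-diagonal: so(3) + so(2)
```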
For the Kowalevski gyrostat the classical r-matrix r_12(λ, µ) entering (14) has been constructed in [10] by using the auxiliary Lie algebra g = so(3,2) in the fundamental representation. The generating set of this auxiliary space consists of ten matrices: four antisymmetric matrices S_k and six matrices Z_i and H_i. These matrices are orthogonal with respect to the trace form (7). The four matrices S_k form the maximal compact subalgebra f = so(3) ⊕ so(2) of so(3,2) and their norm is 1, whereas the six matrices Z_i and H_i belong to the complementary subspace p in the Cartan decomposition g = f + p and their norms are −1. The operators P_f and P_p are projectors onto the orthogonal subspaces f and p, respectively. In this basis the Lax matrix L_0(λ) (5) for the Kowalevski gyrostat on e(3) can be expanded over the generators S_k, Z_i and H_i, and, according to [10], the corresponding r-matrix is given by (18). We can say that this matrix r_12(λ, µ) is a specification of the operator r (13) with respect to the canonical pairing (7)-(9). In our case the matrix r_12(λ, µ) (18) turns out to be a purely numerical matrix, which depends only on the ratio λ/µ. This allows us to change the spectral parameters to λ = e^{u_1} and µ = e^{u_2} and rewrite this r-matrix in a form depending on the single parameter z = u_1 − u_2 via trigonometric functions. Therefore, the classical r-matrix for the Kowalevski gyrostat on e(3) should be considered a trigonometric r-matrix according to the generally accepted classification [2,8]. At the same time it is natural to keep the initial rational parameters λ, µ in the Lax matrix. A similar property holds for the periodic Toda chain, whose N × N Lax matrix depends rationally on the spectral parameters, while the corresponding r-matrix is trigonometric. We have to underline that, in contrast with the usual cases, this r-matrix is non-unitary. Moreover, it has a term (S_3 − S_4) ⊗ S_4, which is independent of the spectral parameters, and therefore r_12(z) ≠ −r_12(−z). In order to understand the nature of these terms we recall that the Lax matrix L_0(λ) has been derived in the framework of the Hamiltonian reduction of the so(3,2) top, whose phase space coincides with the auxiliary space. The corresponding classical r-matrix r^{so(3,2)}_{12}, calculated in [10], is a trigonometric r-matrix associated with the so(3,2) Lie algebra [2]. Thus, the constant term in (18) is an immediate result of the Hamiltonian reduction, which changes the phase space of our integrable system. We recall that classical r-matrices r_12(λ, µ) are called regular solutions to the Yang-Baxter equation (15) if they pass through the unity at some values of λ and µ. In our case we have the following counterpart of this regularity property:

$$\operatorname{res}\, r_{12}(z)\bigr|_{z=0} = \tfrac{1}{2}\bigl(P_{p} - P_{f}\bigr).$$

Classical r-matrix for the Kowalevski gyrostat on so(4) Now let us consider the Lax matrix L(λ) (10) for the Kowalevski gyrostat on the so(4) algebra. After the transformation L(λ) → cos φ · Y_c^{−1/2} L(λ) Y_c^{1/2} of the Lax matrix L(λ) (10) and the change of the spectral parameter λ = κ/sin φ, one gets a trigonometric Lax matrix on the auxiliary so(3,2) algebra, given in the two equivalent forms (19) and (20). In order to consider the real forms so(4, R) or so(3, 1, R) we have to use trigonometric or hyperbolic functions for positive and negative κ², respectively. If we put φ = κλ^{−1} and take the limit κ → 0 we recover the rational Lax matrix L_0(λ) (5) for the Kowalevski gyrostat on e(3). The Lax matrices L(φ) and L_0(λ) are invariant with respect to the involutions (21), which are compatible with the Cartan involution σ.
This simple observation shows that for the Kowalevski so(4) gyrostat the Reyman-Semenov-Tian-Shansky scheme [8] should be extended from the rational to the trigonometric case. One can prove that the trigonometric Lax matrix L(φ) (19) satisfies the relation (14) with the r-matrix (22). If we put φ = κλ^{−1}, θ = κµ^{−1} and take the limit κ → 0, we recover the classical r-matrix (18) for the Kowalevski gyrostat on the e(3) algebra. As above, the matrix r_12(φ, θ) satisfies the Yang-Baxter equation (15) and it has the same analogue of the regularity property, res r_12(φ, θ)|_{φ=θ} ≃ P_p − P_f. In contrast with the r-matrix (18) for the Kowalevski gyrostat on e(3), we cannot rewrite the r-matrix (22) as a function of the difference of the spectral parameters only. We suggest that it may be possible to present it in terms of elliptic functions of one spectral parameter after a proper similarity transformation and reparametrization. The well-known isomorphism between the so(3,2) and sp(4) algebras allows us to consider a 4 × 4 Lax matrix instead of the 5 × 5 matrix (20). The generating set Z_1, Z_2, Z_3 and S_4 may be represented by various 4 × 4 real or complex matrices; the other sp(4) generators are then constructed via (16)-(17). These matrices are orthogonal with respect to the trace form (7). The norm of the matrices s_k is 1/2, whereas the six matrices z_i and h_i have norm −1/2. If we put φ = κλ^{−1} and take the limit κ → 0 we find the rational Lax matrix for the Kowalevski gyrostat on e(3). According to [3], this matrix has a mysterious property: it contains the 3 × 3 Lax matrix L(λ) for the Goryachev-Chaplygin gyrostat on the e(3) algebra as its (1,1)-minor. Recall that the Goryachev-Chaplygin gyrostat is a well-known integrable system on the e(3) algebra. It is easy to prove that the (1,1)-minor of the trigonometric Lax matrix (23) cannot be a Lax matrix for any integrable system. This is compatible with the known fact that the Goryachev-Chaplygin gyrostat on e(3) cannot be naturally lifted to the so(4) algebra. Conclusion A few Lax matrices for deformations of known integrable systems have been obtained from their undeformed counterparts in the form (10) (see [6,10] and references therein). The important question in the construction of such matrices by the Ansatz L = Y_c · L_0 (10) is the choice of a proper matrix Y_c for a given rational matrix L_0(λ). In all known cases this transformation destroys the original auxiliary algebra, because the corresponding matrices Y_c do not belong to it. In this note we show that if one takes a Lax matrix of the Kowalevski so(4) gyrostat in the symmetric form L = Y_c^{1/2} · L_0 · Y_c^{1/2} and makes a trigonometric change of the spectral parameter, this restores the original auxiliary so(3,2) algebra and the new L respects the trigonometric current involution (21). This means that the deformation of the physical phase space from the orbits of e(3) to those of the so(4) algebra is naturally related to the transition from the rational to the trigonometric parametrization of the auxiliary current algebra. We calculated explicitly the corresponding r-matrices and demonstrated that the constant terms in them are due to the Hamiltonian reduction. The classical r-matrix (22) for the so(4) gyrostat is purely numerical, and the corresponding Lax matrix L(φ) (19) does not lead to an operator-ordering problem in quantum mechanics. Hence equation (14) holds true in the quantum case both for the Lax matrices (19) and (23).
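The degeneration from the trigonometric to the rational parametrisation used repeatedly above (put φ = κλ⁻¹ and let κ → 0) amounts to the elementary limit κ/sin(κ/λ₀) → λ₀, which the following snippet illustrates numerically; the value λ₀ = 2.3 is arbitrary.

```python
import numpy as np

# The substitution used in the text: lambda = kappa / sin(phi); putting phi = kappa/lambda0
# and letting kappa -> 0 should recover the rational spectral parameter lambda0.
lambda0 = 2.3
for kappa in (1.0, 0.1, 0.01, 0.001):
    phi = kappa / lambda0
    lam = kappa / np.sin(phi)
    print(f"kappa = {kappa:7.3f}  ->  kappa/sin(kappa/lambda0) = {lam:.6f}")
# The printed values converge to lambda0 = 2.3, illustrating how the trigonometric
# Lax matrix L(phi) degenerates to the rational L_0(lambda) in the kappa -> 0 limit.
```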
3,477.6
2005-12-20T00:00:00.000
[ "Mathematics" ]
Robust model selection for classification of microarrays. Recently, microarray-based cancer diagnosis systems have been increasingly investigated. However, cost reduction and reliability assurance of such diagnosis systems remain open problems in real clinical settings. To reduce the cost, we need a supervised classifier involving the smallest number of genes, as long as the classifier is sufficiently reliable. To achieve a reliable classifier, we should assess candidate classifiers and select the best one. In the selection process of the best classifier, however, the assessment criterion inevitably involves a large variance because of the limited number of samples and non-negligible observation noise. Therefore, even if a classifier with a very small number of genes exhibited the smallest leave-one-out cross-validation (LOO) error rate, it would not necessarily be reliable, because classifiers based on a small number of genes tend to show large variance. We propose a robust model selection criterion, the min-max criterion, based on a resampling bootstrap simulation to assess the variance of the estimated classification error rates. We applied our assessment framework to four published real gene expression datasets and one synthetic dataset. We found that a state-of-the-art procedure, weighted voting classifiers with the LOO criterion, had a non-negligible risk of selecting extremely poor classifiers, and, on the other hand, that the new min-max criterion could eliminate that risk. These findings suggest that our criterion provides a safer procedure for designing a practical cancer diagnosis system. Introduction Microarray technology 1 has been applied to predict the prognosis of cancer patients by comparing gene expression profiles in cancer tissue samples, and its predictive power has been demonstrated for many types of cancers. [2][3][4][5] Prognosis prediction systems based on microarrays have been expected to be new, efficient bio-markers that enable personalized cancer medicine. 6 We consider, in this paper, two problems in expanding the use of microarray-based prediction systems in real clinical settings, namely, observation cost and reliability. 7 To reduce the observation cost without losing reliability, there have been several efforts to design diagnosis systems involving small numbers of specially selected genes. Recently, specialized diagnostic microarrays harboring small numbers of genes, say, tens or hundreds of genes, have been developed based on a supervised analysis of a dataset obtained with a full microarray system involving thousands or tens of thousands of genes. 5,8,9 The measurement cost per patient becomes smaller by reducing the number of genes involved in such a system. If the number of spots on a chip is fixed, more spots corresponding to a single gene can be included in a chip, which enables more reliable measurement by averaging multiple spots of the same gene, and/or more efficient measurement by diagnosing multiple patients simultaneously on a single chip. 8 The manufacturing cost of a chip can be reduced by designing a mini-chip harboring a small number of spots. 5 To achieve a reliable predictor, a well-known trade-off problem exists even if the above-mentioned issue of observation cost is set aside: we should select as large a number of informative genes and as small a number of non-informative genes as possible.
We often need a certain number of genes to gain prediction accuracy, partly because multiple informative genes tend to provide different kinds of information which are complementary to each other for the prediction, and partly because, even when a set of multiple genes provides identical information, observation noise can be reduced by averaging them. On the other hand, since the prediction error increases when non-informative genes are included, we need to reduce the number of non-informative genes, putting the observation cost aside. These two demands are a trade-off because the process of determining whether each gene is informative or non-informative itself is not always reliable enough, due to non-negligible noise and a limited number of observations. In summary, our goal can be stated as to achieve a reliable predictor based on as few genes as possible, which is accomplished in a supervised analysis with the following three processes: • a gene selection process, • a supervised learning process that constructs predictors based on a labeled set of expression data of the selected genes, and • an assessment process for the constructed candidate predictors. There have been many options proposed for the first two processes, and comparisons of their combinations were made from the viewpoint of prediction error rates on test datasets, namely generalization performances. 10,11 In the present study, we use the following two procedures that were applied in the previous study. 12 • Weighted voting (WV) classifier 13 with gene selection based on absolute t-score (T-WV) • Linear-kernel support vector machine (SVM) 14 with recursive elimination of genes that have the smallest contribution to current classification performance (R-SVM). 15 These procedures construct multiple candidate predictors with various numbers of genes included in the predictors. Since their prediction performances for independent test datasets depend on the number of genes, their assessment is crucial. In the assessment process, the prediction performance of each candidate predictor is estimated based on the training data, and good estimation is obtained by reducing the estimation bias and the variance. Since the true performance on independent data in the future is unknown, we should select the best predictor with less bias and smaller variance of the estimated performance. In general, the bias-variance trade-off problem is inherent to all statistical models used for prediction, especially in the classification framework. 16,17 For prognosis prediction by microarray, several past studies focused on reducing the estimation biases of the prediction error rates in determining the best model 18-20 because inclusion of biases could lead to over-estimation of the classification performance of the proposed system. The cross-validation (CV) technique is used widely for predicting true classification error rate in samples that are not included in either the training or the test sample sets. Among the CV methods, the leave-oneout cross-validation technique (LOO) is often used because of its small bias. 18 These studies, however, paid little attention to the variances of estimated classification error rates. The estimated variances in the assessment process are important for practical applications. Even if a classifier has a sufficiently low error rate accompanied instead by large variance in prediction, it suffers from a high risk of having a large actual error rate when applied to unknown test samples. 
21 The LOO criterion sometimes selects a classifier involving a very small number of genes, or even a single gene. Although the single-gene classifier fits the 'as few genes as possible' criterion, classifiers involving redundant genes tend to exhibit lower noise and provide better prognosis prediction. 9 Several recent methods consider the estimated error rate variances, [21][22][23][24] and unsupervised methods 25,26 also minimize the variance of the model by focusing on the stability of the signatures instead of on the supervised class labels. However, there has been no discussion from the viewpoint of mini-chip design, namely, exploring a reliable predictor based on as few genes as possible. In the present study, we consider both the bias and the variance of performance estimation so as to achieve a reliable predictor. We applied a bootstrap sampling method to estimate the distribution of possible error rates, with bias and variance, and propose a min-max criterion to obtain a stable classifier. We conducted a simulation study and found that the min-max criterion tends to select better candidate predictors than the LOO criterion, especially when the number of samples is small. We then compared two supervised analysis procedures, T-WV and R-SVM, and showed that T-WV achieves reliable predictors with a small number of genes, indicating that T-WV with the min-max criterion is desirable for our purpose of obtaining a reliable predictor with as few genes as possible. Notations Let x_i = (x_{i1}, ..., x_{iM}) be the M-dimensional gene expression profile vector of the i-th sample, and y_i ∈ {−1, 1} a binary class label representing the binary status of the i-th sample, for example, tumor or non-tumor. The numbers of samples in the negative (y_i = −1) and positive (y_i = 1) classes are denoted as n_n and n_p, respectively. Suppose that we have a dataset D = {d_i | i = 1, ..., N} including N samples, where d_i = (x_i, y_i) is a pair of input (expression) and output (class label) of the i-th sample. By applying a supervised machine learning method to the dataset D, we construct a discriminant function h(x | D) such that we predict a label ŷ(x′) for a new input x′ by ŷ(x′) = sign h(x′ | D). T-WV method The WV method is a typical supervised machine learning method that employs the top k significant genes. Since the significance of the j-th gene is defined according to the following t-score, the entire procedure is referred to as the T-WV method:

$$t_j = \frac{\bar{x}_{pj} - \bar{x}_{nj}}{\sqrt{S_j^{2}\left(\frac{1}{n_p} + \frac{1}{n_n}\right)}},$$

where x̄_pj and x̄_nj are the average expression levels of the j-th gene over the training samples labeled 1 and −1, respectively, and S_j² is the pooled within-class variance of the j-th gene,

$$S_j^{2} = \frac{\sum_{i:\,y_i=1} (x_{ij} - \bar{x}_{pj})^{2} + \sum_{i:\,y_i=-1} (x_{ij} - \bar{x}_{nj})^{2}}{n_p + n_n - 2}.$$

The genes are ranked according to the absolute value |t_j|, and the top-ranked k genes are selected as significant genes; the set of these genes is denoted as C_k. The discriminant function obtained by the T-WV method is then constructed as

$$h_k(x) = \sum_{j \in C_k} t_j\,(x_j - \bar{x}_j),$$

where x̄_j is the average expression level of the j-th gene in the training samples. In the discriminant function h_k, the difference between the j-th gene expression and its average is weighted by its significance, i.e. the t-score. Note that the function h_k depends on the number k of significant genes, and thus we need to specify k appropriately. R-SVM method R-SVM is another typical supervised machine learning method, which was developed to select important genes for SVM classification. 15 An R code package is publicly available at http://www.hsph.harvard.edu/bioinfocore/R-SVM.html.
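Before turning to R-SVM in detail, here is a minimal sketch of the T-WV construction described above. The pooled-variance t-score and the voting rule h_k(x) = Σ_{j∈C_k} t_j (x_j − x̄_j) follow the textual description, but the exact normalisation used in the paper may differ, and the synthetic data at the end exist only to exercise the code.

```python
import numpy as np

def t_scores(X, y):
    """Absolute-t gene ranking used by T-WV. X: (N, M) expression matrix, y in {-1, +1}."""
    Xp, Xn = X[y == 1], X[y == -1]
    n_p, n_n = len(Xp), len(Xn)
    mp, mn = Xp.mean(axis=0), Xn.mean(axis=0)
    # pooled within-class variance
    s2 = (((Xp - mp)**2).sum(axis=0) + ((Xn - mn)**2).sum(axis=0)) / (n_p + n_n - 2)
    return (mp - mn) / np.sqrt(s2 * (1.0 / n_p + 1.0 / n_n))

def twv_fit(X, y, k):
    """Keep the k genes with the largest |t| and store their t-scores and grand means."""
    t = t_scores(X, y)
    genes = np.argsort(-np.abs(t))[:k]
    return {"genes": genes, "t": t[genes], "mean": X[:, genes].mean(axis=0)}

def twv_predict(model, Xnew):
    """Weighted vote: each selected gene votes (x_j - mean_j) weighted by its t-score."""
    votes = (Xnew[:, model["genes"]] - model["mean"]) * model["t"]
    return np.sign(votes.sum(axis=1))

# Tiny synthetic check (illustration only)
rng = np.random.default_rng(0)
N, M = 60, 500
y = np.repeat([1, -1], N // 2)
X = rng.standard_normal((N, M))
X[y == 1, :10] += 0.8          # ten informative genes
model = twv_fit(X, y, k=10)
print((twv_predict(model, X) == y).mean())   # training accuracy
```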
The discriminant function of a linear SVM is defined as

$$h(x') = \sum_{i=1}^{N} \alpha_i\, y_i\, \langle x_i, x' \rangle + b,$$

where x′ is a new input expression vector and x_i is the i-th sample expression vector in the training dataset. α_i and b are parameters to be determined so that training data points with different class labels are classified with the largest margin, and ⟨·,·⟩ denotes the inner product. Each element w_j of the weight vector w is defined as

$$w_j = \sum_{i=1}^{N} \alpha_i\, y_i\, x_{ij},$$

and the absolute value |w_j| represents the significance weight of the j-th gene in the current discriminant function. As in the T-WV method, the classification performance of SVM also depends on gene subset selection. R-SVM applies a recursive feature elimination (RFE) procedure. 27 In RFE, less significant genes in the current discriminant function are recursively eliminated, and the next discriminant function is constructed based on the new, smaller set of genes. Consequently, a sequence of discriminant functions with decreasing numbers of genes is constructed. Thus, the prediction performance of each discriminant function h_k depends on the number k of significant genes, which causes the same problem as in T-WV, i.e. setting an appropriate number k. In the following section, we describe a common way to set the number of genes in both T-WV and R-SVM. LOO model selection T-WV and R-SVM both produce many candidate classifiers, from which we should select the best one by an assessment process. Although the true performance of a classifier is measured as the classification accuracy on an unknown dataset given in the future, we should instead estimate the performance using the dataset obtained in the assessment process. Note that we refer to each candidate in the assessment process as a model, to clarify that we are assessing all procedures used to construct a classifier rather than assessing solely the classifier. In T-WV and R-SVM, a model is characterized by the number of significant genes that it includes. The LOO procedure has been widely used to estimate, or predict, the future performance of a classifier. In LOO, a classifier h is built using each leave-one-out dataset D_{−i}, i = 1, ..., N; that is, the i-th sample d_i is excluded from the dataset D in the training procedure and becomes a validation sample. The classification performance of h is assessed using the validation sample. After the assessments for d_1, ..., d_N, the LOO error rate of the classifier h, G_loo(h | D), is calculated as the averaged error rate

$$G_{loo}(h \,|\, D) = \frac{1}{N} \sum_{i=1}^{N} I\bigl( h(x_i \,|\, D_{-i}) \neq y_i \bigr), \qquad (4)$$

where I(R) denotes the indicator function that takes a value of one if condition R holds, and is otherwise zero. When we select the number k of significant genes by minimizing this LOO error rate, the model selection is said to be based on the LOO criterion. Resampling bootstrap method It is known that the error rates estimated by the LOO procedure are nearly unbiased. Molinaro et al 18 compared estimated generalization error rates between different resampling methods and showed that LOO had the smallest bias for a simulation dataset and a real microarray dataset. However, LOO tends to have a large variance despite its small bias, 28 because the classifiers constructed based on the leave-one-out datasets D_{−i} are quite similar to each other, whereas the data points used for validation vary widely. The large variance of the error rate estimation leads to a high risk of selecting a classifier whose 'true' performance is poor, and this risk becomes higher as the number of candidates becomes larger.
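The LOO error rate G_loo of Eq. (4) can be computed generically by retraining a candidate model N times, as sketched below. The nearest-mean classifier is a deliberately simple stand-in for T-WV or R-SVM, and the toy data are assumptions for illustration only.

```python
import numpy as np

def nearest_mean_fit(X, y):
    """A deliberately simple stand-in classifier (the paper uses T-WV and R-SVM here)."""
    return X[y == 1].mean(axis=0), X[y == -1].mean(axis=0)

def nearest_mean_predict(model, Xnew):
    mp, mn = model
    dp = ((Xnew - mp)**2).sum(axis=1)
    dn = ((Xnew - mn)**2).sum(axis=1)
    return np.where(dp < dn, 1, -1)

def loo_error(X, y, fit, predict):
    """G_loo(h | D): train on each leave-one-out set D_{-i} and test on the held-out d_i."""
    N = len(y)
    wrong = 0
    for i in range(N):
        keep = np.arange(N) != i
        model = fit(X[keep], y[keep])
        wrong += int(predict(model, X[i:i + 1])[0] != y[i])
    return wrong / N

rng = np.random.default_rng(1)
y = np.repeat([1, -1], 20)
X = rng.standard_normal((40, 100))
X[y == 1, :5] += 0.7
print(loo_error(X, y, nearest_mean_fit, nearest_mean_predict))
```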
When we assess the performance of many candidate classifiers with large variances, some of the candidates often exhibit remarkably low errors, even if their true performance is poor. This is the same problem as overfitting, which was originally recognized in parametric learning, especially when there are many parameters to be learnt. Therefore, it is important to reduce the estimation variance to obtain a robust classifier. We applied a bootstrap method to simulate possible variations of the given dataset and to obtain the distribution of LOO error rates over the range of that variation. We generated bootstrap datasets D*_b, b = 1, ..., B, where each D*_b is randomly sampled with replacement from the LOO dataset D_{−i}. The single validation sample d_i is evaluated by the classifiers that were trained on the different datasets D*_b, leading to a set of LOO error rates {G_loo(h | D*_b), b = 1, ..., B}, where each G_loo(h | D*_b) is given by Eq. (4) after replacing the dataset D with the bootstrap dataset D*_b. This set of LOO error rates is considered to be a distribution of G_loo and provides a guideline to determine the number of genes used in the T-WV classifier. Min-max model selection Using the simulated distribution of LOO error rates, we defined a risk score called the min-max criterion,

$$G_{boot}(k) = \mathrm{Per95}\bigl\{\, G_{loo}(h_k \,|\, D^{*}_b) \;\big|\; b = 1, \ldots, B \,\bigr\}, \qquad (10)$$

where 'Per95' denotes the 95th percentile of the set of values. Based on this risk score, an appropriate model (i.e. the number of genes, k) is selected as

$$\hat{k} = \arg\min_k\, G_{boot}(k). \qquad (11)$$

We considered the 95th percentile with the number of bootstrap replicates B = 100 as the representative of possible high error rates for each model with a different number of genes. The 95th percentile is a robust criterion for estimating the risk of selecting a bad model in view of the possibly asymmetric nature of the error rate distribution. Our approach is referred to as the "min-max" selection criterion because we minimized the risk of selecting a model whose expected prediction error rate was almost the maximum in the distribution of possibilities. This min-max model selection is likely to reject classifiers whose estimated error rates are distributed with a large variance, even if LOO shows the lowest error rate on a single dataset. Therefore, the min-max criterion reduces the instability stemming from the variation of possible future datasets that could be simulated by random sampling from a large pool of samples. In other words, the min-max criterion assumes an underlying game between an analyzer and nature: a dataset is given by nature, and a model is selected by the analyzer. For the analyzer to achieve stability, one good idea is to minimize the risk (Eq. (11)), which stems from the possibility that nature could provide a bad situation (and hence the classifier has been over-trained) (Eq. (10)). The percentile value of 95 and the number of bootstrap replicates B = 100 were determined by considering trade-offs between computation time, the estimation variance of the percentile point, and appropriateness as a representative of high error rates: • The computation time is proportional to the number of bootstrap replicates. • The estimation variance is a monotonic function of both the percentile value and the number of bootstrap replicates; namely, the variance becomes larger as the percentile value diverges from 50 and as the number of bootstrap replicates becomes smaller. • The criterion should evaluate possible high error rates even when the distribution of bootstrap samples is asymmetric. We did not select the 50th percentile, i.e. the median, because of the third reason above; we attempted to obtain a safe classifier rather than to show good average performance.
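The resampling bootstrap of the LOO error rate and the min-max rule of Eqs. (10)-(11) can be organised as in the sketch below. The function signatures are our own choices: `fit`/`predict` would be, for example, the T-WV routines sketched earlier, and `candidates` maps each number of genes k to its training function.

```python
import numpy as np

def bootstrap_loo_errors(X, y, fit, predict, B=100, rng=None):
    """Bootstrap distribution of LOO error rates: for each replicate b, every held-out
    sample d_i is validated by a model trained on a resample (with replacement) of D_{-i};
    each replicate yields one LOO error rate, Eq. (4) applied to D*_b."""
    rng = rng or np.random.default_rng()
    N = len(y)
    errs = np.empty(B)
    for b in range(B):
        wrong = 0
        for i in range(N):
            pool = np.delete(np.arange(N), i)
            idx = rng.choice(pool, size=N - 1, replace=True)
            model = fit(X[idx], y[idx])
            wrong += int(predict(model, X[i:i + 1])[0] != y[i])
        errs[b] = wrong / N
    return errs

def min_max_select(candidates, X, y, predict, B=100, percentile=95, rng=None):
    """Min-max criterion: pick the model whose 95th-percentile bootstrap LOO error is smallest."""
    scores = {}
    for k, fit in candidates.items():
        errs = bootstrap_loo_errors(X, y, fit, predict, B=B, rng=rng)
        scores[k] = np.percentile(errs, percentile)
    best = min(scores, key=scores.get)
    return best, scores
```

For B = 100 and N training samples this retrains each candidate B·N times, which is why the later discussion mentions the large computational cost of running the bootstrap simulation with R-SVM.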
Although the 99th percentile could be another representative of possible high error rates, we rejected it, because it relies on 1% of bootstrap samples, and will therefore lead to high variance especially with small B. The estimation variance of each percentile of the bootstrap error rate can be evaluated in terms of the standard deviation of the corresponding order statistic if the distribution of error rates is known. Table 1 shows the standard deviations (SDs) of several percentiles when the distribution of error rates is a standard normal distribution. These SDs are proportional to the SD of the distribution of error rates, implying that the SDs of the percentiles can represent their variation well even for non-normal distributions. results for real datasets We evaluated our method using four published real gene expression profile datasets: • Breast cancer van't Veer et al 3 obtained gene expression microarray data for approximately 5,000 genes for 78 + 19 breast cancer tissue samples. The samples were classified into favorable and unfavorable samples: patients with recurrence-free survival in five years and those with recurrence in five years, respectively. The authors trained supervised classifiers using 78 samples (34 favorable and 44 unfavorable samples), which we call the training dataset, and tested using 19 independent samples (7 favorable and 12 unfavorable samples), which we call the test dataset. The same group also provided a larger dataset consisting of 295 samples. 29 Among the 295 samples, 32 samples were also included in the former dataset 3 and 10 samples were censored in five years; hence, we used the remaining 253 (192 favorable and 61 unfavorable) samples for the second test dataset. • Colon cancer The colon cancer dataset 30 contains microarray expression data for 2,000 genes for 62 colon tissues. Among the 62 tissue samples, 40 and 22 were labeled as "tumor" and "normal," respectively, and these were used as the labels to be predicted. The NBL dataset 5 consists of microarray expression data for 5,180 genes for 136 patients. Among the 136 samples, 25 and 102 were labeled as "favorable" and "unfavorable" patients, respectively, according to their status at 24 months after diagnosis, and these were used as the labels to be predicted. The remaining nine samples of unknown status at 24 months after diagnosis were omitted. Among the 286 patients, 183 and 93 were labeled as favorable and unfavorable, respectively, and these were used as the labels to be predicted. We omitted 10 samples which were censored in five years. Although this dataset concerned breast cancer, we did not consider relationship between this set and the breast cancer datasets at the top of this list because these two datasets were assembled by entirely different systems and hence had fairly different characters in distribution. Considering different systems of microarrays together may be an important issue, but is beyond the scope of the current study. For each of the above four datasets, we trained T-WV and R-SVM classifiers with various numbers of genes using the training samples, and assessed their classification errors in terms of LOO, 3-, 5-and 10-fold-CV, and min-max criteria. In the case of the breast cancer dataset with large numbers of test samples, 3,29 we also assessed their classification errors in the test datasets. Figure 1 shows the results for the breast cancer dataset. 
The results with the T-WV classifier (left panel), indicated characteristic behaviors of the three criteria to assess the classification error rate, LOO (dashed line), 3-fold-CV (dotted line), and the proposed min-max criterion (solid line at the top of the blue area). The 90% interval of LOO error rates (blue area), which was estimated by the resampling bootstrap method, describes the estimation variance of error rates. The LOO error rate profile showed the lowest value with a small number of genes, k = 1, so that k = 1 was selected as the best number of genes by the LOO criterion. On the other hand, the 90% interval of the bootstrap distribution at k = 1 exhibited a large width in the error rate, and the 95th percentile error rate was above the chance level 0.5, suggesting large risk of the k = 1 classifier falling into a poor predictor around the chance level. Also, the LOO error rate at k = 1 was below both the 5th percentile and the 3-fold-CV error rate, indicating that the low LOO error rate at k = 1 could have been obtained by chance. The 3-fold-CV showed a smoother profile than those obtained by the LOO, and stayed in the midst of the 90% interval. The 3-fold-CV criterion selected a classifier with k = 5 where the 90% interval was narrower than that at k = 1. We also calculated 5-and 10-fold-CVs and observed similar curves to that of the 3-fold-CV. The proposed min-max criterion, i.e. the 95th percentile, selected a larger number of genes, k = 590. The LOO and 3-fold-CV error rates at k = 590 were higher than those at k = 1 and k = 5; however, we expected that the classifier of k = 590 would have a lower risk of being a poor predictor than those at k = 1 and k = 5. In the right panel of Figure 1, a similar comparison is shown between LOO, 3-fold-CV, and the min-max criteria with the R-SVM classifier. The LOO criterion showed an instability similar to that of T-WV, so that the lowest LOO error rate at k = 376 seems to have been obtained by chance. All criteria selected larger numbers of genes than in the cases of T-WV classifiers. In Table 2, test error rates of the selected predictors were assessed using two test datasets with 19 and 253 samples, where five criteria (LOO, min-max, and 3-, 5-and 10-fold-CVs) with two classifiers (T-WV and R-SVM) are compared. The min-max criterion outperformed the other criteria, LOO and k-fold-CVs, on both test sets. The LOO exhibited poor performance with 19 test samples and worse with 253 test samples whose test error rate was around the chance level. Intuitively, this result pointed out a defect of the LOO criterion in terms of the risk of taking a poor classifier, which has already been suggested by the 90% interval shown in Figure 1. The 3-, 5-and 10-fold-CVs achieved better performance in test error rates than LOO, but worse than the min-max criterion. T-WV tended to exhibit lower error rates than R-SVM with smaller numbers of genes, although we cannot conclude the general superiority of T-WV based on this single example. Test error rates on 253 samples were significantly worse than the error rates on 19 samples, possibly for the following reasons: • The 19 samples were by themselves easily classified. • The number of samples (19) was too small to reproduce the error rate with low variance. • The test data of 253 samples were gathered from different populations from those for the training data of 78 samples and the other test data of 19 samples. • The microarray measurement system differed between the two sets of data. 
The considerations above will be important when designing mini-chips based on training datasets. Although the last reason, difference in microarray systems, may not be very serious in the case of this breast cancer dataset, it would be serious in the case designing a mini-chip, because differences between systems will probably be inevitable due to the reduction of system size from a full-size chip to a mini-chip. We compared three criteria, LOO, min-max, and 3-fold-CV, with the two classifiers T-WV and R-SVM on the other three datasets (NBL, colon cancer and breast cancer Affymetrix) in Figures 2, 3 and 4, respectively. From the total comparisons over Figures 1-4, we observed the following tendencies: • Although the error rates estimated by LOO fluctuate as the number of genes increases, they stay mostly within the 90% interval. This suggests that the LOO estimation of the tuned number of genes includes a large variance and the character of the variance is well captured by the estimated 90% interval. • In contrast to the fluctuating profile of LOO error rates, the profiles of the 3-fold-CV and the 95th percentile (G boot ) exhibit smoother curves. This suggests a more stable character for the 3-fold-CV and the min-max criterion than the LOO criterion. • With T-WV, the 90% confidence interval was likely to be wide when the number of genes was small, k  10, indicating that prediction based on too few genes is risky; we occasionally get a model with poor performance. The 95th percentile is likely to show a higher error rate for a smaller number of genes, e.g. k  10, than for a large number of genes. Thus, the min-max criterion based on the 95th percentile can avoid risky prediction so that a smaller error rate is achieved on average. • The 3-fold-CV profile stayed almost in the middle of the 90% interval and showed a similar curve to the 95th percentile. However, there was difference between the 3-fold-CV and the 95th percentile in the range of 90% interval, which was prominent in T-WV with small numbers of genes, k  10. to different numbers of genes being selected; relatively large numbers of genes are selected by the min-max criterion in comparison to the 3-fold-CV. • In the case of T-WV, the 90% interval was likely to be narrow for datasets with large sample sizes. The numbers of training samples were 78, 62, 127 and 276, and the widths of the 90% interval were about 0.15, 0.15, 0.1 and 0.07, for breast cancer, colon, NBL and Affymetrix datasets, respectively. • In the case of R-SVM, LOO profiles fluctuated more than those of the min-max criterion, as well as with T-WV, suggesting that the min-max is a better model selection criterion than the LOO criterion. • Whereas the best performance was comparable between R-SVM and T-WV, a larger number of genes was required to achieve the best performance by R-SVM than by T-WV. Thus, T-WV employing a relatively small number of genes is more suitable for practical clinical applications, which is consistent with a previous finding. 12 • The confidence intervals for R-SVM were likely to be narrower than those for T-WV, implying that SVM, as a large margin classifier, is more stable against observation noise than T-WV. Even though we are not interested here in classifiers with a large number of genes, say k  1,000, this finding may be important for applications other than mini-chip construction. • The Affymetrix data set was unbalanced, with the numbers of favorable and unfavorable samples being 183 and 93, respectively. 
This suggests that the error rate would become 0.34 if every label prediction is called favorable, which actually occurred for R-SVM with k  10. Therefore, the narrow confidence interval in such a case did not correspond with stable prediction. The experiments showed that a reduction of risk is achieved by the proposed min-max criterion, and this was particularly convincing in the breast cancer dataset. simulation study on synthetic datasets In the previous section, we tested our new criterion on four real datasets; however, the ground truth was unknown and the number of samples was limited in many cases, which prevented us from obtaining strong evidence for the superiority of the min-max criterion. We conducted a simulation study based on artificial datasets to prepare a sufficient number of test samples, which will be more realistic in future clinical studies. We randomly generated expression profiles for 2,000 genes, where 30 out of the 2,000 were differentially expressed (DE) between two classes of samples and the others were not (non-DE). For non-DE genes, expression levels were generated from a normal distribution with mean zero , N(0,1), and for DE genes, the expression levels of samples with positive and negative class labels were generated from N(µ, 1) and N(-µ, 1), respectively, where we set µ = 0.5 for all DE genes. By this process, we generated synthetic datasets of 20 to 150 samples for training, and 1,000 samples for testing, where the numbers of samples with the two class labels were set to be equal. The proposed simulation scheme is illustrated in Figure 5. For each training dataset, the candidate classifiers involving various numbers of genes were trained and assessed, and the best numbers of genes were selected by the LOO and the min-max criteria, where the number B of the bootstrap in the min-max procedure was set at 100. The performance of the finally selected classifier was then assessed by a test dataset with 1,000 samples. We repeated this process with a randomly generated training dataset and assessed the corresponding test error rates by using a test dataset of 1,000 samples. The distributions of the test error rates were compared between different conditions. We designed the above setting to clarify how well the min-max criterion improves the model selection. The number of test datasets was set sufficiently large, and is commonly used in various settings of the other features to reduce the variance of error rates that stems from random sampling of the test dataset. The number of DE genes (30) and the strength of differential expression (µ = 0.5) were determined to examine typical situations that arise in realistic cases. We omitted other realistic features of datasets that may arise such as variation in the number of DE genes, strength µ, and the proportion of numbers of positive and negative samples, because they had shown no significant effect in our preliminary experiments. We also omitted correlations of gene expression patterns between DE genes because such correlations would not affect either T-WV or R-SVM. Figure 6 shows the distributions of test error rates of the T-WV classifiers selected by LOO and minmax, with 20, 50, 100 and 150 training samples. We found that there were certain levels of variance for both criteria, and the variance was larger for smaller numbers of samples. LOO sometimes showed much worse results than min-max, as indicated by the points in the bottom-right area of each panel in Figure 6. 
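The synthetic datasets described earlier in this section (2,000 genes of which 30 are DE with a ±µ mean shift, µ = 0.5, balanced classes, and 1,000 test samples) are straightforward to regenerate; the sketch below is an illustrative reconstruction, not the authors' original simulation code.

```python
import numpy as np

def make_synthetic(n_samples, n_genes=2000, n_de=30, mu=0.5, rng=None):
    """Synthetic dataset as described in the text: the first n_de genes are shifted by
    +/- mu depending on the class label, the remaining genes follow N(0, 1)."""
    rng = rng or np.random.default_rng()
    y = np.repeat([1, -1], n_samples // 2)      # balanced class sizes
    X = rng.standard_normal((len(y), n_genes))
    X[y == 1, :n_de] += mu
    X[y == -1, :n_de] -= mu
    return X, y

rng = np.random.default_rng(0)
X_train, y_train = make_synthetic(50, rng=rng)     # 20-150 training samples in the study
X_test, y_test = make_synthetic(1000, rng=rng)     # 1,000 test samples
print(X_train.shape, X_test.shape)
```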
Note that the number of test samples, 1,000, was so large that there was no significant increase in sampling variance. Table 3 shows the means and variances of the test error rates. We counted the number of true DE genes among the selected genes for each trial and found that the min-max criterion tended to include many of the 30 true DE genes, and that the ratio of true DE genes among the selected genes became larger as the number of training samples increased. In contrast, LOO sometimes selected a very small number of genes, leading to large error rates. Both criteria occasionally selected more than 30 genes, although this did not cause a large increase in the error if the selected genes included many of the true DE genes. As the number of training samples increased, the means and variances of the test error rates became smaller, which is consistent with the previous observation. Even when the number of training samples increased and the mean error rates decreased, however, the test error rates of LOO still showed larger variance than those of min-max. We also conducted a similar simulation with R-SVM; the simulation settings were the same as those for T-WV except that we performed 50 trials (half the number used for T-WV), and we excluded the case of 150 samples because of the large computational cost of the bootstrap simulation for R-SVM. Figure 7 shows the distributions of test error rates of R-SVM classifiers selected by LOO and min-max with 20, 50, and 100 training samples. A tendency similar to that of T-WV was observed in the cases of 50 and 100 samples, although in the case of 20 samples the error rate was almost at the chance level (0.5) for both the LOO and min-max criteria. Concluding Remarks In the present study, we investigated model selection methods with the aim of designing a reliable cancer prognosis predictor based on gene expression microarrays involving as small a number of genes as possible. We assessed the possible variation in the prediction error rate of each microarray-based predictor by simulating a distribution of classification error rates via a resampling bootstrap method. Accordingly, we proposed a novel min-max criterion to select a predictor from multiple candidates. In numerical comparisons using real and synthetic datasets, we showed that the conventional LOO estimation of error rates resulted in large variances; consequently, the LOO criterion had a large risk of choosing inappropriate classifiers that would exhibit extremely poor prediction performance. In contrast, we showed the stability of the min-max criterion relative to well-established statistical criteria including the LOO. We also compared two different supervised analysis procedures, T-WV and R-SVM, and found that, in general, T-WV performed best when it involved a small or moderate number of genes, whereas R-SVM performed best when it involved almost all genes, although the mean and variance of the best possible performances were not always significantly different between T-WV and R-SVM. Thus, overall, we recommend T-WV with the min-max criterion, which satisfied our demand for the most reliable predictor involving as small a number of genes as possible. It is important to note that we proposed our procedure to select a set of genes for designing a good predictor of cancer prognosis, rather than for determining a set of genes that have a statistically significant relationship to the prognosis; these purposes are, in general, different from each other.
In other words, the 'robust' model selection is meant to lower the risk of selecting an extremely poor predictor, rather than to select a stable set of genes. In fact, different research groups have reported prognosis prediction systems based on different sets of genes, derived from different sets of microarray data for the same type of cancer. 6 The microarray-based predictors for breast cancer were designed with 70 and 76 genes by two different research groups, 3,31 respectively, and these gene sets had only three genes in common. That is, the selected sets of genes were not stable at all; nevertheless, the 70-gene-based diagnosis system for breast cancer has been validated on an increasingly large number of new patients and authorized by the Food and Drug Administration in the USA. 6 In our own numerical experiments, we also observed that the number of common genes tended to be small between any two gene sets selected on the basis of different datasets generated by the resampling bootstrap (data not shown), although we obtained good predictors in the vast majority of cases, as shown above. Thus, it should be emphasized that such an unstable selection of gene subsets did not necessarily produce a poor predictor, as long as the predictor was selected by a robust model selection method. Once a prediction system based on a small number of genes has been developed, the system can be transferred not only to mini-chip microarrays but also to other easily accessible devices such as quantitative real-time polymerase chain reaction (RT-PCR) analysis, 32 which would be tractable if only tens of genes were targeted. Robust model selection methods, like the proposed one, will be needed especially when we consider such transfer work between different measurement devices, because a large bias is often expected between different devices. In general, when a procedure is designed to be robust against measurement variance, it is also robust against an unknown bias such as would appear in the above transfer; thus, our min-max criterion could also serve this purpose. In designing a practical tool for real scenes of clinical cancer therapy, new demands in informatics can always arise. As we have seen in this study, although past efforts in informatics have tended to pursue good average performance, minimizing the risk of picking a poor predictor in the face of possible variability in cancer diagnosis systems is becoming the next issue. As far as we know, there are few methods that directly pursue such risk minimization. Reducing cost by selecting relevant genes from high-dimensional gene expression profiles is a relatively well-investigated field of research; however, the combination of cost and reliability has not been well investigated. Thus, there is room to develop novel supervised classification methods that satisfy these demands for designing mini-chip systems, and future studies in cancer informatics should proceed in such directions.

Authors' contributions

IS performed the experiments and wrote the manuscript. IS and SO proposed the main idea that the variance influenced the performance of the classifiers. TT contributed to the construction of the simulation scheme and the development of the variance estimation methods. SI provided advice on the min-max strategy and supervised the present study. MO provided several topics concerning real cancer therapy and future directions. All five authors participated in the preparation of the final manuscript.
Supplementary Figures for "Robust Model Selection for Classification of Microarrays" Ikumi Suzuki, Takashi Takenouchi, Miki Ohira, Shigeyuki Oba, and Shin Ishii
Photon-counting computed tomography thermometry via material decomposition and machine learning Thermal ablation procedures, such as high intensity focused ultrasound and radiofrequency ablation, are often used to eliminate tumors by minimally invasively heating a focal region. For this task, real-time 3D temperature visualization is key to target the diseased tissues while minimizing damage to the surroundings. Current computed tomography (CT) thermometry is based on energy-integrated CT, tissue-specific experimental data, and linear relationships between attenuation and temperature. In this paper, we develop a novel approach using photon-counting CT for material decomposition and a neural network to predict temperature based on thermal characteristics of base materials and spectral tomographic measurements of a volume of interest. In our feasibility study, distilled water, 50 mmol/L CaCl2, and 600 mmol/L CaCl2 are chosen as the base materials. Their attenuations are measured in four discrete energy bins at various temperatures. The neural network trained on the experimental data achieves a mean absolute error of 3.97 °C and 1.80 °C on 300 mmol/L CaCl2 and a milk-based protein shake respectively. These experimental results indicate that our approach is promising for handling non-linear thermal properties for materials that are similar or dissimilar to our base materials. Introduction Annually, over 100000 patients undergo thermal ablation procedures for a wide range of benign and malignant tumors [1].As a primary example, high intensity focused ultrasound (US), which heats a focal region using a concave transducer, is an effective non-invasive treatment for prostate and other cancers [2,3].Currently, the delivery of the thermal dose is guided by invasive thermistors which can be fragile and only report temperatures from a limited number of points [4,5].Over the past decades, significant research efforts were devoted to extracting and analyzing thermal data from medical imaging modalities like US, magnetic resonance imaging (MRI), and computed tomography (CT).Among these modalities, CT is particularly advantageous for its real-time acquisition, high spatial resolution, and full-body coverage.In contrast, MRI has significant drawbacks in scanning speed, geometric accuracy, and cost, while US suffers from strong artifacts and restricted penetration through hard tissues and across air-tissue interfaces [6,7]. While ionizing radiation to the patient is the main problem associated with CT, solutions are being rapidly developed over the past years.For instance, interior tomography allows for targeted imaging of a region of interest [8].Also, data-driven methods (i.e., machine learning and deep learning) have been applied to low-dose image reconstruction and denoising [9].Synergistically, hardware-based innovations enabled photon counting CT (PCCT), which is a new frontier of medical imaging.PCCT can reduce radiation dose by eliminating electron noise, minimizing sensitivity to beam hardening through optimal X-ray photon weighting, increasing spatial resolution with fine detector pitch, and performing multiple material decomposition beyond the capabilities of dual energy CT [10,11].With FDA approval, these advancements have already been used in multiple clinical applications. 
The ability of CT to measure temperature changes is based on the change in the X-ray linear attenuation coefficient (LAC) induced by thermal expansion. In general, heat applied to a tissue causes an increase in volume and thus a decrease in density, which is observed as a drop in the LAC. The relationship between the CT number, which is a normalized measure of the LAC expressed in Hounsfield units (HU), and temperature is modeled as

ΔCT(T) ≈ −[1000 + CT(T_0)] α ΔT,    (1)

where T_0 is an initial baseline temperature and α is the material-specific thermal expansion coefficient [12]. The change in HU per degree Celsius is called the thermal sensitivity and is often approximated as a constant over the relevant temperature range (approximately 30 °C to approximately 90 °C). This linear trend is confirmed in prior studies which examined substances including water, fat, liver, kidney, etc. [13,14]. Overall, studies have shown that CT thermometry can reach an impressive accuracy of 3-5 °C, but only after calibration to a given material [1]. While the principle of CT thermometry is conceptually simple, the variability in thermal sensitivity between different tissues, different patients, and under different scanning protocols is a critical challenge [1]. It would be difficult or impossible to obtain these highly specific measures in vivo, and clearly there are substantial differences between in vivo and ex vivo measurements because of the different physiological conditions. Furthermore, exposure to intense heat during thermal ablation may alter the thermal properties of the target region, introducing additional errors.

To address these significant problems with CT thermometry, here we present the first approach for PCCT thermometry that allows for superior material decomposition and data-driven temperature mapping relying on basis material data that do not need patient-specific calibration. Using PCCT to simultaneously capture the LAC of a substance at several energy levels, we can perform material decomposition, which is demonstrated in

μ(E) = V_1 μ_1(E) + V_2 μ_2(E) + V_3 μ_3(E)    (2)

for three base materials without loss of generality [15,16]. Here μ_1, μ_2, and μ_3 are the known energy-dependent LACs of the bases and V_1, V_2, and V_3 are the corresponding unknown volume fractions. Physically speaking, the LAC of a mixture of the base materials must be the linear combination of the LACs of the components, with the corresponding volume fractions as the weighting factors.

Given the above, one might reasonably expect that thermal sensitivity could be linearly computed according to the material composition. In other words, given that μ_i(T) ≈ α_i(T − T_0) + μ_i(T_0), where T_0 is a reference temperature and α_i is the thermal sensitivity, a linear model for the LAC for n base materials would be

μ(T) ≈ ᾱ (T − T_0) + μ̄(T_0),  with ᾱ = Σ_i V_i α_i and μ̄(T_0) = Σ_i V_i μ_i(T_0),    (3)

where ᾱ and μ̄(T_0) are the volume-fraction-weighted thermal sensitivity and offset, respectively. In reality, thermal sensitivity relies primarily on thermal expansion, which is directly related to the strength of intermolecular bonds. Hence, the above linear model is generally inaccurate. Indeed, our experimental data presented in Fig. 2g show that thermal sensitivity follows a quadratic or higher-order relationship with the concentration of CaCl2, indicating that data-driven modelling is suitable for PCCT thermometry.
Since a fully connected neural network with proper activations can approximate any continuous function, it is an ideal choice for non-linear prediction of temperature given spectrally resolved LAC values, which is essentially a multivariate regression task.

Methods

In our feasibility study, we selected (1) water and aqueous solutions of (2) 50 mmol/L CaCl2 and (3) 600 mmol/L CaCl2 as our three base materials, since the human body is characteristically composed of water and bone. These substances were heated in a hot water bath with precision temperature control and immediately transferred to a custom-built rectangular cuboid phantom with a digital thermometer (DS18B20 thermometer, ±0.25 °C) shown in Fig. 1a. The thermal expansion of the acrylic phantom container is negligible in comparison to that of the substances being measured. The LAC values of the homogeneous base substances were measured in four energy bins (8-33 keV, 33-45 keV, 45-60 keV, and 60-100 keV) during transient cooling, at approximately every 5 °C temperature drop. The system consists of an X-ray source (SourceRay SB-120-350, 75 μm focus) and an X-ray photon-counting detector (ADVACAM WidePIX1x5, Medipix3, 55 μm pitch, 256 × 1280 pixels). In our experiments, the source was operated at 100 kVp and 100 μA with 0.1 mm copper filtration. The detector was set to the charge-summing mode with two thresholds for each acquisition. After 1 h of stabilization, projections were collected at 8 keV and 45 keV thresholds, followed by the same number of projections at thresholds of 33 keV and 60 keV. All projections were captured within a 1.5 °C change of the digital thermometer reading.

Since the X-ray tube emits photons in a small-angle cone geometry, we cannot assume that all beam paths through the phantom are parallel. Thus, a weak perspective method was used to compensate for beam divergence. This is illustrated in Fig. 1b and c. In the 2D projection, after removal of a small proportion of unstable pixels (greater than 3 standard deviations from the average), we selected a horizontal LOI that spans the width of the phantom [17]. Using x to denote position along the LOI, the difference between the line integral profiles of the phantom when it is filled with liquid and when it is empty was computed according to Eq. 4, where f stands for filled, e stands for empty, I are the raw photon counts, and L (254 mm) is the external side length of the square cross-section of the phantom. By taking the difference, the attenuation contribution of the phantom enclosure was eliminated. Finally, a sliding average over five pixels and a median filter over seven pixels were sequentially applied to remove noise from the profiles before the attenuation of the material was found from Eq. 5, where the correction factor of 1.23 is the magnification, defined as the ratio of the distance from the source to the detector (342.5 mm) to the distance from the source to the phantom center (278.5 mm), and 55E-3 mm is the length of a detector pixel. The error in μ_material is theoretically no more than 3% compared with a measurement using a parallel beam source. Note that our weak perspective method is rotation-invariant and uses all data points in the LOI to yield a high signal-to-noise ratio. The variance of all measurements was quantified by computing the LAC as the average of 10 adjacent LOI's.
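As a concrete illustration of the material decomposition underlying Eq. 2, the sketch below recovers the volume fractions of the three bases from LACs measured in the four energy bins by least squares with a sum-to-one constraint. The numerical LAC values are placeholders rather than the measured data, and the constrained solver shown here is one simple choice, not necessarily the one used in the study.

```python
import numpy as np

# Columns: assumed LACs of the three bases (water, 50 mmol/L CaCl2, 600 mmol/L CaCl2)
# in the four energy bins (8-33, 33-45, 45-60, 60-100 keV). Values are illustrative only.
M = np.array([[0.35, 0.36, 0.48],
              [0.26, 0.27, 0.33],
              [0.22, 0.23, 0.27],
              [0.19, 0.20, 0.22]])                 # cm^-1, placeholder numbers

mu_mix = np.array([0.41, 0.30, 0.24, 0.205])       # measured LACs of an unknown mixture

# Enforce V1 + V2 + V3 = 1 by appending the constraint as a heavily weighted extra row.
w = 1e3
A = np.vstack([M, w * np.ones((1, 3))])
b = np.concatenate([mu_mix, [w]])
V, *_ = np.linalg.lstsq(A, b, rcond=None)

print("volume fractions:", V, "sum:", V.sum())
print("reconstructed LACs:", M @ V)
```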
To predict the temperature changes, we designed a neural network with an input layer of eight nodes, two hidden layers of four nodes, and an output layer of one node. The training examples were generated from the base material data. As shown in Eq. 6, the first four elements of the input are a material's LACs at some temperature and the last four are the LAC residuals due to heating of the material above the 33 °C baseline. The multiplicative factor of 100 was introduced to scale the residuals into a similar range as the baseline. The network architecture is displayed in Fig. 3a. In total, 333 unique training inputs representing a reasonable range of temperatures were generated for each of the three base materials, where a small amount of Gaussian random noise was added to each input. The ReLU activation was used for all layers, mean squared error acted as the loss function, and stochastic gradient descent with a learning rate of 1E-5 was used as the optimizer. The dataset was split 80% for training and 20% for validation. The testing set consisted of data collected from 300 mmol/L aqueous CaCl2, which is similar in composition to the base materials, and from a milk-based protein shake (30 g protein, 4 g carbohydrates, 2.5 g fat per 340 mL), which is organic and dissimilar to the base materials. The uncertainty of the temperature predictions is quantified by evaluating the network on the testing data with randomly generated Gaussian noise. This noise is distributed according to the variance in attenuation obtained from 10 LOI's in the corresponding projection. Hence, we realistically simulate the range of attenuation values that are measured in practice.

Results and discussion

All the collected data are illustrated in Fig. 2, and the raw data and code are made openly available [18]. Figure 2a depicts the trend of X-ray LAC with increasing energy levels, which is generally expected. However, at low energies, our measured attenuation coefficients for water are lower than those reported by NIST [19]. This discrepancy is due to Compton scattering of high energy photons from our polychromatic source which were recorded as low energy photons. Figure 2b-f show the relationship between attenuation and temperature, which is a negative trend in all except the 33-45 keV channel. The reduced attenuation of a material due to thermal expansion leads to two competing effects: fewer high energy (45-60 keV and 60-100 keV) photons are Compton scattered while more low energy (33-45 keV) photons pass through. It is hypothesized that the former phenomenon has a greater effect since it occurs over a wider energy range. Hence, the net effect is that attenuation increases with increasing temperature in the 33-45 keV bin. Despite this effect, the data in the 33-45 keV channel are still informative and are incorporated into the network.

After 73 epochs of training, the MAE on the validation data smoothly converged from 43.13 °C to 3.40 °C. The network is evaluated on the testing materials by taking a baseline scan and computing the residuals from heating in an identical fashion as described in Eq. 6 for the training data.
On the testing set, the network achieves an MAE of 3.97 °C on 300 mmol/L CaCl2 over a temperature range of 35 °C to 60 °C and an MAE of 1.80 °C on a milk-based protein shake over a temperature range of 38 °C to 50 °C. Note that 300 mmol/L CaCl2 can be directly made from the bases (i.e., 50% water and 50% 600 mmol/L CaCl2) while the protein shake must be indirectly modeled since it contains significant amounts of other substances. In both cases, the network is highly accurate. These results are displayed in Fig. 3b and c.

Conclusions

In future studies, an active temperature measure (as opposed to passive cooling) could be used to ensure better thermal accuracy of the data points. A better calibrated PCD and increased source filtration can also reduce the adverse effects of fluorescence escape and beam hardening, respectively [20,21]. Additionally, more material bases can be incorporated for the neural network to cover more material types, and better neural networks can be designed to improve temperature prediction. Furthermore, tomographic PCCT on human tissue samples is necessary before in vivo studies can be planned. For preclinical evaluation, mouse experiments can be used to compare the efficacy of thermal ablation using classical approaches (e.g., thermistors) and the novel PCCT thermometry imaging presented in this letter. Clearly, PCCT thermometry will offer a thermal dimension to a spectral CT volume and may potentially bring new diagnostic and therapeutic tools to clinical practice. Furthermore, the idea of using material decomposition to improve thermometry may also be applied to phase contrast X-ray thermometry, which has been shown to be capable of volumetric thermal visualization [22].

In this study, we demonstrate a data-driven PCCT thermometry algorithm that can accurately predict the temperature of unknown materials given spectrally resolved LACs of a set of known base materials at various temperatures. This is an important result toward surgical translation as it presents a solution for handling variability in tissue properties without direct calibration to the tissue in vivo.

Fig. 1 Illustration of the experimental setup and procedure. a Photo of the photon counting CT configuration used to take 2D projections (256 × 1280 pixels) of the phantom; b the 200th row in the projection is selected as the line of interest (LOI) and used to obtain the difference between the projection profiles of the phantom when it is filled and when it is empty. The projections have been contrast enhanced for better viewing, and the vertical white lines corresponding to gaps between detector chips are removed during processing; c the difference in area between the projection profiles of the empty and filled phantom is used to determine the LAC of the liquid material.
Fig. 2 Summary of experimental results.

Fig. 3 a The fully connected neural network architecture used to non-linearly model the relationship between attenuation and temperature. The input to the network is the spectral attenuations of a material at a baseline temperature concatenated with the attenuation residuals due to heating; b visualization of network performance for predicting temperature on 300 mmol/L CaCl2 and a milk-based protein shake. The data points are labeled in the (xx, yy) format, where xx is the predicted temperature and yy is the ground truth temperature synchronously measured with a digital thermometer. The 95% CI of temperature prediction is shaded. Data from the testing samples were not included in the training data.
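To illustrate the regression model described above (eight inputs, two hidden layers of four nodes, one output, ReLU activations, MSE loss, SGD with a learning rate of 1E-5, 73 epochs), here is a minimal PyTorch sketch. The tensors are random placeholders standing in for the 333 noisy training examples per base material; this reflects the stated architecture and hyperparameters but is not the authors' code.

```python
import torch
from torch import nn

# Eight inputs: 4 baseline LACs and 4 heating residuals (residuals pre-scaled by 100).
model = nn.Sequential(
    nn.Linear(8, 4), nn.ReLU(),   # ReLU is applied to the hidden layers here
    nn.Linear(4, 4), nn.ReLU(),
    nn.Linear(4, 1),              # single output node: predicted temperature
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)

# Placeholder data: in the paper, 333 noisy examples per base material were used.
x = torch.randn(999, 8)             # [baseline LACs | 100 * (LAC residuals)]
t = torch.rand(999, 1) * 60 + 33    # target temperatures (degrees C), illustrative range

for epoch in range(73):             # the reported training ran for 73 epochs
    optimizer.zero_grad()
    loss = loss_fn(model(x), t)
    loss.backward()
    optimizer.step()
```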
Photoemission from the gas phase using soft x-ray fs pulses: An investigation of the space-charge effects An experimental and computational investigation of the space-charge effects occurring in ultrafast photoelectron spectroscopy from the gas phase is presented. The target sample CF$_3$I is excited by ultrashort (100 fs) far-ultraviolet radiation pulses produced by a free-electron laser. The modification of the energy distribution of the photoelectrons, i.e. the shift and broadening of the spectral structures, is monitored as a function of the pulse intensity. The experimental results are compared with computational simulations which employ a Barnes-Hut algorithm to calculate the effect of individual Coulomb forces acting among the particles. In the presented model, a survey spectrum acquired at low radiation fluence is used to determine the initial energy distribution of the electrons after the photoemission event. The spectrum modified by the space-charge effects is then reproduced by $N$-body calculations that simulate the dynamics of the photoelectrons subject to the individual mutual Coulomb repulsion and to the attractive force of the positive ions. The employed numerical method accounts for the space-charge effects on the energy distribution and allows to reproduce the complete photoelectron spectrum and not just a specific photoemission structure. The simulations also provide information on the time evolution of the space-charge effects on the picosecond scale. Differences with the case of photoemission from solid samples are highlighted and discussed. The presented simulation procedure, although it omits the analysis of angular distribution, constitutes an effective simplified model that allows to predict and account for space-charge effects on the photoelectron energy spectrum in time-resolved photoemission experiments with high-intensity pulsed sources. Introduction The extension of the photoelectron spectroscopy (PES) to time-resolved investigations has established itself in recent years as one of the most powerful and promising methods for the study of the electron, spin and lattice dynamics with picosecond and femtosecond time resolution [1][2][3][4]. The pump-and-probe PES technique was successfully employed, for example, in investigating the dynamics of surface chemical reactions [5], demagnetization processes [6][7][8][9], charge density waves [10], image potential states [11], relaxation of photoexcited molecules [12]. Advancements with the use of the pumpand-probe technique [13] are strictly related to the development of stable and intense sources with ultrashort (fs-ps) pulse duration. While the pump signal is often provided by a pulsed optical laser, the list of employed sources for the probe radiation includes the multiplication of a laser fundamental frequency through non-linear crystals [7,10] or high-harmonic generation (HHG) [14][15][16], synchrotron light [9,17,18], free-electron lasers (FELs) [19][20][21][22][23]. This collection of available sources covers a photon-energy range extending from the near ultraviolet to the hard X-rays. The reduction of the pulse duration to the scale of picoseconds or femtoseconds implies the arrival of a considerable number of photons on the sample within an ultrashort time, with the consequent simultaneous emission of a huge quantity of photoelectrons from a region of space having the size of the beam spot. 
Photoemitted electrons are subject to mutual Coulomb repulsion and to the attractive force of the positive charges left in the irradiated material system, which produces a variation in their kinetic energy and momentum. The measured spectrum consequently differs from the genuine distribution of the electron kinetic energy after the photoemission event. The two most important effects observed in the photoelectron energy spectrum are the shift of the position and the broadening of the PES structures (valence band or orbitals, core levels, Auger peaks) on the kinetic-energy scale [24]. Space-charge effects on the angular distribution of photoelectrons, i.e. the electron momentum broadening [25][26][27], are also of fundamental importance for angle-resolved PES (ARPES) experiments, but they will not be investigated in this paper. Space-charge effects in PES were observed for the first time in the eighties in multiphoton ionization experiments on gases that used optical and near-ultraviolet pulsed lasers [28,29]. Strategies to cancel or reduce them included decreasing the laser intensity [28,30], defocusing the incident beam [30] and reducing the gas pressure [31], but no systematic study or numerical simulation was carried out. Apart from the shift and broadening of the PES lines, the observed suppression of lower-order peaks in multiphoton ionization processes [32][33][34] was ascribed to the attractive force of the ions, which traps the slower electrons and prevents them from reaching the spectrometer [35,36]. Observations in photoemission experiments from solid surfaces followed, and more comprehensive studies were carried out using different pulsed sources and exploring a wide range of photon energies [19,[24][25][26][37][38][39][40][41][42][43][44][45][46][47]. These investigations showed that in PES experiments from solids the space-charge effects are dominated by the numerous low-energy (below about 20 eV) secondary electrons that constitute the most abundant part of the charge emitted into vacuum. The PES lines are typically shifted towards higher kinetic energies due to the repulsive force of the slow secondary electrons, which remain closer to the sample surface and push the faster primary photoelectrons away. Energy shift and broadening are both directly proportional to the number N_e of emitted electrons per pulse (and thus to the pulse intensity I) and inversely proportional to the linear size a of the beam spot on the sample surface, but they depend only slightly on the kinetic energy of the PES structures under study [24,27]. Space-charge effects have recently also been studied in photoemission from liquid solutions [48]. While a simple theoretical description of the space-charge-limited (SCL) current in thermionic valves or phototubes was well established decades ago [49], a rigorous treatment of the Coulomb interactions in the photoelectron cloud is much more complex, but some effective simplified models have been proposed. The space-charge effects are traditionally treated by distinguishing between a deterministic contribution, which represents the repulsive force exerted by the average (macroscopic) charge density ρ(r, t) of the electron cloud, and a stochastic contribution, which describes the electron-electron scattering events and takes into account the granularity and the stochastic nature of the electron cloud [50,51].
Generally speaking, the deterministic part leads to a rotation of the particle distribution in the position and momentum space, while the contribution of the stochastic part induces a broadening of the distribution [52]. Simulations of the deterministic effects are usually carried out following the trajectories as a function of time of a limited number of particles and apportioning the total charge of the electron cloud among the considered trajectories. The use of advanced softwares like SIMION ® , based on the solution of the Laplace's equation, allowed to include the space-charge effects in the simulation of the motion of the photoelectrons inside a timeof-flight (TOF) spectrometer [4,51,53]. This approach was particularly effective in the correction of the deterministic effect of the space charge phenomenon in time and angular photoemission from solid samples upon using time-of-flight momentum microscopy [54]. In this instrument, a strong electric field is applied in front of the sample, a condition that is not typically encountered in other spectroscopic setup where a field-free region exists close to the sample. Therefore, the large number of slow secondary electrons and the few faster core level and valence electrons emitted from the solid have a different expansion of their momentum-distribution discs in the strong extractor field of the instrument, allowing to reconstruct the experimental spectra limiting the space-charge effect influence. Long, Itchkawitz and Kabler proposed more than twenty years ago a model to predict the deterministic contribution to the energy broadening of the PES structures. In this model the photoelectrons are described as a negative charge flying between the plates of a spherical capacitor [41]. Despite its numerous simplifications, this model justifies the linear dependence of the energy broadening ∆E B on the ratio N e /a and revealed to be effective in quantifying the order of magnitude of ∆E B measured in PES experiments with different pulsed sources [55]. The stochastic contribution to the space charge has been known for many years in electron microscopy as Boersch effect [56] and it induces an energy broadening that adds to the effects of the charge density ρ(r, t) [25-27, 50, 51]. The distinction between deterministic and stochastic contributions is overcome by simulation procedures that reproduce the trajectories of as many electrons as those actually photoemitted in the experiment under study. In this way, the simulation takes into account both the long-range effects of the net charge distributions of electrons and ions and the short-range electron-electron interaction. The model presented in this work belongs to the latter class of space-charge simulations. Hellmann et al. [27] were the first to propose a N-body simulation scheme based on the Barnes-Hut algorithm [57], in which the trajectories of the N e photoelectrons (representing the photoelectron cloud generated by a single pulse) are determined with an approximate calculation of the force acting on each particle and by performing a leapfrog integration of the equations of motion of each electron. This method requires the use of the Treecode software [58] with slight modifications [27,55] and has been applied to successfully reproduce the space-charge effects observed in numerous measurements [20,26,27,44,47,59] as well as for a feasibility study of future experiments [53,55]. 
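To make the N-body propagation underlying these simulations concrete, the following sketch advances a cloud of photoelectrons in the field of fixed positive ions using leapfrog integration and a direct O(N²) Coulomb sum. It is a simplified stand-in for the Barnes-Hut Treecode used in the studies cited above, intended only to illustrate the integration scheme; units are SI, and no force softening or tree approximation is included.

```python
import numpy as np

E_CHARGE = 1.602e-19      # elementary charge (C)
M_E = 9.109e-31           # electron mass (kg)
K_COULOMB = 8.988e9       # Coulomb constant (N m^2 / C^2)

def accelerations(r_e, r_ion):
    """Direct-sum Coulomb acceleration on each electron from all electrons and ions."""
    acc = np.zeros_like(r_e)
    for i in range(len(r_e)):
        d_e = r_e[i] - r_e        # repulsion from the other electrons
        d_i = r_e[i] - r_ion      # attraction towards the (static) ions
        for d, sign in ((d_e, +1.0), (d_i, -1.0)):
            dist = np.linalg.norm(d, axis=1)
            dist[dist == 0] = np.inf                      # skip self-interaction / exact overlaps
            acc[i] += sign * K_COULOMB * E_CHARGE**2 / M_E * (d / dist[:, None]**3).sum(0)
    return acc

def leapfrog(r_e, v_e, r_ion, dt, n_steps):
    """Kick-drift-kick leapfrog integration of the electron cloud."""
    a = accelerations(r_e, r_ion)
    for _ in range(n_steps):
        v_half = v_e + 0.5 * dt * a
        r_e = r_e + dt * v_half
        a = accelerations(r_e, r_ion)
        v_e = v_half + 0.5 * dt * a
    return r_e, v_e
```

Histogramming 0.5·m·|v|² at the final step yields the simulated kinetic-energy distribution of the cloud.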
Experiments and simulations were also carried out to study the contribution to the space charge from the photoelectrons generated by the optical or near-ultraviolet pump radiation via multiphoton processes and its dependence on the delay time between the pump and probe pulses [26,46,47,59]. Nevertheless, the method of a full-spectrum N-body simulation of PES experiments has not been yet explored. The advantage of this method is the possibility to predict or correct for space-charge effects over the entire photoelectron spectrum. Most of the calculations reported in literature for the photoemission from solid surfaces are focused on studying the space-charge effects on a single specific structure (a core level or the valence band), assuming for the secondary electrons a simple rectangular energy distribution [20,27,44] or a delta-function distribution [53,55] and neglecting the contribution of the higher-energy photoelectrons (other photoemission structures and inelastically scattered electrons). It must be said that a PES experiment from a solid sample is not the easiest test to attempt a full-spectrum simulation. Most of the particles that enter in the simulation are secondary electrons, which constitute the large majority of the overall photoelectrons (80-99 %) [60]. The kinetic-energy distribution of these secondary electrons is essentially a very broad bump extending from 0 to about 20 eV, which is expected to be visibly affected by the space charge only at very high pulse intensity. Nevertheless, secondary electrons necessarily occupy the most part of the computation time but they must be included because they are primarily responsible for the distortion of the sharper and more interesting higher energy structures (core levels, valence band and Auger peaks). A much smaller number of secondary electrons are generated in photoemission from the gas phase. In these experiments the primary photoelectrons themselves are the main source of the shift ad broadening of the PES structures that compose their kineticenergy distribution, with a small contribution from inelastically scattered electrons. Measurements on gases then offer a simple way to compare the full-spectrum simulations with the experimental results with a lower computational effort with respect to the case of solids. Another simplification for a first test is the use of a moderate photon energy (extreme ultraviolet or soft X-rays) that avoids working on very extended 'survey' spectra presenting a lot of different structures. In the present paper, N-body calculations are used to reproduce the space-charge effects in photoemission spectra from gaseous trifluoroiodomethane (CF 3 I) excited through hν=95 eV ultrashort (∼100 fs) FEL radiation pulses. The measurements show the evolution of the shift and broadening of the PES structures with increasing the intensity I of the radiation pulses. Spectra acquired at low photon flux and with negligible space charge provide the initial kinetic-energy distribution Y in (E K ) for the photoelectrons in the simulation. The final kinetic-energy distributions Y f in (E K ) obtained from the simulations for different values of I are then compared with the experimental spectra. As we shall see, the good qualitative and quantitative agreement between experiment and simulations guarantees the reliability of the adopted theoretical and numerical approach. 
The presented computational tool is a first positive step towards the implementation of a standard procedure for taking into account spacecharge effects in PES, including investigations on solid samples or using other pulsed sources (first of all HHG and hard X-ray FELs), and provides an instrument for the partial correction of the acquired spectra and for the design of future experiments. The paper is structured as follows. Section 2 describes the experimental setup and the details of the PES measurements from the CF 3 I gas. The simulation method and the physical assumptions on which it is based are illustrated in Section 3. The results of experiment and simulations and the comparison between them are illustrated in Section 4 while Section 5 discusses the implications of the obtained outcomes. A summary of the work and of the principal conclusions is found in Section 6. Figure 1. Picture of the gas-phase x-ray photoelectron spectroscopy experimental apparatus. The apparatus was mounted on a position-adjustable XZθ-stage. The VUV-FEL beam comes from the right-hand side (yellow arrow). The inset shows a schematic drawing of the 9 mm long gas cell which has a 1 mmφ entrance aperture and a 5 mmφ exit aperture for the VUV-FEL beam. In the picture and in the inset the red arrows indicate the polarization direction of the beam. Experimental The photoelectron spectroscopy experiment was carried out at beamline-1 [61] of SACLA XFEL facility [62] at SPring-8 campus using ultrashort (∆t ∼100 fs), quasimonochromatic vacuum ultraviolet (VUV) pulses, the so-called pink-beam, at a repetition rate of 60 Hz. The VUV-FEL photon energy and the average pulse energy were 95 eV and 85 µJ/pulse, respectively. The corresponding photon flux was about I 0 =5.6·10 12 photons/pulse and the beam diameter at the sample position was measured to be about 10 µm. The experimental setup, equipped with the hemispherical electron energy analyzer SES-2002 and the gas-cell GC-50 (Scienta Omicron [63]) used for the measurements of electron spectra is shown in Figure 1. The inset shows a schematic diagram of the 9 mm long gas cell crossed by the VUV-FEL beam ionizing the molecules of the gas. The apparatus is regularly used for the gas-phase x-ray photoelectron spectroscopy experiments at the SPring-8 undulator beamline in the energy range between soft x-ray and hard x-ray [64]. For the present work, the apparatus was temporarily moved to the SACLA experimental hall. The lens axis of the analyzer was in the horizontal plane at right angles to the VUV-FEL beam direction and parallel to the polarization vector of the incident photons. The apparatus was mounted on a position-adjustable XZθ-stage, where X stands for the horizontal, Z for the vertical directions and θ for the rotation, respectively. All the electron spectra were recorded with the kinetic energy sweep mode of the electron analyzer. The kinetic energy scale of the analyzer was calibrated by measuring the Kr M-NN Auger electron spectrum and comparing the detailed structure observed in 25-62 eV region with the previous work by Aksela et al. [65]. For the present measurements on CF 3 I molecule, the pass energy of the hemispherical analyzer was set to 50 eV and a 1.5 mm × 25 mm analyzer entrance slit (longer side parallel to VUV-FEL beam) was chosen to give a theoretical energy resolution of about 230 meV. 
The angular acceptance of the analyzer is centered on the polarization direction of the radiation and is strongly elongated in the direction parallel to the beam due to the shape of the entrance slit. The linear magnification of the analyzer optics is ∼5. Since beamline-1 of SACLA does not incorporate a monochromator, the bandwidth of the pulsed VUV radiation is larger than 1.0 eV. By measuring the iodine 4d spectrum emitted from the CF3I target at low photon flux, and thus with negligible space-charge broadening, an overall spectral energy resolution of about 1.4 eV is estimated. The typical acquisition time for one spectrum was 15 minutes. In order to study the space-charge effect on the electron spectrum, the pulse intensity at the sample position was adjusted using a 2.6 m-long gas-attenuator chamber filled with N2 gas at a pressure varying between 1 and 90 Pa. Accordingly, the pulse intensity I of the attenuated beam ranged between approximately 1·10^7 and 5·10^12 photons/pulse at the sample position. The target sample for the present study was trifluoroiodomethane (CF3I) molecules fed into the main chamber via a gas cell. The pressure in the experimental chamber (outside the gas cell) was maintained at about 1.3·10^-3 Pa, corresponding to an estimated pressure of 1.26 Pa inside the gas cell. The gas cell was at room temperature during the experiment, hence the density of CF3I in the gas cell was ρ=3.0·10^14 molecules/cm^3. The pulse intensity I can be considered constant along the length of the gas cell. In fact, the attenuation length of the radiation in the gas cell is given by

λ = 1/µ = 1/(ρ σ_M),

where µ is the absorption coefficient and σ_M is the absorption cross section of a CF3I molecule for hν=95 eV photons. From the tabulated total atomic photoionization cross sections for C, F, and I [66], an approximate absorption cross section σ_M=19.1 Mb can be estimated. This results in an attenuation length λ=1.7 m, which is much greater than the 10 mm length of the gas cell.

Simulations

The present N-body simulation of the space-charge effects considers an initial kinetic-energy distribution of the photoelectrons Y_in(E_K) defined over the whole range of energies from 0 eV to about hν. Y_in(E_K) represents the photoemission spectrum unaffected by the space charge. The motion of each of the N_e electrons, subject to the repulsion of the other electrons and to the attractive force of the ions, is then calculated until the particles are spaced far enough apart that the Coulomb interactions are negligible. The final kinetic-energy distribution Y_fin(E_K) of the photoelectrons is then compared with the measured spectra, analyzing in detail the modifications induced by the space charge on the various photoemission structures in both experiment and simulation. With this approach, it is possible to unambiguously verify the reliability of the approximations embedded in the simulation procedure and to control the accuracy of the physical parameters at the base of the calculations, first of all the space-charge-free photoelectron spectrum Y_in(E_K). The calculations were carried out using a modified version of the open-source Treecode software [58] based on the Barnes-Hut algorithm [57] that was already applied to the simulation of the space-charge effects in the case of photoemission from a solid surface [55].
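As a quick numerical check of the attenuation-length estimate quoted at the end of the Experimental section (λ = 1/(ρσ_M)), which justifies treating the pulse intensity as constant along the gas cell, a few lines of Python suffice (unit conversion only, no new data):

```python
rho = 3.0e14          # CF3I number density in the gas cell (molecules / cm^3)
sigma_M = 19.1e-18    # molecular absorption cross section at 95 eV (19.1 Mb, in cm^2)
lam_cm = 1.0 / (rho * sigma_M)   # attenuation length, lambda = 1 / (rho * sigma_M)
print(lam_cm / 100, "m")         # ~1.7 m, far longer than the ~1 cm gas cell
```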
In the simulations, the trajectories of the photoelectrons are calculated considering both their mutual Coulomb repulsion and the Coulomb attraction to the positive ions left behind. The ions can be considered at rest during the flight of the electrons, the root-mean-square (rms) velocity of a CF3I molecule being 196 m/s at 300 K in the ideal gas approximation, whereas the speed of an electron with a kinetic energy of just 1 eV is 5.9·10^5 m/s. On the other hand, in the time between two successive pulses (0.017 s) the CF3I molecules travel an average distance of about 3 m, indicating that when the new pulse arrives the ions have left the gas cell or have repeatedly bounced off its walls, neutralizing their charge. So it can be assumed that every radiation pulse interacts with a gas of neutral molecules. All photoelectrons are considered to be emitted at the instant t=0. The simulation procedure requires one to define the initial configuration of the overall N_e photoelectrons, i.e. their spatial positions r_i and velocities v_i at the instant t=0 in a defined Oxyz reference system. The initial position r_i of each photoelectron i = 1, ..., N_e is randomly selected inside a cylinder that represents the region of the gas cell crossed by the FEL radiation beam. The radius of the cylinder a=5 µm corresponds to half the measured beam diameter, while the length of the cylinder is l=30 µm and is approximately equal to the spatial extension of the pulse, its duration being ∼100 fs as stated above. The axis of the cylinder corresponds to the x coordinate axis. The initial kinetic energy E_Ki of each photoelectron is randomly selected according to a normalized probability distribution f(E_K) which essentially reproduces a reference experimental photoelectron spectrum acquired at low radiation intensity (I = 6.6·10^7 photons/pulse at an attenuating N2 gas pressure p=80 Pa) and presenting negligible space-charge effects. The transmission function of the spectrometer is assumed to be determined by the angular magnification of the entrance lens and hence directly proportional to (E_K)^(-1/2) [67,68]. The low-flux spectrum acquired at p=80 Pa is then multiplied by a function proportional to √E_K, and a polynomial is used to extrapolate the spectrum into the non-measured low kinetic-energy range 0-7 eV. The area of the obtained function in E_K is finally normalized to 1, giving as a result the probability distribution f(E_K). The modulus of the initial velocity of each photoelectron is then calculated from the initial kinetic energy E_Ki as |v_i| = √(2E_Ki/m). Space-charge effects at different pulse intensities I are then simulated by varying the number N_e of photoelectrons emitted from the cylinder at t=0. For relatively low values of N_e (less than 30,000), in order to improve statistics the procedure is repeated with different initial configurations and the obtained final kinetic-energy spectra are averaged. The number of photoemitted electrons N_e included in a simulation can be related to the number of photons per pulse I through the ionization cross section. The absorption coefficient µ = ρσ_M multiplied by I represents the number of absorbed photons per unit length along the beam path, but this does not correspond to the number of photoelectrons per unit length n_e because Auger electrons are also emitted due to the core-hole decay.
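Before relating N_e to the pulse intensity, the initial-configuration sampling described above can be made concrete with a short sketch: positions uniform in the irradiated cylinder (a = 5 µm, l = 30 µm), kinetic energies drawn from f(E_K), isotropic emission directions, and speeds |v| = √(2E_K/m). The energy grid used for f below is a flat placeholder; in the actual procedure f is built from the measured low-flux spectrum.

```python
import numpy as np

E_CHARGE, M_E = 1.602e-19, 9.109e-31           # SI units
A_RADIUS, L_CYL = 5e-6, 30e-6                   # cylinder radius and length (m)

rng = np.random.default_rng(1)

def sample_initial_cloud(n_e, f_energy):
    """Positions inside the irradiated cylinder, isotropic velocities drawn from f(E_K)."""
    # Uniform points in a cylinder whose axis is the x axis.
    x = rng.uniform(0.0, L_CYL, n_e)
    r = A_RADIUS * np.sqrt(rng.uniform(0.0, 1.0, n_e))     # sqrt for uniform area density
    phi = rng.uniform(0.0, 2 * np.pi, n_e)
    pos = np.column_stack([x, r * np.cos(phi), r * np.sin(phi)])

    # Kinetic energies (eV) from the tabulated distribution f(E_K); placeholder grid here.
    e_grid, f_vals = f_energy
    e_kin = rng.choice(e_grid, size=n_e, p=f_vals / f_vals.sum())

    # Isotropic emission directions and speeds |v| = sqrt(2 E_K / m).
    costh = rng.uniform(-1.0, 1.0, n_e)
    sinth = np.sqrt(1.0 - costh**2)
    alpha = rng.uniform(0.0, 2 * np.pi, n_e)
    direction = np.column_stack([sinth * np.cos(alpha), sinth * np.sin(alpha), costh])
    speed = np.sqrt(2 * e_kin * E_CHARGE / M_E)
    return pos, direction * speed[:, None]

# Flat placeholder energy distribution on a 0-95 eV grid; the real f(E_K) comes from
# the low-flux spectrum multiplied by sqrt(E_K) and normalized to unit area.
e_grid = np.linspace(0.5, 95.0, 400)
pos, vel = sample_initial_cloud(n_e=5000, f_energy=(e_grid, np.ones_like(e_grid)))
```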
The core-hole lifetime is typically of the order of a few femtoseconds or less [69] and thus much shorter than the pulse duration, so that the emission of Auger electrons is assumed to be simultaneous with the emission of the primary photoelectrons. Conversely, modifications in the electron energy distribution due to the simulated space-charge effects occur on a timescale of at least hundreds of fs (see below) and thus significantly longer than the pulse duration. The 95 eV radiation is above the ionization thresholds for the F 2s and the I 4d core levels, whose binding energies E_B are ∼40 eV and 57.8 eV (4d_5/2) and 59.5 eV (4d_3/2), respectively [70,71]. For these values of binding energies in the L and N shells, the recombination of the core holes can be considered completely non-radiative [72,73]. Previous studies [70,74] demonstrated that the contribution of Auger electrons due to the creation of a photohole in the valence orbitals is negligible. So we can assume that the Auger recombination processes originate only from holes in the F 2s (LVV) and I 4d (NVV) states, and for the same reasons we also exclude the formation of multiple Auger electrons due to the cascade phenomenon. The overall number of photoelectrons per unit length of the beam path generated in the gas can then be estimated as the sum of primary photoelectrons and Auger electrons with the formula

n_e = ρ (σ_M + 3σ_F2s + σ_I4d) I,    (1)

where σ_F2s=0.60 Mb and σ_I4d=9.1 Mb are the tabulated absorption cross sections for the two core levels at hν=95 eV [66] (the factor 3 accounts for the three fluorine atoms of CF3I). The proportionality coefficient ρ(σ_M + 3σ_F2s + σ_I4d) corresponds to a production of 913 photoelectrons per µm of beam path and per billion photons. The number of photoelectrons taken into account in the simulation is then given by N_e = n_e l, where l is the length of the cylinder assumed as the source of the charged particles. The contribution of the positive ions is represented by a static random distribution of N_e particles with positive charge +e confined in the volume of the cylinder. For simplicity, all the photoelectrons (photoelectrons from the valence orbitals and from the core levels, Auger electrons, inelastic background) are assumed to be emitted isotropically. This approximation is not too crude considering that, as shown in the low-flux (I=6.6·10^7 photons/pulse) experimental spectrum (navy-blue continuous line in figure 2): i) roughly 40% of the photoelectrons are Auger and low-energy electrons whose angular distribution is not far from isotropic; ii) about 20% of the photoelectron spectrum are I 4d electrons, whose asymmetry factor β [75] at 95 eV photon energy amounts to 0.8 for an atom [76] and is lower than 0.5 for the similar CH3I molecule [77]; iii) 30% of the photoelectrons belong to a manifold of molecular valence orbitals of different symmetries and hence with a variety of asymmetry parameters; iv) a mere 10% of the photoelectrons are F 2s, which in the atomic approximation are credited with a sharp asymmetry parameter of 2 [75]. In order to estimate the effect of anisotropic emission on the shift and broadening of the photoemission structures, we performed a test simulation considering a toy Gaussian initial photoelectron energy distribution Y_in(E_K) centered at E_K=56 eV (approximately the energy of the F 2s electrons) and with a FWHM of 3.0 eV. The test simulation was carried out comparing the results obtained assuming an asymmetry parameter β=0 (isotropic emission) or 2, for a density of n_e=1970 photoelectrons per micron of beam path.
This linear density of photoelectrons corresponds in the simulation of the CF 3 I spectrum to an intensity I=2.2·10 9 photons/pulse through equation (1). For isotropic emission, the obtained final energy distribution Y f in (E K ) has a maximum at 52.5±0.3 eV and a FWHM of 11.6±0.3 eV whereas for the anisotropic distribution the maximum of Y f in (E K ) is at 52.2±0.3 eV and its FWHM is 13.6±0.3 eV. The isotropic assumption causes a negligible error on the energy shift and an inaccuracy of 15% for the energy broadening for the case of the most anisotropic emission, which concerns a limited number of the overall photoelectrons. Identical results were found in the test simulations also for a lower value of the linear density of photoelectrons, namely n e =465 photoelectrons/µm (I=5.1·10 8 photons/pulse). Given the initial positions and velocities of the photoelectrons at instant t=0, the simulation code calculates the force acting on each electron and then the new position and velocity after a leapfrog integration time step δt [27,55,58]. The procedure is then iterated simulating the dynamics of the photoelectron cloud in the interval between t=0 and a chosen ending time t f . It is then possible to obtain a histogram of the kinetic energy of photoelectrons at different times t and the final histogram at t = t f is the simulated spectrum Y f in (E K ) that shall be compared with the experiment. The integration time step must be carefully chosen because too large values determine a poor time resolution and produce results that significantly differ from the real integrated solutions; on the other end, a too short time step is uselessly time consuming [55]. Moreover, in the presented case a time step shorter than the pulse duration (∼0.1 ps) makes the approximation of instantaneous emission of the photoelectrons meaningless. Simulation tests carried out with different time steps δt revealed that negligible variations in the results are found decreasing δt below 0.125 ps. Furthermore, this value of integration time step is slightly longer than the pulse duration but shorter than the time scale at which significative modifications in the simulated electron energy distribution are observed (about 1 ps as demonstrated below) and is then adopted for the calculations. Results In figure 2 we show a selection of the experimental photoelectron spectra acquired in the kinetic-energy range 7-100 eV for different values of the exciting pulse intensity I. In the spectra reported in the figure, the pulse intensity varies between 6.6·10 7 and 7.9·10 10 photons per pulse and these values of flux were obtained changing the attenuating N 2 gas pressure in the range 80-30 Pa. It is evident a progressive shift towards lower kinetic energy of most of the structures, in particular the I 4d (∼37 eV) and the F 2s (∼56 eV) core levels. This can be ascribed to the attractive force of the positive ions left in the irradiated cylinder. Nevertheless, at low photon flux no shift is present for the structure of the valence orbitals that extends in the 65-85 eV kinetic-energy range. This absence of energy shift is caused by the shielding effect of the lower-energy electrons that during the flight are mostly located between the "motionless" positive ions and the faster photoelectrons from the valence band. Only for very high photon flux, the valence band experiences a clear shift towards higher kinetic energies. 
The maximal kinetic-energy value of the valence band, which is associated with photoelectrons from the highest occupied molecular orbital (HOMO) structure, increases from ∼85 eV to 91 eV in the reported spectra and exceeds 100 eV at full pulse intensity (not shown in figure 2). The experimental measurements also manifest an energy broadening of the reported structures. For example, it is evident that the detailed structure of the valence band [70,71] is lost when the intensity reaches 4.7·10^9 photons/pulse. The photoelectron spectra obtained from the N-body simulations are shown in figure 3 for a set of pulse intensities, related to the number of photoelectrons through equation (1) and ranging approximately from 1·10^8 to 1·10^10 photons/pulse. The reported spectra are the histograms of the kinetic energy of the photoelectrons at a time t=125 ps after the arrival of the radiation pulse and the photoemission event. Extending the simulation time t beyond 125 ps does not cause further changes in the energy distribution of the photoelectrons. Notably, similarly to the experimental spectra, the simulations predict a shift towards lower kinetic energy for the I 4d and the F 2s core levels and a fixed position for the valence-band states, which move towards higher kinetic energy only at very large photon fluxes. The simulations also foresee an energy broadening of the photoemission structures. It is important to compare quantitatively the values of energy shift and broadening and their dependence on the pulse intensity as obtained from the experiment and from the simulation procedures. To this end, we focus on the two core-level peaks I 4d and F 2s. The kinetic-energy position of their maximum and their energy width are extracted from both the experimental and the simulated spectra through a fitting procedure. The profile of both these peaks was fitted as the sum of two Gaussian functions with the additional contribution of a simple linear background. In particular, the use of a double Gaussian is necessary for the I 4d peak because of the spin-orbit splitting between the d_3/2 and d_5/2 components, which results in a splitting of 1.76 eV, in agreement with the tabulated data [78], with a fixed branching ratio of 0.67. The presence of a doublet structure in the F 2s peak is instead ascribed to a symmetry breaking induced by the photoionization process, which splits the molecular orbitals into states of e and a_1 character [79]. Analyzing the experimental spectra acquired with lower intensity I (small space-charge effects), the separation between the two components is found to be 3.4 eV, while the ratio between the intensities is 0.63, with the main peak at higher kinetic energy. These values are very similar to those obtained for the analogous CF3H molecular gas using hν=132.3 eV radiation [79]. The fitting of the experimental I 4d and F 2s photoemission lines acquired with a moderate intensity I=2. [24,27,41,55]. The straight lines in the aforesaid panels are linear fits to the reported experimental and simulated values, and their slopes, reported in table 1, are the coefficients of proportionality between the space-charge effects (∆E_S, ∆E_B) and the pulse intensity I, expressed in terms of electronvolts per billion photons in a single pulse. It is evident that, for the same pulse intensity, the simulation in general tends to slightly overestimate the space-charge effects with respect to the results provided by the experiment.
Nevertheless, the difference between experiment and calculation in the obtained values of slope reported in table 1 is never greater than a factor ∼3, a discrepancy which is similar to that found in analogous studies for photoemission from solid surfaces [27]. This corresponds to an acceptable quantitative agreement between experiment and simulations, considering the simplicity of the implemented model and the critical approximations that have been adopted. Using the simulation procedure it is also possible to study the dynamics of the energy distribution of the photoelectrons. As an example, in figure 5 (a) we report the evolution of the simulated I 4d spectrum for a moderate radiation intensity (I=1.0·10 9 photons/pulse) in the time range t=0-125 ps from the arrival of the radiation pulse and the concomitant photoemission event. The values of the energy position E M and of the FWHM w as a function of time extracted from the fitting of the photoemission peaks are reported in the panels (b) and (c). From the data reported in these figures, it is evident that the two space-charge effects present a clearly different dynamics. A progressive broadening of the electron energy distribution starts immediately after the photoemission event while the maximum of the peak initially does not move. The shift of the peak towards lower kinetic energy begins to be evident at a time t ≈4 ps. The variations of the I 4d energy position and FWHM mostly occur in the first 10 ps and the evolution of the photoemission structure can be considered completed at the end of the simulation at t=125 ps. Discussion The progressive shift towards lower kinetic energies of the PES structures with increasing the pulse intensity, observed in the experimental spectra and reproduced by the simulations, constitutes the most striking difference with the case of photoemission from solid surface, where a shift towards higher kinetic energies is commonly experienced. In the case of photoemission for the gas phase, the reduced kinetic energy of the photoelectrons is clearly ascribed to the attractive force exerted by the positive ions which basically remains in the cylinder of gas crossed by the FEL radiation beam during the flight of the negative particles. The positive charge of the ions exerts on the photoelectrons, which are escaping and moving away from the irradiated gas region, a force directed back towards the cylinder, determining a decrease of their kinetic energy. In photoemission from solids, the space-charge effects are instead dominated by the secondary electrons in the 0-20 eV kinetic-energy range [24,55], which, as already stated, constitute the majority of the overall photoelectrons [60]. The secondary electrons and the higher-energy electrons of the other spectral structures (valence band, core levels, Auger electrons and inelastic background) emerge at the same time from the radiation spot on the sample surface but are quickly spatially separated on account of their different velocities. The numerous and slower secondary electrons remain closer to the surface and as a result of the Coulomb repulsion push away the other electrons, which in this case acquire a higher kinetic energy. Positive charges are present also in photoemission from solids, in the form of mirror charge of the photoelectrons if the surface is metallic whereas for an insulator the positive ions left in the region of the radiation spot cannot be neutralized. 
Nevertheless, the attractive force that the positive charge on the sample surface exerts on the high-energy photoelectrons is mostly shielded by the interposed secondary electrons, resulting on the whole in an increased kinetic energy [24,27]. This screening effect of the positive charge is absent in the photoemission from a rarefied gas because the production of secondary electrons is much smaller, due to the low density and the extremely larger electron mean free path. In gas-phase photoemission, the effect of the attractive force of the positive ions and the related negative shift of the PES structures depends on the kinetic energy. Photoelectrons having different kinetic energy (and consequently different velocity) are subject to a progressive spatial separation after their common emission from the irradiated cylinder of gas. Electrons emitted from the valence band have the highest kinetic energy (70-85 eV) and move away from the cylinder faster than the other photoelectrons. The attractive force that the positive ions exert on these faster photoelectrons is mostly shielded by the interposed photoelectrons with lower kinetic energy that oppose a counteracting repulsion. This explains the absence of energy shift for the valence band structures at moderate intensities revealed in the experiment (figure 2) and reproduced by the simulation procedures (see figure 3). Immediately after the photoemission event there is no spatial separation between positive ions, slower electrons and faster electrons, all the charged particles being contained in the irradiated cylinder. Once the photoelectrons have left the cylinder, the positive charge of the almost immobile ions exerts on the photoelectrons around them an attractive force directed back towards the cylinder. The work performed by this attractive force reduces the overall kinetic energy of the photoelectrons determining the negative shift. On the contrary, random Coulomb interactions and scattering events of a photoelectron with the other electrons and the ions may either increase or decrease its net kinetic energy. This is the stochastic contribution to the space-charge effects, already mentioned in the Introduction, and is an important source of the energy broadening of the PES structures. This explains why the simulated evolution of the I 4d peak (see figure 5) indicates that the effect of energy broadening temporally precedes the energy shift. A random modification in the kinetic energy of the photoelectrons and the resulting energy broadening starts immediately after the photoemission event, when all the charged particles are in the irradiated cylinder. The energy shift requires a spatial separation between ions and electrons and can occur only when most of the electrons have left the positively charged cylinder. For example, in the simulation of the evolution of the I 4d peak, the energy shift appears only about t=4 ps after the emission, a time during which the I 4d photoelectrons (∼36 eV) travel a distance of ∼14 µm. Considering a isotropic emission of the photoelectrons, this corresponds to a mean distance traveled in a direction perpendicular to the cylinder's axis of ∼11 µm, which is comparable with the diameter of the beam. This indicates that the negative energy shift of the I 4d photoelectrons starts only when most of them have left the cylinder of the irradiated gas. 
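A quick numerical check of the distances quoted above for the I 4d photoelectrons (∼36 eV, ∼14 µm travelled in 4 ps, ∼11 µm mean transverse displacement) can be sketched as follows; the π/4 factor used for the isotropic average of the transverse velocity component is my assumption.

```python
import numpy as np

E_kin = 36.0 * 1.602e-19          # I 4d photoelectron kinetic energy, J
m_e = 9.109e-31                   # electron mass, kg
t = 4e-12                         # time at which the shift becomes visible, s

v = np.sqrt(2 * E_kin / m_e)      # ~3.6e6 m/s
d = v * t                         # distance travelled in 4 ps
d_perp = d * np.pi / 4            # mean component perpendicular to the cylinder axis
                                  # (<sin(theta)> = pi/4 for isotropic emission; my assumption)
print(f"v = {v:.2e} m/s, d = {d * 1e6:.1f} um, d_perp = {d_perp * 1e6:.1f} um")
```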
Conclusions The present study illustrates an experimental and computational investigation of the space-charge effects on the energy distribution of electrons photoemitted from a molecular gas, using the extreme-ultraviolet radiation generated by a FEL apparatus as a pulsed source. Shift and broadening of the PES structures were monitored as a function of the pulse intensity I. In contrast to photoemission from solid samples, where the PES structures are subject to Coulomb interactions that increase their kinetic energy, in photoemission from gases a progressive shift towards lower kinetic energies is observed. This different behaviour is ascribed to the secondary electrons, which dominate the space-charge effects in photoemission from solids but are much less numerous in the case of gases and cannot shield the attractive force of the positive ions. Nevertheless, the extent of this negative energy shift depends on the kinetic energy and is almost absent for the photoelectrons originating from the valence orbitals, owing to the shielding effect of the lower-energy electrons. Only at very high photon fluxes are the valence photoemission structures shifted, and then towards higher kinetic energies. All the evidenced space-charge effects are well reproduced by the N-body simulations that take into account the Coulomb interactions among the photoelectrons and with the positive ions. In the newly presented approach, a reliable initial energy distribution for all the photoelectrons is provided by a low-intensity measurement, and the simulation procedure calculates the modifications occurring in the full photoelectron spectrum. Despite some strong approximations adopted to reproduce the investigated system, a reasonable quantitative agreement between the experimental and simulated results was obtained. It must be borne in mind that the low energy resolution of the experiment (∼1.4 eV) limits the level of confidence attributable to this model and that future experiments with better energy resolution will be a tougher test of the effectiveness of the proposed model. Simulations also give information on the time evolution of the space-charge effects and show that the broadening effect precedes the energy shift of the photoemission structures. The results of the present study on photoemission from the gas phase validate the effectiveness of the proposed simulation procedure, which can be extended to experiments on solid surfaces and to the use of other types of pulsed sources. The illustrated method can also be applied to molecular dynamics calculations based on the particle-to-particle (p2p) approach, where the contribution of the positive charge of the ions is of critical importance.
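For illustration only, the sketch below shows what a brute-force particle-to-particle (p2p) Coulomb propagation step could look like; the integrator (velocity Verlet), time step, particle numbers, softening length, ion mass and initial conditions are all assumptions and do not reproduce the simulation procedure used in the study.

```python
import numpy as np

E_CHARGE = 1.602e-19   # C
M_E = 9.109e-31        # kg
K_COUL = 8.988e9       # N m^2 / C^2

def coulomb_accel(pos, charge, mass, eps=1e-7):
    """Pairwise Coulomb acceleration on every particle (brute-force p2p sum).
    `eps` softens the 1/r^2 singularity for close encounters."""
    diff = pos[:, None, :] - pos[None, :, :]             # r_i - r_j
    dist = np.sqrt((diff ** 2).sum(-1) + eps ** 2)
    np.fill_diagonal(dist, np.inf)                       # no self-interaction
    qq = charge[:, None] * charge[None, :]
    force = K_COUL * qq[..., None] * diff / dist[..., None] ** 3
    return force.sum(axis=1) / mass[:, None]

def step(pos, vel, charge, mass, dt):
    """One velocity-Verlet step."""
    acc = coulomb_accel(pos, charge, mass)
    pos_new = pos + vel * dt + 0.5 * acc * dt ** 2
    acc_new = coulomb_accel(pos_new, charge, mass)
    vel_new = vel + 0.5 * (acc + acc_new) * dt
    return pos_new, vel_new

# Toy initial condition: photoelectrons launched isotropically from a ~10 um cloud of ions at rest.
n = 200
rng = np.random.default_rng(1)
pos = rng.normal(0, 5e-6, (2 * n, 3))
charge = np.concatenate([np.full(n, -E_CHARGE), np.full(n, +E_CHARGE)])
mass = np.concatenate([np.full(n, M_E), np.full(n, 196 * 1.66e-27)])   # electrons + CF3I-like ions (assumed)
speed = np.sqrt(2 * 36.0 * E_CHARGE / M_E)                             # ~36 eV photoelectrons
dirs = rng.normal(size=(n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
vel = np.vstack([speed * dirs, np.zeros((n, 3))])

for _ in range(100):
    pos, vel = step(pos, vel, charge, mass, dt=1e-14)
```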
9,671.2
2020-06-28T00:00:00.000
[ "Physics" ]
The Development of Interactive Instructional Media Oriented to Creative Problem Solving Model on Function Graphic Subject The main purpose of this research was to produce interactive instructional media oriented to the Creative Problem Solving model. The research used the research and development method with the 4-D development model, which consists of the Define, Design, Develop and Disseminate stages; however, this research was limited to the Develop stage. The research subjects were 2 students in the individual limited test and 20 students in the field trials. The instruments used to collect the data were a questionnaire and documentation. The technique used to analyse the data was descriptive percentage calculation. The results showed that the development of interactive instructional media oriented to the Creative Problem Solving model went well, as evidenced by a mean learning outcome of 81.10 in the simulation of using the interactive instructional media. Introduction The function graph is one of the core teaching materials examined in calculus learning at the IT school level and applies to all existing departments. Teaching in mathematics starts from the concrete and then moves to the abstract, in line with one of the characteristics of mathematics, namely that its objects are abstract. Many students therefore have difficulty in learning mathematics because of this characteristic. One abstract topic that students still find difficult to understand and analyse is the function graph. This material is taught in the calculus subject and is, of course, linked to previous material. However, students still have difficulty analysing and finding solutions to problems related to the application of function graphs. The abstract nature of mathematics requires media to help learn it concretely, because relying only on verbal explanations while teaching will not help students understand it (Howlitschek & Joeckel, 2017).
Preliminary observations was conducted through interviews with mathematics lecturers and STMIK STIKOM Bali students showed that only a few students wanted to take lessons well when lecturers used computer-based learning media to help carry out mathematics teaching.While some other students were busy with their own activities.This is due to the fact that lecturers themselves rarely provide opportunities for students to find functional graph concepts independently, students are only told to watch and memorize a formula that has been presented through learning media, so that the activity and process skills are not well-honed.Student-oriented learning models are considered insufficient to be applied in order the learning process of function graphics becomes meaningful.Because in addition to implementing a student-oriented learning model.It is necessary to develop a learning media that is more innovative, contextual, and not be boring, and most importantly, students can be directly involved in using this media.So that students can use the knowledge that they have to guide students in building new knowledge, and of course this will be able to attract students to actively find solutions in solving problems from the case which relate to the material of function graphic.One of the developments of interactive learning media that can be developed is in the form of developing interactive instructional media oriented to the Creative Problem Solving model.Where interactive instructional media oriented to Creative Problem Solving model that be meant is instructional media that is oriented in accordance with the syntax of Creative Problem Solving model.The design of learning media will be made as interactive as possible so students can be directly involved in its use and will use a computer application in the form of GeoGebra. This research refers to some of the research results that is related to the development of instructional media to enrich the discussion and study of researchers.The research results on the use of GeoGebra in geomatic learning (Wihardjo et al., 2016) have similarities with what researchers have done in terms of utilizing GeoGebra applications, while the difference is output/product that was resulted.Wihardjo et al. used the utilizing of GeoGebra to find out the differences between learning outcomes of students who were taught using GeoGebra and without using GeoGebra, while researchers used the utilizing of GeoGebra to create interactive instructional media.The research results on the use of the Creative Problem Solving model in mathematics learning (Pratiwi et al., 2014) have similarities to what researchers have done in terms of utilizing the Creative Problem Solving model while the difference is the outcome/product which was resulted.Pratiwi et al. used the Creative Problem Solving model as a mathematical learning model that was applied in class to support student interest and learning outcomes, while researchers used the Creative Problem Solving model as an orientation for developing interactive instructional media.The research results on the utilizing of GeoGebra in mathematics learning (Nur, 2016) have similarities with what researchers do in terms of utilizing GeoGebra applications, while the difference is the output or product that was result.Nur used the utilizing of GeoGebra to carry out mathematics learning, while researchers used GeoGebra to create interactive learning media. 
Referring to the problems and research results those were related to Geogebra and the Creative Problem Solving model that have been done before, so that the researchers are interested in conducting research on The Development of Interactive Instructional Media Oriented to Creative Problem Solving Model. Method The method that was used in this study was the research and development method.The development design that be used was the 4-D development model, which has 4 stages (Trianti, 2007), namely: a. Define Stage The purpose of this stage was to set and define the learning conditions were begun with analysing objectives of the material boundaries that was developed by the device.This stage includes 3 main steps, namely: 1) Literature Study of Package Books, Instructional Materials Guide Book, and Software User Guide, 2) Software Installation, 3) Curriculum analysis, learning objectives, material and conceptual framework b.Design Stage The purpose in this stage was the development and manufacture of interactive learning media.This stage consisted of several steps, namely: 1) Compilation of benchmark reference content was the first step that connects between set up and design stages.Content was prepared based on the results of specific learning objectives formulation (basic competencies), and 2) Selection of formats. it was for example in the selection of this format was able to be done by reviewing the formats of existing devices and those were developed in more advanced countries. c. Develop Stage The purpose of this stage was to produce interactive instructional media that had been validated based on expert input.This stage included: 1) Media validation by experts was followed by revisions, 2) Simulation was the activity of operationalizing the teaching plan, and, 3) Limited trials with students.The results of stages 2 and 3 were used as the basis for revisions.The next step was further testing (field trials) with students who were in accordance with the actual class. d. Disseminate Stage This stage is the stage when it use of devices that have been developed on a wider scale for example in other classes, in other schools, by other teachers.Another goal is to test the effectiveness from the use of devices in teaching and learning activities.However, this research was limited to the Develop Stage. Research on the development of interactive instructional media oriented to Creative Problem Solving model was carried out at STMIK STIKOM Bali which is located at Jl. Raya Puputan No. 86 Renon, Denpasar-Bali.The data sources of this development research were obtained from students, lecturers and experts.The data types of this research were quantitative data and qualitative data.Quantitative data were consisted of the results from student responses and media validity while qualitative data were in the form of literature studies and curriculum analysis results. 
The method of data retrieval from student responses was taken using evaluation tool in form of test.Data from media validity results were taken using a questionnaire which was tested by experts with small revisions.The collection of qualitative data from this research was taken using documentation.Data from the questionnaires that were collected had been analysed using descriptive percentages counting in the form of scores.Formula that was used to calculate the percentages was as follows (Sudrajat, 2008).In providing meaning and decision making at the level of accuracy or effectiveness then percentage results were converted to the following levels of achievement scale: Results and Discussion Based on the stages of the 4-D development model that was used to develop interactive instructional media oriented to Creative Problem Solving model, then there are several things that will be explained, as follows. A. Define Stage There were five main steps that must be carried out in the defining stage, namely: Front End Analysis This stage was done to find out the problems that occur in the learning process of calculus subject at STMIK STIKOM Bali.Based on the results of direct observations that were conducted by researchers during the learning process in classroom and the results of interviews with lecturers of calculus courses, there were several problems were found, namely: a) Students still have difficulty analyzing and finding solutions to solve problems that are related to problems of applying the function graphic in calculus subject, b) Learning activities are still centered on lecturers, c) Tutorials and computer-based instructional media that have been used by lecturers in mathematics at STMIK STIKOM Bali have not been able to attract the learning interest of some students, d) The interest and concentration of students who lack focus when lecturers use computer-based instructional media to help do mathematics teaching, e) There is no direct involvement of students in the use of instructional media so students are less skilled in finding basic concepts of abstract mathematics, f) There is no use of interactive instructional media that is used in the process of learning Mathematics optimally yet, g) Limitations of instructional media when involving students in their use, h) There is no student-oriented interactive instructional media so as to be able to realize an active, creative, effective and enjoyable learning process, and i) Analysis of Students. STMIK STIKOM Bali students are referred to as students in this matter.The number of students which be analysed was 20 students (10 high ability people and 10 low ability people).Based on the results of observations in classroom learning, between students who have high and low abilities gave almost the same reaction or response to the learning process that occurs.Where only few student who wanted to take lessons well when lecturers used computer-based instructional media to help done teaching mathematics while some other students were busy with their own activities.This was due to the fact that lecturers themselves rarely provided opportunities for students to find functional graph concepts independently.Students were only told to watch and memorize a formula that had been presented through learning media, so that the activity and skills in the learning process were not well-honed. 
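Since neither the percentage formula nor the achievement-scale table is reproduced in the text, the following sketch shows a plausible implementation of the descriptive percentage counting described here; the scale thresholds and labels in achievement_level are hypothetical.

```python
def item_percentage(scores, max_score_per_item):
    """Percentage for one subject: obtained score over maximum possible score."""
    return sum(scores) / (len(scores) * max_score_per_item) * 100.0

def group_percentage(subject_percentages):
    """Overall percentage: F / N, i.e. the mean of the individual subject percentages."""
    return sum(subject_percentages) / len(subject_percentages)

def achievement_level(p):
    """Map a percentage to an achievement level (threshold values are hypothetical)."""
    if p >= 90:
        return "very good"
    if p >= 80:
        return "good"
    if p >= 65:
        return "fair"
    if p >= 55:
        return "poor"
    return "very poor"

# Example: three students answering a 10-item questionnaire scored 1-5 per item.
students = [[4, 5, 4, 4, 3, 5, 4, 4, 5, 4],
            [3, 4, 4, 3, 4, 4, 3, 4, 4, 3],
            [5, 5, 4, 5, 4, 5, 5, 4, 5, 5]]
percents = [item_percentage(s, 5) for s in students]
overall = group_percentage(percents)
print(percents, overall, achievement_level(overall))
```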
Task Analysis At this stage were done of analysis of the material that will be developed in interactive instructional media is oriented towards the Creative Problem Solving model.The conformity of the teaching material presented must be relevant to the core competence and basic competencies that had been determined by the education unit, based on a predetermined curriculum analysis.The materials that were presented on interactive instructional media oriented to Creative Problem Solving model, including: PRELIMINARY It is explain the concept of Graph Functions in general and the importance of studying Function Charts in this view.BASIC COMPETENCIES It presents the goals that must be achieved in studying the Function Graph in this view.RELATION it presents: 1) general set, 2) relation concept, 3) arrow diagram, 4) Cartesian diagram and 5) sequential pair set in this view.FUNCTION It presents about: 1) the concept of Functions and 2) Forms of Functions and Graphs (Linear Functions, Quadratic Functions and Rational Functions) in this view. Concept Analysis At this stage were done of identification and analysis from the concepts that will be presented in interactive instructional media oriented to Creative Problem Solving model.The concepts of interactive instructional media were oriented to syntax of the Creative Problem Solving model.The Creative Problem Solving model has four syntaxes, namely: understanding the problem, arranging a problem solving plan, implementing a problem solving plan and re-examining the problem solving. B. Design Stage This stage was carried out with the aim of designing interactive Instructional media oriented to Creative Problem Solving model so that a prototype of Mathematics learning media was obtained which was still in the form of an initial draft.In addition, at this stage the limited trial instrument was designed to test the draft.The stages that must be passed at this planning stage include: Arrangement of a Benchmark Reference Test At this stage was done of test preparation that was used to measure the ability of students to do the problem solving.The test that was arranged in this research was the final test to measure students' problem solving abilities after utilizing interactive instructional media oriented to Creative Problem Solving model in the learning process.The assessment score that was used to measure the results of the student's problem solving ability test was using the five scale of Benchmark Reference Assessment because this approach required a minimum percentage of mastery from student problem solving abilities.The Five Scale of Benchmark Reference Assessment score can be seen in table 2 below. Media Selection At this stage, the determination and selection of media was used to present the interactive instructional media oriented to Creative Problem Solving model.Based on the front end analysis, student analysis, task analysis, concept analysis and analysis of available support facilities, the media that was able be used to present interactive instructional media was oriented to Creative Problem Solving model in order to can directly involved students in their using and simulation was GeoGebra. 
The format Selection The interactive instructional media oriented to Creative Problem Solving model developed was created using one of the computer application programs that can be used as a instructional media for mathematics that was GeoGebra.GeoGebra has variety of facilities that can be used to create interactive instructional media that can involve students directly when applying it.It was can helped students in developing their own thinking and creativity in expressing a problem that was considered most appropriate to answer a case. C. Develop Stage At this stage the validation of interactive instructional media oriented to Creative Problem Solving model in the hope of getting input or revision from experts, so that the media that was developed can be a better media.The things those were done at this stage include: The Validation of Interactive Instructional Media Oriented to Creative Problem Solving Model The validation results those were conducted by three experts on interactive Instructional media oriented to Creative Problem Solving model can be seen in table 3 below.It was as for some inputs or revisions that were provided by experts on interactive instructional media oriented to Creative Problem Solving model that had been validated and the revision results that had been made can be seen in full on Table 4 below.been used to load material in everyday life. Simulation from the Use of Interactive Instructional Media Oriented to Creative Problem Solving Model At this stage was done of simulation activity used interactive instructional media oriented to Creative Problem Solving model on the learning process in class that only involved 2 students.First of all in this simulation process, students were involved in learning that utilized interactive instructional media oriented to Creative Problem Solving model then they was given a case where the assessment results of problem solving from this case can be used as the score of test results.The simulation test results on the used of interactive instructional media oriented to Creative Problem Solving model at STIKOM Bali can be seen in full on table 5 below.Two students were involved in the simulation activities on the used of interactive instructional media oriented to Creative Problem Solving model.The averages of learning outcomes were 81.00.If the average value was matched with the percentage level on achievement of the five scale so the simulation activities that used interactive learning media oriented to Creative Problem Solving model were appertained to run well.In a limited trial that involved 20 students were used to measure student responses on the use of interactive instructional media oriented to Creative Problem Solving model at STIKOM Bali by the average of learning outcomes were 80.65.If the average value was matched with the percentage level of achievement on the five scale so the used of interactive instructional media oriented to Creative Problem Solving model at STIKOM Bali had included going well. 
Conclusions and Suggestions The development of interactive instructional media oriented to the Creative Problem Solving model, which used the 4-D development model design and utilized the GeoGebra application program, went well because it produced interactive instructional media oriented to the Creative Problem Solving model that is feasible to use. This was evidenced by an average learning outcome of 81.00 in the simulation of the use of the interactive instructional media oriented to the Creative Problem Solving model. Student responses to the use of the interactive instructional media oriented to the Creative Problem Solving model were good, as evidenced by an average student learning outcome of 81.10 in the limited trials. To obtain interactive instructional media oriented to the Creative Problem Solving model of higher quality, the developed media should be used on a broader scale, such as in other classes, in other schools and by other teachers, so that the assessment results obtained will be more accurate and detailed. Future development of this instructional media should use the Borg and Gall development model, so that better and more thoroughly tested learning media can be obtained, because with the Borg and Gall model more trials are conducted, such as initial trials, field trials and usage trials. n = the sum of all questionnaire items. To calculate the percentage for the entire group of subjects, the following formula was used: percentage = F / N, where F = the sum of the percentages of all subjects and N = the number of subjects. Table 1. Achievement Level Scale. Table 2. The Five-Scale Benchmark Reference Assessment score for the students' problem-solving ability test. Table 3. The Results of Expert Validation on the Interactive Instructional Media Oriented to the Creative Problem Solving Model. Table 4. The Revision Results of the Interactive Instructional Media Oriented to the Creative Problem Solving Model. Table 5. Simulation Results on the Use of the Interactive Instructional Media Oriented to the Creative Problem Solving Model at STIKOM Bali. At this stage a limited trial on the use of the interactive instructional media oriented to the Creative Problem Solving model was carried out with 20 students taking the Calculus subject. The results of the limited trial can be seen in table 6 below. Table 6. Limited Test Results of Using the Interactive Instructional Media Oriented to the Creative Problem Solving Model at STIKOM Bali.
4,412.8
2019-02-04T00:00:00.000
[ "Computer Science", "Education" ]
Numerical modeling of flow structure and heat transfer in a mist turbulent flow downstream of a pipe sudden expansion Turbulent droplet-laden flow downstream of a sudden pipe expansion is numerically studied using Eulerian two-fluid model. The model is used to investigate the effect of droplet evaporation on the particle dispersion and on the gas phase turbulence modification. Turbulence suppression in the case of evaporating droplets is hardly observed near the wall, and the level of turbulence tends to the corresponding value for the single-phase flow regime. In the flow core, where evaporation is insignificant, a decrease in the level of gas turbulence (to 20% as compared to a single-phase flow) can be observed. The maximal effect of droplet evaporation is obtained in the wall region of the tube. A considerable increase in the maximal value of heat transfer on adding the evaporating droplets to the separated flow is shown (more than 2-folds as compared to the single-phase flow at a small value of droplet mass concentration of ML1 ≤ 0.05). The addition of the solid non-evaporating particles causes a slight increase in the maximum value of heat transfer in the case of small particles and a decrease in heat transfer in the case of large particles. Introduction Two-phase droplet-laden separated flows are observed in many engineering and natural processes, such as cyclonic separation, flame stabilization in internal combustors, pneumatic transport, and many others.The separated flow is a typical two-dimensional shear flow consisting of several zones: main core flow, shear layer, recirculation, and flow relaxation regions.Each zone has specific features and typical length and time scales.The interactions between finely dispersed phase and turbulent gasphase flows are very complex, and many of these interactions remain poorly understood [1].Gasdroplet flows are usually inhomogeneous and anisotropic and often include flow separation and heat transfer between two-phase flow and the wall surface.A review of experimental and numerical works revealed that the effect of dispersed phase on the turbulence modification by particles in two-phase separated flows is an extremely complex phenomenon even in the case of gas-particle flows without phase changes. The aim of the present short communication is to examine the effect of evaporating droplets on gas turbulence modification and heat transfer enhancement in sudden pipe expansion flow.The number of papers that have examined two-phase separated flows with evaporating droplets is very limited [2,3].These few studies do not provide sufficient information to evaluate all the factors affecting the flow structure, particle dispersion, and turbulence modification in sudden pipe expansion flows with evaporating droplets. 
The mathematical model and numerical realization The authors of the present paper used the Eulerian two-fluid approach for the modeling of the dispersed phase, which treats the particulate phase as a continuous medium with properties analogous to those of a fluid [3]. This technique involves the solution of a second set of Navier-Stokes-like equations in addition to those of the carrier (gas) phase. A one-point probability density function (PDF) [4,5] of the particle velocity is used for describing the action of small particles on turbulence. The governing mean and fluctuating equations for both phases are described in detail in [3]. The gas-droplet turbulent flow is numerically predicted by the set of steady-state axisymmetric Reynolds-averaged Navier-Stokes (RANS) equations. Gas-phase turbulence was modeled with the use of an elliptic-blending second-moment closure [6]. Two-way coupling is achieved between the dispersed and carrier phases in the mean and fluctuating transport. The particles' mean and fluctuating flow interactions are described by the two-way coupling model [3]. The volume fraction of the dispersed phase is low (Φ1 < 10⁻⁴) and the droplets are fine (d1 < 100 μm); therefore, the effects of inter-particle collisions are neglected when treating the hydrodynamic and heat and mass transfer processes in the two-phase flow [3]. Validation analysis of the developed model for both single-phase and two-phase flows in a sudden pipe expansion was performed in [3]. The mean transport equations for both the gas and dispersed phases and the second-moment closure model are solved using a control-volume method on a staggered grid. The QUICK scheme [7] is used to approximate the convective terms, and the second-order accurate central difference scheme is adopted for the diffusion terms. The velocity correction is used to satisfy continuity through the SIMPLEC algorithm [8], which couples velocity and pressure. The first cell is located at a distance y+ = yU*/ν = 0.3-0.5 from the wall, where U* is the friction velocity obtained for the flow in the inlet pipe. At least 10 control volumes have been generated to be able to resolve the mean velocity field and turbulence quantities in the viscosity-affected near-wall region (y+ < 10). Grid sensitivity studies are carried out to determine the optimum grid resolution that gives a mesh-independent solution. For all numerical investigations performed in the study, a basic grid with 350×100 control volumes along the axial and radial directions is used. A more refined grid is applied in the recirculation region and in the zones of flow detachment and reattachment. Numerical results and discussion Numerical modeling is carried out for a monodisperse gas-droplet mixture in the initial cross-section, and then the droplet size changes due to the evaporation process. The diameter of the pipe is 2R1 = 20 mm before expansion and 2R2 = 60 mm behind the expansion, the expansion ratio is ER = (R2/R1)² = 9, and the step height is H = 20 mm. The mean-mass gas flow velocity before detachment is Um1 = 10-30 m/s, and the Reynolds number for the gas phase varies within the range ReH = HUm1/ν = (1.33-4)×10⁴. The initial velocity of the dispersed phase is UL1 = 0.8Um1. The Stokes number in the mean motion is Stk = 0.04-5, and the initial mass fraction changes within the range ML1 = 0-0.1. Computations are performed at a uniform wall heat flux density qW = 1 kW/m². The case with particles of d1 = 30 μm, Stk = 0.4 is the basic one in this study.
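As a sanity check of the near-wall grid requirement quoted above (first cell at y+ ≈ 0.3-0.5 based on the inlet-pipe friction velocity), the following sketch estimates the corresponding physical wall distance; the Blasius friction-factor correlation and the air-like viscosity are my assumptions, not necessarily those used by the authors (the viscosity is chosen so that the Reynolds number matches the quoted ReH at Um1 = 10 m/s).

```python
import numpy as np

nu = 1.5e-5           # kinematic viscosity, m^2/s (assumed; consistent with ReH = 1.33e4 at 10 m/s)
U_m1 = 10.0           # mean gas velocity in the inlet pipe, m/s
D1 = 0.02             # inlet pipe diameter 2*R1, m

Re_D = U_m1 * D1 / nu                 # pipe Reynolds number
f = 0.316 * Re_D ** -0.25             # Blasius friction factor (assumption)
u_tau = U_m1 * np.sqrt(f / 8.0)       # friction velocity U*

for y_plus in (0.3, 0.5):
    y = y_plus * nu / u_tau           # wall distance of the first cell centre
    print(f"y+ = {y_plus}: first cell at y = {y * 1e6:.1f} micrometres")
```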
All simulations are performed for both the evaporating and non-evaporating cases. In the case of non-evaporating particles, the predictions are carried out without consideration of the phase transition of the dispersed phase but with consideration of the thermal-physical properties of the two-phase flow. This case is artificial, but it allows us to analyze the effect of dispersed-phase evaporation on the transfer processes and heat transfer in the separated two-phase flow. The radial profiles of the mass fraction of the dispersed phase are shown in Fig. 1a for the cases of evaporating droplets (continuous lines) and non-evaporating particles (dashed line). A sharp decrease in the concentration of particles is observed due to their dispersion over the tube cross-section in the separation region. It should be noted that the low-inertia droplets at low Stokes numbers Stk < 1 (d1 < 50 μm) (lines 1 and 2) are entrained well in the separated flow and are present in almost the entire tube cross-section. The near-wall zone of the pipe (r/H > 1.2) is practically free from particles due to intense evaporation. The large droplets with Stk = 4 (d1 = 100 μm) are not involved in the recirculation flow region but are found mainly in the shear mixing layer and in the flow core. In the case of non-evaporating particles, their concentration in the wall region of the tube is higher than in the case of the evaporating droplets. The profiles of the turbulent kinetic energy (TKE) in the two-phase separated flow along the pipe are shown in Fig. 1b, where k0 is the turbulence of the gas phase in the single-phase flow (without particles or droplets). In the two-phase flow, the TKE is suppressed by the finely dispersed phase in comparison with the single-phase flow. This effect increases with the size of the dispersed phase, which is consistent with the data of studies on separated flow with solid particles [1,2,7]. Small particles (Stk < 1) are involved well in the turbulent motion of the gas and take some energy away from the carrier medium. The turbulence suppression in the case of evaporating droplets is not observed in the wall zone and k/k0 ≈ 1, because this area is free of particles and the level of turbulence tends to the corresponding value for the single-phase flow regime. In the flow core, where evaporation is insignificant, a decrease in gas turbulence is observed (up to 20%). It should be noted that the maximal effect of droplet evaporation is noticeable in the wall region of the pipe (the difference is up to 10%). An increase in TKE is observed and k/k0 ≈ 1 far from the cross-section of the flow detachment (x/H = 15), because most of the droplets have already evaporated. The effect of the Stokes number (the initial size of droplets) on the magnitude of the maximal Nusselt number is shown in Fig. 2.
The local Nusselt number is calculated by the following relationship: Nu = qWH/[λ(TW − Tm)], where λ is the coefficient of heat conductivity of the gas flow, and TW and Tm are the wall temperature and the mean-mass temperature of the gas flow, respectively. The increase of the droplets' mass fraction intensifies the heat transfer significantly. This effect cannot be explained by flow turbulization on addition of the dispersed phase. The main reason for the heat transfer intensification is the use of the latent heat of droplet evaporation. This effect increases with an increase in the amount of dispersed phase. Two characteristic areas in the distribution of Numax in the investigated range of particle sizes should be noted. An increase in the initial droplet diameter has a more complex influence on heat transfer in the two-phase separated flow. Heat transfer enhancement is observed in the range of small particles, and then a drastic decrease occurs at Stokes numbers Stk > 0.25. The dispersed phase is not involved in the separated motion for the largest droplets with diameter d1 = 100 μm and Stk = 4. The heat transfer approximately corresponds to the value typical for the single-phase flow in the recirculation zone. The heat transfer rate increases due to the evaporating droplets in the near-wall zone behind the point of flow reattachment. This behavior of the maximal heat transfer is caused by the effect of various factors: more intense evaporation of small droplets, a decrease in the rate of their inertial precipitation, and weaker involvement of large particles in the separated flow. The increase in mass concentration of the dispersed phase causes a significant increase in heat transfer between the two-phase flow and the wall as compared to the single-phase flow (Numax = 51). It should be noted that the maximal increase in heat transfer in the gas-droplet flow occurs in the region of small particles, which intensively penetrate into the recirculation zone and thus reach the pipe wall. This work is supported by the Russian Science Foundation (Project No. 14-19-00402).
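The Nusselt-number definition above can also be read backwards: for the stated wall heat flux and step height, a given Nu implies a wall-to-mean temperature difference. The sketch below evaluates this for the quoted single-phase maximum (Numax = 51) and for a two-fold enhancement; the gas heat conductivity assumed here is an illustrative value, not one taken from the paper.

```python
# Quantities quoted in the text
q_w = 1000.0     # uniform wall heat flux, W/m^2
H = 0.02         # step height, m
lam = 0.026      # assumed heat conductivity of the gas, W/(m K)

def nusselt(delta_T):
    """Local Nusselt number Nu = q_W * H / [lambda * (T_W - T_m)]."""
    return q_w * H / (lam * delta_T)

def delta_T_for(nu_target):
    """Wall-to-mean temperature difference implied by a given Nusselt number."""
    return q_w * H / (lam * nu_target)

# Single-phase maximum quoted in the text (Nu_max = 51) and a two-fold enhancement
for nu in (51.0, 102.0):
    print(f"Nu = {nu:.0f} -> T_W - T_m = {delta_T_for(nu):.1f} K")
```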
2,469.6
2016-01-01T00:00:00.000
[ "Engineering", "Environmental Science", "Physics" ]
The incremental information content of AC 201 inflation-adjusted data In this article an attempt is made to examine the extent to which inflation-adjusted income figures (derived from AC 201 data) contain information not included in the historic figures currently reported. The usefulness (or information content) criterion is examined from the aggregate market perspective through an empirical examination structured to determine which set of figures best represents the information impounded in share prices. The research design incorporates a two-stage regression approach which permits a determination of the incremental explanatory power of collinear variables. The results obtained suggest that there are information content differences between inflation-adjusted and historic data as measured through the association with share returns. Only for those companies affected to a lesser extent by the effects of inflation could no discernible difference in information content be detected. Bar this exception, the results appear to support the hypothesis that inflation-adjusted data contain information that is on aggregate not reflected in the financial reports currently produced. However, the contention that historic income also possesses information beyond that provided by inflation-adjusted income is not supported by the results. The research findings have important implications for reporting policy in SA regarding the future of inflation accounting requirements and seem to suggest that the SA Institute of Chartered Accountants should seriously consider making a form of inflation accounting mandatory. S. Afr. J. Bus. Mgmt. 1986, 17: 1-6
Introduction The debate as to whether companies should be required to report inflation-adjusted data, and if so, what the nature of the requirements should be, has drawn renewed attention in recent years. This has occurred because of the high rates of inflation since the early seventies and the prospect that relatively high levels may persist in South Africa (SA) in the future. This study focuses on a particular form of accounting for changes in price levels - that recommended by the then National Council of Chartered Accountants (SA) (now called the SA Institute of Chartered Accountants) in Guideline AC 201 (formerly Guideline 4.003) of August 1978. In publishing Guideline AC 201, the accounting profession in SA paved the way for inflation-adjusted data to be disclosed. However, to date only a few companies have experimented with the recommendations of this guideline. A possible reason for this is that no company will of its own free choice publish inflation-adjusted results if this could adversely affect its market rating. The SA Institute of Chartered Accountants may take further action once evidence has been submitted that inflation-adjusted figures are useful to the users of accounting data, i.e. that inflation-adjusted income conveys information beyond that which is currently available in historic cost reports. An investigation into the usefulness of the alternative/supplemental sets of figures is therefore a research question with important implications in SA for public policy regarding the future of inflation accounting requirements. The purpose of this study is to submit evidence on this question. An analysis was therefore undertaken to explore the extent to which inflation-adjusted data contain information not included in the historic data currently reported (i.e.
the information content of the inflation-adjusted data set which is in excess of that which is contained in the related historic income figures). The usefulness (or information content) criterion was examined from the aggregate market perspective through an empirical examination structured to determine which set of figures best represents the information impounded in share prices. Data The initial set of companies considered in this study consisted of all companies, listed in the industrial section of the Johannesburg Stock Exchange (JSE), with financial years ended in the calendar years 1975-1982. The final sample was produced by applying five sample selection criteria to the initial set of companies. Companies conforming to the criteria enumerated below, ranging from the most to the least restrictive, were excluded for research purposes: (i) companies with financial years not ending on June 30 for the entire period; (ii) holding companies that carried no stockholding, and/or where the major investment was represented by another sample constituent; (iii) companies that experienced severe structural changes, including those of which the listings were shifted from the industrial to other sections of the JSE; (iv) companies for which reasonable estimates of inflation-adjusted data could not be made readily; and (v) companies of which the listings were suspended for excessively long periods. The first requirement necessitated the exclusion of a considerable proportion of the companies. This extremely restrictive criterion was required because the methodology employed in this study necessitated the calculation of financial year-on-year differences in historic income and inflation-adjusted income respectively. These differences were then divided (or scaled) by a balance sheet deflator. A change in the financial year end of a particular company would therefore necessitate an annualization of the variables concerned. The somewhat haphazard customary adjustment procedure was avoided by restricting the sample to companies with the same financial year throughout the covered period. June 30 proved to be the most common reporting date for those companies that had maintained the same financial year over the entire period. Application of the sample selection criteria produced a sample of 59 companies. The analysis focused upon two informational variables, i.e. the annual change in historic income and the annual change in inflation-adjusted income, and their relationship with annual share returns. These variables are discussed in turn in the following paragraphs. Historic income Historic income was for the purpose of this study defined as earnings available for ordinary shareholders, based on consolidated net income for the financial period, after ordinary and foreign taxation, and after deducting outside shareholders' interests and preference dividends, but before extraordinary and abnormal items. Historic income for a group of companies was therefore based on operating profits attributable to members of the holding company. Deferred taxation was excluded from the calculation in an attempt to avoid possible distortions being introduced as a result of extreme fluctuations in this taxation component. Where the earnings of associated companies were included in a company's income statement, the historic income was based on profits exclusive of associated companies' results. The annual change in historic income was formulated as the scaled year-on-year difference HC_{i,t} = HI_{i,t} − HI_{i,t−1}, where HC_{i,t} = change in historic income of company i in period t; and HI_{i,t} = historic income of company i in period t, the difference being scaled by the balance sheet deflator described above.
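A minimal sketch of how the scaled year-on-year income-change variables could be constructed from a company-year panel is given below; the column names, the example numbers and the use of the lagged deflator are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical panel: one row per company-year with historic income (HI),
# inflation-adjusted income (RI) and a balance-sheet deflator.
panel = pd.DataFrame({
    "company":  ["A", "A", "A", "B", "B", "B"],
    "year":     [1979, 1980, 1981, 1979, 1980, 1981],
    "HI":       [100.0, 112.0, 118.0, 50.0, 47.0, 55.0],
    "RI":       [80.0, 90.0, 92.0, 42.0, 38.0, 47.0],
    "deflator": [900.0, 950.0, 990.0, 400.0, 410.0, 430.0],
})

panel = panel.sort_values(["company", "year"])
g = panel.groupby("company")
panel["HC"] = g["HI"].diff() / g["deflator"].shift()   # scaled change in historic income
panel["RC"] = g["RI"].diff() / g["deflator"].shift()   # scaled change in inflation-adjusted income
print(panel.dropna())
```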
Inflation-adjusted income Inflation-adjusted income was for the purpose of this study defined as historic income (see definition above) adjusted for the effects of changing price levels, in accordance with the recommendations of Guideline AC 201. (Inflation adjustments were generated by means of the inflation accounting model of the University of Stellenbosch Business School. The reader is referred to Archer (1980: 94-141) for a detailed discussion of this model.) The portion of the inflation adjustment accruing to outside shareholders' interests was taken into consideration in arriving at the estimate for inflation-adjusted income. If price levels did not change, inflation-adjusted income and historic income would, of course, have been identical. It should be noted that a few of the companies included in the study published inflation-adjusted data in the form of a supplementary current cost income statement. In order to facilitate uniformity in the adjustment procedure, such company-year observations were excluded. In addition, companies which employed the flip-flop LIFO (last-in-first-out) accounting method, which circumvents any reported earnings reduction that would otherwise have arisen with the use of the LIFO method of stock valuation, were also excluded. The annual change in inflation-adjusted income was formulated analogously as RC_{i,t} = RI_{i,t} − RI_{i,t−1}, where RC_{i,t} = change in inflation-adjusted income of company i in period t; and RI_{i,t} = inflation-adjusted income of company i in period t, the difference again being scaled by the balance sheet deflator. All other symbols are as described before. Share returns A cumulative abnormal return (CAR) was computed for each company for the duration of each financial year from 1976 through 1982, using intercept and slope coefficients estimated from a time series regression based on the previous two years. The initial financial-year holding period assumption was then extended to include four alternative annual holding periods (starting with September 30 through September 30 and ending with December 31 through December 31). The CAR was computed as follows. (i) Price relative returns (i.e. returns unadjusted for risk) were calculated for a company's ordinary shares on a bi-weekly basis, using the following formulation: R_{i,t} = (P_{i,t} − P_{i,t−1} + D_{i,t}) / P_{i,t−1}, where R_{i,t} = ex post (i.e. realized) return on share i in period t; P_{i,t} = closing price of share i at the end of period t; and D_{i,t} = dividend of share i in period t (i.e. ex-dividend date within period t). (ii) Returns on the market, as represented by appropriate share-price and dividend indices, were calculated in an analogous manner, where R_{m,t} = ex post return of the market portfolio in period t (the JSE Actuaries Industrial share-price and dividend indices were used as a surrogate for the market); I_{m,t} = index value at the end of period t; DI_{m,t} = dividend index of the market portfolio (expressed in percentage terms) at the end of period t; and n = number of days between the end of period t and the end of period t − 1. (iii) Because this study evaluates informational variables as they relate to the individual company, their information content should be assessed relative to changes in the rate of return on the company's shares net of market-wide effects. This step therefore involved the elimination of the overall market effects from price relative returns and the adjustment for the risk level of the share, using the familiar market model (Markowitz, 1952 and Sharpe, 1963).
The market model is R_{i,t} = α_i + β_i R_{m,t} + e_{i,t}, where e_{i,t} = residual (or abnormal) return for share i in period t; and α_i, β_i = regression parameters. All other symbols are as described before. Using the estimated regression parameters, the bi-weekly abnormal return for share i can be estimated as e_{i,t} = R_{i,t} − R*_{i,t}, where R*_{i,t} = expected return on share i in period t (R*_{i,t} = α_i + β_i R_{m,t}, evaluated with the estimated parameters). All other symbols are as described before. As the above procedure abstracts from the general market conditions and the market is believed to adjust reasonably quickly and efficiently to new information, the residuals will represent the impact of new information about company i alone. (iv) The series of bi-weekly abnormal returns was aggregated into annual measures as CAR_{i,T} = Σ_t e_{i,t}, the sum being taken over the bi-weekly intervals in period T, where CAR_{i,T} = cumulative abnormal return of company i in period T; and n_T = number of bi-weekly intervals in period T. All other symbols are as described before. Note that hereafter t will denote year t, rather than a bi-weekly interval. The CAR measures the cumulative effects of the deviations of the share returns from their normal relationship with the market (Fama, Fisher, Jensen & Roll, 1969: 8). The CAR can clearly be either systematically positive or negative (i.e. non-zero), or can display unsystematic (i.e. zero) residual behaviour. This CAR was used as the dependent variable in the two-stage regression analysis described below and, together with the HC and RC series, formed the basic data used in this study. Research methodology From an informational perspective the central issue in this study revolves around the following two questions: (i) Do inflation-adjusted income figures provide information (i.e. additional explanatory power) over and above that provided by historic income figures? (ii) Do historic income figures provide information over and above that provided by inflation-adjusted income figures? Accordingly, these are the two hypotheses that will be tested. It is important to note that the two earnings variables are not necessarily mutually exclusive regarding their respective information content. Both historic income and inflation-adjusted income share common factors that could explain the cross-sectional variation in share returns. The level of explanatory power provided by knowledge of more than one variable must therefore be compared with the explanatory power provided by knowledge of only one of the variables. To examine this issue, two-stage regression analyses were conducted. The approach adopted was similar to that employed in earlier studies and was applied to all company-year observations, i.e. a pooled cross-sectional approach. This particular procedure was adopted for the following reasons. (i) When the independent variables in a regression are correlated among themselves, intercorrelation or multicollinearity is said to exist among them. The presence of multicollinearity makes the interpretation of results difficult and misleading. In particular, the incremental explanatory power of each informational variable becomes blurred. Incorporating several earnings measures in one regression equation as independent variables would clearly result in multicollinearity. The two-stage regression approach, however, permits the determination of the incremental (i.e. additional) explanatory power of collinear variables. (ii) The earnings variables are not treated as being mutually exclusive. (iii) The magnitude as well as the sign of the earnings variables are incorporated.
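The market-model abnormal returns and their aggregation into a CAR can be sketched as follows; the estimation-window handling is simplified relative to the study (which re-estimated the coefficients from the previous two years for each holding period), and the synthetic return series are for illustration only.

```python
import numpy as np

def market_model_car(r_stock, r_market, n_est):
    """Estimate alpha/beta on the first `n_est` bi-weekly observations, then
    cumulate the abnormal returns over the remaining holding period."""
    r_stock = np.asarray(r_stock, dtype=float)
    r_market = np.asarray(r_market, dtype=float)

    # OLS fit of R_i,t = alpha_i + beta_i * R_m,t on the estimation window
    beta, alpha = np.polyfit(r_market[:n_est], r_stock[:n_est], 1)

    # Abnormal (residual) returns over the holding period
    expected = alpha + beta * r_market[n_est:]
    abnormal = r_stock[n_est:] - expected

    return abnormal.sum()   # CAR_i,T = sum of bi-weekly abnormal returns

# Toy example: 52 bi-weekly estimation observations (two years) + 26 holding observations
rng = np.random.default_rng(2)
rm = rng.normal(0.005, 0.03, 78)
ri = 0.002 + 1.2 * rm + rng.normal(0, 0.02, 78)
print(market_model_car(ri, rm, n_est=52))
```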
The incremental information content of inflation-adjusted income This section deals with an examination of the first hypothesis, namely that inflation-adjusted income does not provide information over and above that provided by historic income figures. The methodology employed is briefly summarized below. (i) The annual change in inflation-adjusted income, RC, was regressed on the annual change in historic income, HC, to obtain a residual, Z, which is by construction uncorrelated with HC: RC_{i,t} = α + βHC_{i,t} + Z_{i,t}, where RC_{i,t} = change in inflation-adjusted income of company i in period t; HC_{i,t} = change in historic income of company i in period t; Z_{i,t} = random disturbance (or residual) variable of company i in period t; and α, β = regression parameters. The ordinary least squares estimate of β was 0,99 (t value of 68,20). The value of the R² statistic was 0,92, which is significant at the 1% level. It can therefore be concluded that a significant proportion of the information content of inflation-adjusted income is also included in the historic figure. (ii) The annual cumulative abnormal return, CAR, was then regressed on HC and Z. If RC possesses information not provided by HC, the regression coefficient, β₂, on the residual, Z, should be different from zero. The null hypothesis therefore states that the regression coefficient, β₂, in the population is not different from zero (i.e. RC does not possess explanatory power not provided by HC). (iii) This hypothesis was tested using the familiar t test. Results for the second-stage regression are summarized in Table 1. It can be seen that the β₂ coefficient is insignificant at conventional levels in all the holding periods examined. One is therefore unable to reject the null hypothesis and hence cannot conclude that inflation-adjusted income figures possessed information beyond that provided by the historic figures. The incremental information content of historic income It could perhaps be argued that the former procedure was a rather severe test to impose on inflation-adjusted income. If RC and HC are highly correlated, it is not unreasonable to believe that the two variables possess a considerable amount of common explanatory power with respect to share returns (i.e. the informational variables are not mutually exclusive). In this section the two-stage model was therefore reversed and run in the opposite direction. The procedure employed was as follows. (i) HC was regressed on RC to obtain a residual, Z, which is by construction uncorrelated with RC: HC_{i,t} = α + βRC_{i,t} + Z_{i,t}, where all the symbols are as described before. This regression yielded a β estimate of 0,94 (t value of 68,20) and an R² statistic of 0,92 which, as before, is significant at the 1% level. (ii) If HC possesses information not provided by RC, the regression coefficient, β₂, on the residual, Z, should be different from zero. The null hypothesis therefore states that the regression coefficient, β₂, in the population is not different from zero (i.e. HC does not possess explanatory power not provided by RC). (iii) This hypothesis was tested using the familiar t test. Results for the second-stage regression are summarized in Table 2. Analysis of the results indicates that once again none of the β₂ coefficients were significantly different from zero at conventional levels. Therefore one cannot conclude that historic income figures possess information beyond that provided by inflation-adjusted figures.
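The two-stage procedure described here (orthogonalise RC with respect to HC, then test the coefficient on the residual Z in a regression of CAR on HC and Z) can be sketched as follows; the ordinary-least-squares helper and the synthetic data are my own, and the numbers produced bear no relation to the study's results.

```python
import numpy as np

def ols(y, X):
    """OLS with intercept; returns coefficients, their t-statistics and the residuals."""
    X = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_stats = coef / np.sqrt(np.diag(cov))
    return coef, t_stats, resid

# Synthetic pooled company-year data (for illustration only)
rng = np.random.default_rng(3)
HC = rng.normal(0, 1, 300)
RC = 0.99 * HC + rng.normal(0, 0.3, 300)   # highly collinear with HC, as in the paper
CAR = 0.5 * HC + rng.normal(0, 1, 300)

# Stage 1: orthogonalise RC with respect to HC
_, _, Z = ols(RC, HC)

# Stage 2: regress CAR on HC and the residual Z; the t-statistic on Z tests
# whether RC carries incremental explanatory power beyond HC.
coef, t_stats, _ = ols(CAR, np.column_stack([HC, Z]))
print("beta_2 =", coef[2], "t =", t_stats[2])
```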
Table 2 Incremental information content of historic income: summary of results for second-stage regressions

The results presented in the previous two sections appear to suggest that both sets of figures are substitutes for one another. In both cases the value of the R² statistic in the first-stage regression was of the order 0,90, indicating that about 90% of the variation in one income measure can be explained by the other income measure. The results of the second-stage regression indicate that the remaining variation (i.e. approximately 10%) does not significantly explain the CAR. Consequently there seems little value in requiring both historic and inflation-adjusted figures to be reported.

The effect of inflation

The foregoing results seem to indicate that little benefit would accrue to shareholders were it to become mandatory for companies to disclose both historic and inflation-adjusted income figures. Such a conclusion could, however, be an oversimplification because, in the absence of inflation, one would not expect the two sets of numbers to contain different information. Indeed, both Beaver, Christie & Griffin (1980: 145) and Gheyara & Boatsman (1980: 114) structured their studies on the supposition that information content was more interesting for companies affected to a greater extent by inflation. Because the results presented in the previous two sections were averages of the entire sample of 59 companies, it is possible that the significance of the inflation-adjusted figures was dissipated by the presence of several companies relatively unaffected by inflation. It was therefore decided to repeat the analysis after segmenting the sample into three subgroups on the following basis.

Firstly, the companies were ranked in terms of the impact of inflation on their historic income. This impact was measured by the absolute difference between a company's historic income and inflation-adjusted income. In order to obtain a relative measure, this difference was scaled by the average net asset value on a historic cost basis. The inflation impact was formulated as follows:

II_it = (RI_it − HI_it) / NAV_it

where II_it = inflation impact of company i in period t; RI_it = inflation-adjusted income of company i in period t; HI_it = historic income of company i in period t; and NAV_it = net asset value of company i at the end of period t.

The inflation impact was expressed in terms of an average figure, calculated as the arithmetic average of the series of annual figures over the duration of the research period. An average measure was used in an attempt to smooth possible wide fluctuations in the annual inflation impact as a result of factors pertaining to the economy.

The top end of the ranking comprised those companies suffering from severe inflationary pressure (i.e. companies where inflation had a large impact on historic income), whereas the tail end was made up of those that had been relatively successful in hedging inflation (i.e. companies where inflation had a small impact on historic income). Finally, the average inflation impact figures were used to partition the ranked companies into three approximately equal-sized subgroups. These groups were labelled from A to C, with the companies in group A being affected the most, and the companies in group C being affected the least, by the impact of inflation on their historic income.
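The segmentation step (compute each company's average inflation impact over the research period and split the ranking into three roughly equal groups) might look like the sketch below. The column names, the company identifier and the use of pandas.qcut for the three-way split are illustrative assumptions, not the paper's procedure verbatim.

```python
import pandas as pd

def inflation_impact_groups(df):
    """df: one row per company-year with hypothetical columns 'company',
    'RI' (inflation-adjusted income), 'HI' (historic income) and 'NAV' (net asset value)."""
    df = df.copy()
    df["II"] = (df["RI"] - df["HI"]).abs() / df["NAV"]   # annual inflation impact
    avg_ii = df.groupby("company")["II"].mean()          # average over the research period
    # Rank companies and cut the ranking into three ~equal groups:
    # A = most affected by inflation, C = least affected
    groups = pd.qcut(avg_ii.rank(method="first"), 3, labels=["C", "B", "A"])
    return avg_ii, groups
```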
For the high inflation impact group (i.e. group A) the first-stage regression resulted in an R² value of 0,87, indicating that there was a high degree of co-movement between historic and inflation-adjusted income figures. Indeed, as much as 87% of the variation in one income figure could be explained by the other income figure.

Results for the second-stage regression are summarized in Table 3. It can be seen that the incremental information content of inflation-adjusted income, β₂, is significant in both the November and December holding periods. This would be consistent with the hypothesis that, for the high inflation impact companies, inflation-adjusted income figures possess information in addition to that provided by historic figures. The fact that β₂ was insignificant at conventional levels for the June, September and October holding periods is a cause for some concern. However, Knight (1983: 118) has shown that for SA companies much of the information content in earnings numbers is unanticipated by the market. Consequently the lack of significance in the June, September and October holding periods can probably be ascribed to the fact that not all of the companies had released their preliminary reports at these stages.

The converse, however, does not appear to hold, i.e. in none of the holding periods examined did the historic figures possess information beyond that provided by the inflation-adjusted figures. Therefore, for the high inflation impact group it would appear that the two sets of figures do not provide identical information. The results in fact seem to suggest that two alternatives exist, namely: (i) that both sets of income figures should be provided; or (ii) if only one set of figures is to be provided, it should be the inflation-adjusted figures.

[Notes to Table 3: denotes significance at the 10% level; denotes significance at the 5% level.]

For the low inflation impact group (i.e. group C), the first-stage regression results yielded an R² value of 0,95, which indicates that 95% of the variation in one income measure could be explained by the other income measure. This is higher than the 0,87 for the high inflation impact group, but is to be expected, as one would anticipate a higher degree of co-movement between the two sets of income figures for the lower inflation impact group.

Results for the second-stage regression are summarized in Table 4. None of the β₂ coefficients are significant, which provides support for the hypothesis that both sets of income figures are substitutes for each other and neither possesses any incremental information.

[Note to Table 4: denotes significance at the 5% level.]

In concluding this section, it can be said that the results obtained suggest that there are information content differences between inflation-adjusted and historic data as measured through the association with share returns. Only for those companies affected to a lesser extent by the effects of inflation could no discernible difference be detected in information content. For the high inflation impact group the results appear to support the hypothesis that inflation-adjusted data contain information which is on aggregate not reflected in the financial reports currently produced. However, the contention that historic income also possesses information beyond that provided by inflation-adjusted income is not supported by the results.
Although not of central concern to this study, it is nevertheless interesting to note that in all of the second-stage regressions the β₁ coefficient was significant at the 5% level. This confirms that earnings information, whether in the form of historic income or inflation-adjusted income, does significantly explain abnormal returns of shares listed on the JSE. Moreover, an examination of the second-stage results indicates that the t statistics are substantially higher for the September, October, November and December holding periods than for the June holding period. As previously mentioned, this probably occurs because the release of the earnings information only occurs during the months September to December. These results therefore confirm those of Knight (1983: 130), who concluded that the actual release of the earnings number does provide information to shareholders; in other words, that such information is not consistently anticipated by the market.

In order to remove any possible scepticism which may cloud the results obtained from a relatively small sample, the entire statistical analysis was duplicated for a holdout sample. The second (i.e. holdout) sample was chosen using the same criteria as applied to the initial set of companies, with the exception of the first criterion, which was amended to include companies having a financial year ended on December 31 for the entire period. In total 29 companies qualified for inclusion in the holdout sample. The results obtained for the holdout sample (i.e. December year-end companies) were similar to those obtained for the original sample (i.e. June year-end companies). Hence the conclusions drawn can be considered to be valid for all companies during the period under study.

[…] the real resources needed to generate the restated data. This is true because such disclosure requirements would eliminate the need for market participants to adjust financial reports for changes in price levels, and shift the responsibility to companies and their accountants.

It should also be kept in mind that market participants and management are not the only audience of accounting disclosures. It is certainly possible that other audiences may find the disclosure of inflation-adjusted data informative. For example, legislative thinking regarding the merits of company tax on 'phantom profits' may be altered by such disclosure. In this regard it has to be borne in mind that inflation profits should not be distributed to shareholders, as capital has to be maintained in order to carry on business at the same levels as before. Similarly the fiscus should not claim taxes on these profits.

Furthermore, in a recent extensive analysis of international accounting standards, SA performed rather poorly (Sunday Times, Business Times, March 11, 1984). International Accounting Bulletin's Survey of Accounts and Accountants 1983/84 ranked SA 11th of the 17 countries where 10 or more annual reports were analysed. Of particular significance is the fact that the analysis gave SA companies a poor rating on, among others, changing prices data. It is therefore apparent that the SA Institute of Chartered Accountants should seriously consider making a form of inflation accounting mandatory.

Table 1 Incremental information content of inflation-adjusted income: summary of results for second-stage regressions
Table 3 Incremental information content of inflation-[…]
Table 4 Incremental information content of inflation-[…]
6,370
1986-03-31T00:00:00.000
[ "Economics" ]
Entropy in Multiple Equilibria , Systems with Two Different Sites † The influence of entropy in multiple chemical equilibria is investigated for systems with two different types of sites for Langmuir’s condition, which means that the binding enthalpy of the species is the same for each type of sites and independent of those that are already bonded and that this holds for both types of sites independently. The analysis makes use of the particle distribution theory which holds for each type of sites separately. We provide physical insight by discussing an Xm{AB}Xn system with m = 0, 1, ..., M and n = 0, 1, ..., N in detail. The procedure and results are exemplified for an Xm{AB}Xn system with M = 3 and N = 2. A satisfactory consequence of the results is that the eleven equilibrium constants needed to describe such a system can be expressed as a function of two constants only. This is generally valid for any Xm{AB}Xn system where the [(M + 1)(N + 1) − 1] equilibrium constants can be expressed as a function of 2 constants only. This has also implication for quantum-theoretical studies in the sense that it is sufficient to model only two reactions instead of many in order to describe the system. We have observed that it is sufficient to have two different sites in a multiple equilibrium in order to observe a characteristic of isotherms that cannot be described by Langmuir’s equation. This is a result that may be useful for explaining experimental data which otherwise have not been explained satisfactory so far. Instead of inventing adsorption models it might often make sense of describing the system in terms of multiple equilibria. Introduction We explained the influence of entropy in multiple chemical equilibria by studying the particle distribution for the conditions that the binding enthalpy of the species is the same for all sites and that it is independent of those that are already bonded [1].Consequences were discussed for the insertion of guests into the one dimensional channels of a host, for dicarboxylic acids, and for cation exchange of zeolites.The validity of the results is independent of the nature and the strength of the binding.The quantitative link between the description of multiple equilibria and Langmuir's isotherm [2][3][4] was found to provide new insight.Multiple equilibria of objects with several equivalent binding, docking, coupling, or adsorption sites for neutral or charged species play an important role in all fields of chemistry .We now investigate systems with two different types of sites, which we name Xm{AB}Xn, for the condition that the binding enthalpy of the species is the same for each type of sites and independent of those that are already bonded and that this holds for both types of sites independently.The analysis makes use of the particle distribution theory as described in ref. [1], which holds for each type of sites separately.The condition that the binding enthalpy of the species is the same for all sites and that it is independent of those that are already bonded is equivalent to the condition I. Langmuir used one hundred years ago to derive the Langmuir isotherm [2,3].We therefore name it Langmuir's condition. 
Results and Discussion

The number of distinguishable chemical objects of an Xm{AB}Xn (m = 0, 1, …, M and n = 0, 1, …, N) system is equal to (M + 1)(N + 1). It follows that the number of equilibria with X is [(M + 1)(N + 1) − 1], which is also the number of equilibrium constants. We show that Langmuir's condition, in connection with the particle distribution function, allows the (M + 1)(N + 1) − 1 equilibrium constants to be expressed as a function of only two different constants. This is a simplification which allows systems to be studied quantitatively by experimental and theoretical means which might otherwise be difficult to handle. A numerical analysis of experimental data for a system with 5 different types of sites has been carried out based on this reasoning and has allowed earlier reports on the reaction entropy of silver zeolite A to be corrected [16].

We improve the physical insight by discussing a simple Xm{AB}Xn system in detail. The notation Xm{AB}Xn represents individual particles, a grid consisting of many sites, microporous objects, or other chemical systems. The procedure and results are exemplified for m = 0, 1, 2, 3 and n = 0, 1, 2. The 11 equilibria and the corresponding equilibrium constants Ki are collected in Table 1. We apply the stoichiometric-matrix expression for evaluating these equilibria [9,10]. Details of this procedure are reported in the appendix. The result is given in Table 2. It is convenient to use the following notations to write the concentrations of the individual objects, namely Ci and also [Xm{AB}Xn], but only [X] for the concentration of X.

We have 11 equations available for expressing the 13 concentrations: Ci, i = 1, 2, …, 12 and [X]. An additional equation is available from the fact that in a closed system the total concentration of the Xm{AB}Xn species, which we name A0, is constant, as expressed in Equation (1). The concentration C12 = [{ }] can, hence, be determined using Equation (1). The concentration [X] of the ligand X that can bind to the { } is the free variable.

We need to know 11 equilibrium constants in order to describe the evolution of the concentrations Ci of the twelve species as a function of the variable [X]. This is a difficult situation and may in many cases have as a consequence that a system cannot be handled in a satisfactory way. A very important simplification arises if Langmuir's condition applies. This may often be the case sufficiently well. Langmuir's condition implies in our example that K1, K4, and K7 are equal. The same holds for K2, K5, and K8 and also for K3, K6, and K9. A further simplification follows from the application of the particle distribution function f(n, r) [1,16,30], where n is the total number of equilibria of a set and r = 0, 1, …, n − 1 counts the individual equilibria in the set. The particle distribution function describes the entropy decrease in the corresponding reaction sets, as we have discussed in detail [1]. Applying Langmuir's condition and the particle distribution function, we find the results reported in Table 3.
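The reduction from eleven constants to two can be made concrete with a small sketch. It assumes the familiar statistical-factor form f(n, r) = (n − r)/(r + 1) for the particle distribution function and assumes that K1-K9 are the three X-addition steps on the three-site part (repeated for the three occupancies of the two-site part, as Langmuir's condition requires) while K10 and K11 are the steps on the two-site part; Tables 1 and 3 of the paper give the authoritative definitions and relations.

```python
from fractions import Fraction as F

def f(n, r):
    # assumed statistical-factor form of the particle distribution function
    return F(n - r, r + 1)

def stepwise_constants(K1, K10, M=3, N=2):
    """Express all [(M+1)(N+1) - 1] stepwise constants through K1 and K10 under
    Langmuir's condition (sketch; the labeling of K1..K11 follows the assumption above)."""
    kA = [K1 * f(M, r) / f(M, 0) for r in range(M)]    # K1, K2, K3 for the M-site part
    kB = [K10 * f(N, r) / f(N, 0) for r in range(N)]   # K10, K11 for the N-site part
    # Langmuir's condition: K4 = K7 = K1, K5 = K8 = K2, K6 = K9 = K3
    return kA * (N + 1) + kB

print([str(k) for k in stepwise_constants(K1=F(9), K10=F(8))])
# ['9', '3', '1', '9', '3', '1', '9', '3', '1', '8', '2']
```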
The very satisfactory consequence of the result shown in Table 3 is that the eleven equilibrium constants can be expressed as a function of two constants only, namely K1 and K10. Inserting this into the equation shown in Table 2, we find Equations (3) and (4), where C12(X) is the concentration of { } expressed as a function of the concentration of X. This is a nice and very useful result. It allows the concentrations of the twelve species Ci, i = 1, 2, …, 12 to be studied as a function of the concentration of X by considering only 2 parameters, namely K1 and K10, instead of eleven. This also has implications for quantum-theoretical studies, in the sense that it is sufficient to model only two reactions instead of eleven in order to describe the system.

Table 3. Relation between the equilibrium constants defined in Table 1 as a consequence of Langmuir's condition and the particle distribution function.

We see, e.g., in Figure 1A that the X{AB}Xn appear only at the beginning, for small values of [X], and, even more, that only X{AB}X temporarily shows a value larger than 0.05, while X{AB}X2 always stays very small. We note that {AB} vanishes soon and that the X3{AB}Xn become dominant. [{AB}X2] and [{AB}X] always remain small. The situation changes very much in Figure 1B. The symmetry of the plot of the concentrations Ci versus the total concentration [X]tot that we observed in Figure 4B of ref. [1] has, however, completely disappeared in both cases, as seen in Figure 1A',B'. We also observe that out of the 12 species Xm{AB}Xn only a few manage to evolve significant concentrations. An example with different values of K1 and K10 is reported in the appendix.

The fractional coverage expressed as a function of the concentration [X] is of special interest, also because it can often be determined experimentally relatively easily. We show this in Figure 2.

[Figure 2 caption, partial: … and (A',B') between 0 and 20. Solid lines: amount of the objects Xm{AB}Xn. Red: m = 1, 2, 3, all n; divided by 3. Blue: {AB}Xn, n = 1, 2; divided by 2. The rectangles and the circles correspond to Langmuir's eqs. (29) and (29A) of ref. [1] with KL = K1/3 and KL = K10/2, respectively. Green: all Xm{AB}Xn except {AB}, divided by 5. Black, dashed: isotherm calculated using Langmuir's eqs. (29) and (29A) of ref. [1] with optimized values for KL. Orange, dash-dot: sum of the red and the blue curves weighted by an optimized factor.]

It is interesting but not surprising that the amount of the objects Xm{AB}Xn (m = 1, 2, 3, all n; divided by 3) can be perfectly described by Langmuir's isotherm equation. We observe the same for the concentration of {AB}Xn (n = 1, 2; divided by 2). The sum of all objects Xm{AB}Xn (m = 0, 1, 2, 3, all n), however, cannot be described by the Langmuir isotherm equation. This behaviour seems to be of general validity, as I have numerically tested for a number of representative examples. It should be possible to prove this analytically, but such a proof is not yet known. If the numerical values of K1 and K10 are equal, the system simplifies to the situation we have discussed in ref. [1]. In the other extreme, when K1 and K10 differ by orders of magnitude, the system decomposes into separate parts.
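The isotherm comparison of Figure 2 can be reproduced in outline: under Langmuir's condition each site type follows its own Langmuir isotherm with KL = K1/3 and KL = K10/2 (as stated in the caption above), and the total coverage is their site-weighted sum, which a single Langmuir expression cannot match when the two constants differ strongly. This is a sketch with illustrative K values; eqs. (29)/(29A) of ref. [1] are not reproduced here.

```python
import numpy as np

def total_coverage(x, K1, K10, M=3, N=2):
    """Fractional coverage of an Xm{AB}Xn object whose two site types each obey an
    independent Langmuir isotherm (per-site constants K1/M and K10/N)."""
    thA = (K1 / M) * x / (1 + (K1 / M) * x)    # coverage of the M equivalent sites
    thB = (K10 / N) * x / (1 + (K10 / N) * x)  # coverage of the N equivalent sites
    return (M * thA + N * thB) / (M + N)       # site-weighted total coverage

x = np.linspace(0.0, 20.0, 401)
theta = total_coverage(x, K1=9.0, K10=0.3)     # illustrative, hypothetical constants
# A single Langmuir curve KL*x/(1 + KL*x) can match theta only when K1/M and K10/N
# coincide; otherwise the composite isotherm deviates, as discussed above.
```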
Different types of explanations for isotherms that deviate from Langmuir isotherms have been developed. They are in many cases satisfactory because they have been linked to a microscopic phenomenon, but they seem to be arbitrary in other situations [6,11,18,24]. We find that it is sufficient to have two different sites in a multiple equilibrium in order to observe a characteristic that differs from Langmuir's equation, despite the fact that the latter applies for the individual parts. Writing multiple chemical equilibria could therefore be useful for explaining experimental data and for making predictions. Instead of inventing adsorption models, it might make sense to describe a system in such terms. The system may consist of one set of equivalent sites [1], two sets, as reported here, or even of several sets of equivalent sites [16].

Table 2. Concentrations Ci, calculated based on the equilibria in Table 1 and Equation (1); see appendix.
2,408
2017-11-20T00:00:00.000
[ "Physics" ]
Decreased Peripheral Naïve T Cell Number and Its Role in Predicting Cardiovascular and Infection Events in Hemodialysis Patients Patients with end-stage renal disease (ESRD) are at high risk of morbidity and mortality from cardiovascular and infectious diseases, which have been found to be associated with a disturbed immune response. Accelerated T-cell senescence is prevalent in these patients and considered a significant factor contributing to increased risk of various morbidities. Nevertheless, few studies have explicated the relevance of T-cell senescence to these fatal morbidities in ESRD patients. In this study, we designed a longitudinal prospective study to evaluate the influence of T-cell senescence on cardiovascular events (CVEs) and infections in hemodialysis (HD) patients. Clinical outcomes of 404 patients who had been on HD treatment for at least 6 months were evaluated with respect to T-cell senescence determined using flow cytometry. We found that T-cell senescence was associated with systemic inflammation. High-sensitivity C-reactive protein was positively associated with decreased naïve T cell levels. Elevated tumor necrosis factor-α and interleukin 6 levels were significantly associated with lower central memory T cell and higher T effector memory CD45RA cell levels. Decreased CD4+ naïve T cell count was independently associated with CVEs, whereas decreased CD8+ naïve T cell count was independently associated with infection episodes in HD patients. In conclusion, HD patients exhibited accelerated T-cell senescence, which was positively related to inflammation. A reduction of naïve T cell could be a strong predictor of CVEs and infection episodes in HD patients. INTRODUCTION End-stage renal disease (ESRD), considered as a public health concern, affects more than 1.5 million people worldwide (1). Patients with ESRD usually have a high risk of life-threatening comorbidities, especially cardiovascular and infectious diseases. According to the U.S. Renal Data System, ESRD patients have a 25% annual mortality rate, and almost 50% patient deaths are attributed to cardiovascular complications (2). Infection is the second leading cause of death, accounting for 35% of all-cause mortality (3). It has been proposed that chronic kidney disease may be a model of premature aging, since uremia could induce premature senescence and many aging-related complications are prevalent in ESRD patients, including those with cardiovascular diseases (CVDs) and infections (4). Recent evidence suggests that uremia can induce T-cell senescence, indicated by a lower thymic output of naïve T cells, a decline in T-cell telomere length, and an increase in differentiation toward the terminal differentiated memory phenotype; T-cell senescence is more pronounced in patients undergoing hemodialysis (HD) therapy (5,6). Compared with physiological aging, ESRD seems to have the ability to increase the immunological age of T cells by 20-30 years (7). In terms of function, T cells in ESRD patients are preactivated by secreting more inflammatory cytokines in the resting state, leading to persistent inflammation and providing a breeding ground for CVD (8,9). On the contrary, T cells in ESRD patients have diminished reaction toward pathogen stimulation, with susceptibility to apoptotic death after activation (9), reduced humoral response to vaccination (10), and impaired maintenance of specific T cell memory (11), resulting in a high incidence of infection. 
Hence, interventions targeting T cell function could improve morbidity and mortality in such patients. While it is well-recognized that ESRD-related T cell dysfunction is prominent, few studies have explicated the relevance of T-cell senescence to the fatal morbidity resulting from ESRD, and existing results are based on different markers of immune senescence. It has been reported that telomere length shortening is associated with a higher risk of death, reduced thymic output is associated with severe infection episodes, and terminally differentiated CD8 + T cell expansion is closely linked to accelerated atherosclerosis in ESRD patients (5). CD4 + CD28 -T cells, as a terminal differentiated memory phenotype, were independently associated with the presence of atherosclerotic disease in ESRD patients (12). Cytomegalovirus (CMV) infection is considered to act as a critical factor for accelerated T-cell senescence in ESRD patients by exacerbating the selective depletion of naïve T cells and clonal expansion of memory T cells (13). However, since most patients with ESRD are CMVseropositive (14,15), it is difficult to distinguish the CMVindependent effects of T-cell aging in ESRD. The question that then arises is whether it would be possible to find one consistent marker for evaluating overall immunological age, assessing the risk of multiple complications, and aiding early intervention in ESRD patients. Depletion of naïve T cells, the most significant and consistent change reported during aging, is also prevalent in ESRD (8,15). Our previous study findings revealed that a decrease in the number of naïve T cells is significantly associated with increased mortality in HD patients (16), supporting the idea that selective reduction of naïve T cell is a critical feature in this population and may impact clinical outcomes profoundly. In the present study, we prospectively analyzed whether T-cell senescence is associated with cardiovascular events (CVEs) and infectious episodes in HD patients and aimed to find valuable markers for clinically evaluating immunological aging and predicting risk of ESRD. Study Population This current study included patients who had been on HD treatment for at least 6 months in the Blood Purification Center, Department of Nephrology, Zhongshan Hospital, Fudan University. Patients were enrolled from August to September, 2016 and followed weekly. Individuals who experienced CVE or infection within 3 months were excluded. Those with evidence of hematological diseases, rheumatic diseases, active malignancies, and history of human immunodeficiency virus infection or using immunosuppressants were also excluded. Follow-up lasted for 2 years and ended in October 2018. During follow-up, CVEs and infection episodes were documented. CVEs were defined as coronary artery disease, congestive heart failure, stroke, and peripheral arterial occlusive disease. Infection episodes were defined as infectious diseases requiring regular intravenous antibiotics in hospital or emergency department. We obtained blood samples from the arterial site of vascular access before the start of the HD session in the middle of the week. Anti-CMV-IgM and IgG antibodies were detected using the Roche Elecsys assay. All procedures were performed at the Department of Clinical Chemistry, Zhongshan Hospital, Fudan University using standard methods. Written informed consent was obtained from all patients that met the inclusion criteria. 
This study was approved by the Ethics committee of Zhongshan Hospital, Fudan University.

Statistical Analysis

All data are expressed as mean ± standard deviation or median (interquartile range), as appropriate. Correlations between T cell parameters and laboratory variables were tested using a non-parametric Spearman rank analysis. Free survival of CVEs and infection episodes was estimated using the Kaplan-Meier curve, and differences between groups were examined using the log-rank test. Univariate Cox regression analysis was used to identify predictors of CVE and infection. Significant predictors were subsequently added to the multivariable model, and backward stepwise Cox regression identified the most parsimonious model. The probability used for the stepwise regression was set at 0.05 for entry of variables and 0.1 for removal of variables. The results of the Cox proportional hazards analysis are presented as the hazard ratio (HR) and 95% confidence interval (95% CI). Statistical significance was considered at P < 0.05. All statistical analyses were performed using SPSS version 20.0.

Demographic and Clinical Characteristics of Patients

A total of 404 patients (248 men and 156 women) were enrolled in this study. The average age of patients was 59.4 ± 14.6 years. The median time on HD was 53 (26, 80) months. Of the 404 patients, 94 (23.3%) had diabetes mellitus and 324 (80.2%) had hypertension. The overall frequency of CVD in this cohort was 30.7%; stroke and congestive heart failure were the most prevalent complications, followed by coronary artery disease and peripheral arterial occlusive disease. The underlying kidney diseases included chronic glomerulonephritis (46.8%), diabetic nephropathy (16.8%), polycystic kidney disease (9.4%), hypertensive renal disease (3.5%), others (10.9%), and unknown (12.6%). Only one patient (0.2%) was seropositive for CMV-IgM, and 401 patients (99.3%) were seropositive for CMV-IgG. The median level of CMV-IgG was 468 U/ml, and 189 patients (46.8%) had CMV-IgG titers exceeding the upper limit of 500 U/ml. Table 1 presents the baseline characteristics of the study population.

T-Cell Senescence Is Associated With Systemic Inflammation in HD Patients

We examined the association between T cell subsets and circulating inflammatory markers at enrollment. As shown in Table 2, high-sensitivity C-reactive protein (hsCRP) was positively associated with a decreased T Naïve cell count in both the CD4+ and CD8+ T cell compartments (p < 0.05). Meanwhile, elevated tumor necrosis factor-α (TNF-α) and interleukin 6 (IL-6) levels were significantly associated with lower CD4+ T CM and higher CD4+ T EMRA levels (p < 0.001).

Decreased CD4+ T Naïve Cell Count as a Predictor of CVEs in HD Patients

During the 650 ± 176 days of follow-up, 86 patients (21.3%) experienced at least one CVE and a total of 99 CVEs were recorded. The incidence of CVE was 13.4% per year. A total of 42 patients died of CVEs, accounting for 56.8% of all-cause mortality. Furthermore, 32 patients had stroke and 14 died of it; 24 patients developed acute coronary syndrome and 12 died of it; 22 patients experienced at least one event of heart failure and 8 died of it; 12 patients developed lower extremity atherosclerotic occlusive disease and 4 died of it; and 4 patients died of sudden cardiac death. The median value of each T cell parameter was used in analyzing the correlation with CVEs.
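The time-to-event workflow summarized under Statistical Analysis (Kaplan-Meier curves, log-rank comparison, and univariate/multivariable Cox models) can be outlined with standard survival tooling. This is a sketch only: the paper's analyses were run in SPSS, the file and column names below are hypothetical, and lifelines does not automate the backward stepwise selection, which would have to be done by removing variables manually.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: 'time' (days of follow-up), 'event' (1 = CVE observed),
# 'cd4_naive' (cell count), plus candidate covariates such as 'age', 'diabetes', 'hscrp'.
df = pd.read_csv("hd_cohort.csv")
low = df["cd4_naive"] < df["cd4_naive"].median()   # median split, as described above

# Kaplan-Meier estimate for the low group and log-rank test between the two groups
kmf = KaplanMeierFitter()
kmf.fit(df.loc[low, "time"], event_observed=df.loc[low, "event"], label="low CD4+ naive")
lr = logrank_test(df.loc[low, "time"], df.loc[~low, "time"],
                  df.loc[low, "event"], df.loc[~low, "event"])
print(lr.p_value)

# Multivariable Cox proportional hazards model (hazard ratios with 95% CI)
cph = CoxPHFitter()
cph.fit(df[["time", "event", "cd4_naive", "age", "diabetes", "hscrp"]],
        duration_col="time", event_col="event")
cph.print_summary()
```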
A lower absolute number/percentage of CD4+ T Naïve as well as a higher percentage of CD4+ T EM and CD8+ T EM could significantly predict CVEs (Figure S2). When taking age into consideration, only CD4+ T Naïve cells were shown to significantly predict CVEs. In the pairwise comparison, patients with a lower CD4+ T Naïve count had a significantly higher CVE incidence in both the middle-aged [36 < age (years) ≤ 65, p = 0.014] and old (age > 65 years old, p = 0.003) groups. There was no difference in CVE incidence between middle-aged patients with a lower CD4+ T Naïve count and old patients with a higher CD4+ T Naïve count (Figure 1).

In the univariate Cox proportional hazard model, other CVE predictors included older age, history of CVD and diabetes mellitus, usage of a central venous catheter, lower serum levels of albumin, prealbumin, creatinine, and uric acid, and increased levels of white blood cell count, hsCRP, and N-terminal pro-brain natriuretic peptide (NT-proBNP) (Table 3). In the multivariate Cox hazard model, a decreased count of CD4+ T Naïve cells, along with older age, history of diabetes, history of CVD, and an elevated white blood cell count […], was independently associated with CVEs.

[FIGURE 1 | CVE-free survival curves according to age-CD4+ T Naïve group. We divided the patients into five groups according to age and CD4+ T Naïve cell count. Group 1 included young patients (age ≤ 35 years old, n = 29). Group 2L included middle-aged patients with a lower CD4+ T Naïve cell count [36 < age (years) ≤ 65, CD4+ T Naïve < 153 cells/ml, n = 120]. Group 2H included middle-aged patients with a higher CD4+ T Naïve cell count [36 < age (years) ≤ 65, CD4+ T Naïve ≥ 153 cells/ml, n = 120]. Group 3L included old patients with a lower CD4+ T Naïve cell count (age > 65 years old, CD4+ T Naïve < 110 cells/ml, n = 67). Group 3H included old patients with a higher CD4+ T Naïve cell count (age > 65 years old, CD4+ T Naïve ≥ 110 cells/ml, n = 68). Kaplan-Meier analysis revealed that the survival rate was significantly different among the five age-CD4+ T Naïve groups (p < 0.001). In pairwise comparison, patients with a lower CD4+ T Naïve count had a significantly higher CVE incidence in both the middle-aged (p = 0.014) and old groups (p = 0.003). There was no difference between middle-aged patients with a lower CD4+ T Naïve count and old patients with a higher CD4+ T Naïve count.]

[…] and infections at other sites or undocumented sites [n = 6 (6.2%)]. The median value of each T cell parameter was used in analyzing the correlation with infections. A decreased absolute count/percentage of CD8+ T Naïve and an increased percentage of CD8+ T EMRA cells were significant predictors of infection (Figure S3). Although aging contributes to both infection and depletion of CD8+ T Naïve cells, patients with a lower CD8+ T Naïve count in the middle-aged group [36 < age (years) ≤ 65] had a significantly higher infection incidence than those with a higher CD8+ T Naïve count in the same age group (p = 0.04) (Figure 2). Other infection event predictors included a history of CVD, usage of a central venous catheter, decreased levels of hemoglobin, albumin, prealbumin, creatinine, and uric acid, and increased serum levels of hsCRP, NT-proBNP, ferritin, and globulin […].

DISCUSSION

In the current study, CVEs and infections were the major complications, accounting for more than 70% of all-cause mortality.
Our study findings indicate that a decreased level of CD4+ naïve T cells is a strong predictor of CVEs, while a decreased level of CD8+ naïve T cells is a strong predictor of infectious episodes in HD patients. Loss of naïve T cells might be a hallmark of immune disturbance, leading to a more intense immune incompetence with profound clinical outcomes. In the original model of the T cell system, naïve T cells are activated in the presence of infection, after which they proliferate and generate heterogeneous classes of effector and memory cells with distinctive surface phenotypes, cytokine production abilities, and homing potentials (18). The T cell system has unique mechanisms of replenishment. Thymic T cell generation is the only way to add novel naïve T cells and enrich diversity; however, thymic function rapidly declines during adolescence and early adulthood and is quantitatively irrelevant throughout adult life (19). Instead, homeostatic proliferation is responsible for maintaining the size of the naïve T cell compartment and sustaining the richness of the T cell receptor repertoire (20). Generally, homeostatic proliferation in humans is efficient in maintaining a sizable CD4+ naïve T cell pool (21). CD8+ naïve T cells, on the contrary, are progressively lost with age, which induces a higher homeostatic proliferation of aged CD8+ naïve T cells than of aged CD4+ naïve T cells (20).

[Table footnotes: Log-hsCRP, log-transformed high-sensitivity C-reactive protein; Log-NT-proBNP, log-transformed N-terminal pro-brain natriuretic peptide; T Naïve, naïve T cell; T EM, effector memory T cell. 1 Backward conditional method was used. The model included each T cell parameter and was adjusted for age, gender, history of CVD, history of diabetes, type of vascular access, Kt/Vurea, CMV IgG, albumin, prealbumin, white blood cell, creatinine, uric acid, triglyceride, LDL-C, NT-proBNP, and hsCRP. 2 For those with CMV-IgG titers exceeding the upper limit of 500 U/ml, the numbers were regarded as 500 U/ml.]

To the best of our knowledge, this is the first study to identify a decrease in CD4+ naïve T cells as a novel CVE risk factor and a decrease in CD8+ naïve T cells as a novel infection risk factor in patients with ESRD. Notably, compelling data suggest profound lymphopenia of naïve T cells in both the CD4+ and CD8+ compartments in ESRD (15,22), although the underlying mechanism is not sufficiently understood. It is evident from the literature that there is a reduced thymic output in ESRD (15,22); however, the more important reason seems to be the failure to maintain quiescence in these cell compartments. Maintenance of quiescence is vital for naïve T cells to retain their self-renewal potential and differentiation plasticity throughout life. In circumstances of inflammation, T cells can leave their usual quiescent state and accumulate as partially differentiated cells, even in the absence of antigen stimulation (23,24). In ESRD, inflammation is significantly enhanced with uremia (25), and dialysis treatment certainly exposes these patients to microbial products and other antigenic stimulations, which can lead to accelerated activation and turnover of naïve T cells. Thus, chronic inflammation could be responsible for the decreased naïve T cells in ESRD patients, which is supported by our finding that decreased levels of naïve T cells were correlated with elevated levels of the inflammation marker hsCRP in both the CD4+ and CD8+ compartments.
In earlier studies on aging, the decline in naïve T cells and the relative expansion of memory and effector T cell populations were attributed entirely to chronic CMV stimulation (26). In this context, chronic immune stimulation could be the reason for accelerated T cell aging in ESRD patients, including at least the prevalent CMV infection, renal damage, uremic toxin retention, and increased reactive oxygen species generation. Any attempts to maintain the naïve T cell pool eventually lead to its further depletion and extinction, as such attempts result in the partial loss of stemness, incomplete differentiation, and activation of negative regulatory programs (20,27). In this context, decreased naïve T cells could represent their maladaptive behavior in ageing and even trigger a vicious cycle of aggravated immunosenescence. This is more so in the case of CD4+ naïve T cells, as their shrinkage is not common during normal aging. Besides chronic kidney diseases, rheumatoid arthritis is another pathological condition wherein there are several lines of evidence of premature aging of T cells, indicating a defective DNA repair mechanism in CD4+ naïve T cells (28,29). T cell senescence should be included in the assertion that cellular senescence is an emerging cardiovascular risk factor, along with senescence of the endothelial and vascular smooth muscle cells (30,31).

We have reported that the absolute numbers of CD8+ naïve T cells decreased significantly with age in a nearly parallel pattern in HD patients aged 20-89 years (16). In the current study, we found that the levels of CD8+ naïve T cells dropped to an extremely low level in HD patients older than 65 years, which could explain why we did not find a significant correlation between CD8+ naïve T cells and infection in these patients. In the middle-aged patients, a decreased CD8+ naïve T cell count was significantly related to a higher risk of infection episodes. This could be attributed to a decreased T cell receptor diversity in naïve T cells, which are not only vital for a primary T cell response but continue to be a resource for T cell responses to antigens previously encountered. On the contrary, chronic immune stimulation, such as that by CMV infection, can also lead to clonal expansion of the T cell population, which can severely compromise repertoire diversity. Recent studies indicated that ESRD patients present reduced T cell receptor diversity with clonal expansion (32,33), leading to a high incidence of infection in these patients.

[FIGURE 2 | Infection-free survival curves according to age-CD8+ T Naïve group. We divided the patients into five groups according to age and CD8+ T Naïve cell count. Group 1 included young patients (age ≤ 35 years old, n = 29). Group 2L included middle-aged patients with a lower CD8+ T Naïve cell count [36 < age (years) ≤ 65, CD8+ T Naïve < 63 cells/ml, n = 120]. Group 2H included middle-aged patients with a higher CD8+ T Naïve cell count [36 < age (years) ≤ 65, CD8+ T Naïve ≥ 63 cells/ml, n = 120]. Group 3L included old patients with a lower CD8+ T Naïve cell count (age > 65 years old, CD8+ T Naïve < 25 cells/ml, n = 66). Group 3H included old patients with a higher CD8+ T Naïve cell count (age > 65 years old, CD8+ T Naïve ≥ 25 cells/ml, n = 69). Kaplan-Meier analysis revealed that the survival rate was significantly different among the five age-CD8+ T Naïve groups (p < 0.001). In pairwise comparison, old patients had a significantly higher infection incidence, regardless of the CD8+ T Naïve count. Patients with a lower CD8+ T Naïve count in the middle-aged group had a significantly higher infection incidence than those with a higher CD8+ T Naïve count in the same age group (p = 0.04).]

Generated from naïve T cells, T CM cells home to lymph nodes, lack potent effector functions, and mount rapid secondary responses upon re-exposure to antigens. T EM cells migrate to peripheral tissues and display immediate effector function at the sites of inflammation. T EMRA cells are usually considered to be at an advanced stage of differentiation and are promoted by homeostatic cytokines or low-load but protracted antigen exposure (34,35). T EMRA cells share the same characteristics as senescent cells, such as possessing short telomeres, DNA damage foci, and a secretome of the senescence-associated secretory phenotype (36). Consistent with the concept that senescent cells exert systemic detrimental effects, T EMRA cells have been implicated in several chronic disease states, such as rheumatoid arthritis and acute coronary syndromes, as well as poor vaccine responses (37)(38)(39). In the current study, T EMRA cells were correlated with proinflammatory cytokines, such as TNF-α and IL-6. It is hard to distinguish causality between inflammation and expanded T EMRA cells. In the present study, a higher percentage of T EM cells was associated with CVEs, and a higher percentage of CD8+ T EMRA cells was associated with infection. However, after including naïve T cells in the model, the association between these cells and clinical events diminished, indicating that an increase in differentiated T cells might partly be due to the decrease in naïve T cells; this is partly explained by some epigenetic studies (40,41).

Overall, T-cell senescence in HD patients is markedly evident, and the contraction of the naïve T cell pool may act as a major player in developing CVEs and infections in these patients. Mechanistic studies on T cell homeostasis are needed in these patients. The central theme emerging from our findings is to alleviate chronic inflammation and promote cellular quiescence. Modifying HD therapy seems to be a feasible way to ameliorate T-cell inflammation and improve immunity against pathogens, for example by using antioxidant electrolyzed-reduced water (42) or by introducing hemodiafiltration (43).

To the best of our knowledge, only one study has investigated these T cell parameters in healthy individuals for each decade, with T cell subsets defined by co-expression of CD95 and CD62L, and reported that an increased absolute number of CD8+ memory T cells (CD95+CD62L−) correlated with increased mortality (44). Few other studies have reported the relevance of T-cell senescence to morbidity in the aged population. One study conducted in 1,072 elderly individuals from a nursing home indicated that a decreased percentage of CD4+ naïve T cells and CD8+ T EM cells was correlated with frailty (45). In a case-control study conducted in 122 women aged 65 and above, no significant correlation was observed in either naïve or memory T cells between cases and controls (46). However, these studies did not take the absolute counts of these T cell parameters into consideration, which could miss vital data on T-cell senescence in aged individuals. Thus, studying T-cell senescence in patients with ESRD can help to shed light on the alteration of immune function in the general aged population.

Our study had several limitations. First, it remains unclear whether inflammation is the cause or the consequence of T-cell senescence.
Second, T-cell senescence can be assessed by several other markers, such as telomere length, recent thymic emigrants, CD57, and CD28. This study cannot exclude the impact of these unmeasured parameters. Of particular interest is the fact that CMV infection has a substantial impact on T-cell senescence. In the current study, nearly all patients were seropositive for CMV-IgG, and half of them had an extremely high CMV-IgG titer, which could lead to underestimation of the relevance of CMV infection to T-cell senescence in HD patients. Finally, this was a single-center study, which might potentially limit the statistical power and its external validity. Hence, further studies are needed in this area to gain a deeper understanding.

[Table footnotes: CVD, cardiovascular disease; BMI, body mass index; HD, hemodialysis; Log-iPTH, log-transformed intact parathyroid hormone; Log-hsCRP, log-transformed high-sensitivity C-reactive protein; Log-NT-proBNP, log-transformed N-terminal pro-brain natriuretic peptide; T Naïve, naïve T cell; T EMRA, T effector memory CD45RA cells. 1 Backward conditional method was used. The model included each T cell parameter and was adjusted for age, history of CVD, history of diabetes, type of vascular access, hemoglobin, albumin, prealbumin, creatinine, uric acid, globulin, ferritin, NT-proBNP, and hsCRP.]

In conclusion, HD patients exhibited accelerated immunosenescence in the T lymphocyte compartment, and these changes were positively related to inflammation. A reduction of naïve T cells was shown to be a strong predictor of CVEs and infection episodes in these patients. Monitoring naïve T cells could be useful for the early identification of patients at a high risk of profound complications.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee, Zhongshan Hospital, Fudan University. The patients/participants provided their written informed consent to participate in this study.
5,871.6
2021-03-17T00:00:00.000
[ "Medicine", "Biology" ]
Designing online learning modules to conduct pre-and post-testing at high frequency We introduce a new type of online instructional design, online learning modules, that effectively allows instructors to conduct pre-and post-testing on the scale of every 20-30 minutes. This paper will focus on estimating students’ test-taking effort on the pre-test by analyzing their response time using a multi-component mixture model. In a study involving four online learning modules on mechanical energy, we found that only a small fraction of students display low test-taking effort on the pre-tests. We also show that data from frequent pre-and post-test can provide useful information regarding the instructional effectiveness of the learning materials in each OLM I. INTRODUCTION Pre and post testing is the single most well established and widely used method for measuring students' learning outcomes in physics education [1,2].However, most existing pre and post tests are designed to measure students' learning gain over an entire semester, yet most instructors must make a large number of instructional choices on a daily basis [3].This large frequency gap means that many instructional choices are being made without sufficient data on student learning.In consequence, the instructional design of many physics courses is still being shaped heavily by students' evaluation forms rather than their learning outcomes. We believe that there are two major hurdles preventing instructors from more frequently administering pre-post assessments.The first is the high cost in time and resources associated with creating, administering and grading an assessment, and the second being the concern that students will find frequent pre-testing disruptive, and as a result, will not have the motivation to take such low-stakes assessments seriously.Research on test-taking effort in low or zero stakes tests found that some students tend to "speed" through tests by answering most of the test items via "rapid guessing" [4,5].It has also been shown that students' performance on physics conceptual surveys are sensitive to testing conditions [6]. The recent surge in online education and online testing technology provides a potential solution to both hurdles.Most online platforms today allow instructors to draw autogradable problems from a large problem bank, greatly reducing the cost of creating and grading assessments.They also allow instructors to be innovative in the format in which the assessment is being administered, so as to boost students' test-taking motivation.More importantly, a well-designed online learning platform can provide rich data on student behavior, allowing researchers to observe and measure students' test-taking effort.[7,8] In this paper, we introduce a new online instructional design, online learning modules (OLM), that effectively enables prepost assessment to be conducted on a time scale of every 20-30 minutes.We show that by analyzing students' response time on assessment attempts using a mixture-model method, we can estimate the fraction of students who "speeded" through the pre-test for each OLM.This population turns out to be relatively small in the current study.Meanwhile, the pre and post-test data can provide rich information on the effectiveness of instructional materials in each module. A. 
Design of OLM

Inspired by early research in modularized instructional design and deliberate practice [9,10], each OLM contains instruction, practice problems and assessment (FIG 1), focused on developing competency in one well-defined "knowledge component" [3]. A knowledge component roughly corresponds to a single physics concept such as kinetic energy, or one aspect of a physics principle, such as conceptual understanding of conservation of mechanical energy. A series of OLMs are combined sequentially to form a learning unit on a certain topic. A student can access the next module in the sequence after he/she passes the assessment of the previous module. A unique feature of the OLM design is that students are required to attempt the assessment at least once to "unlock" the instruction and practice problems in each module. After the initial attempt, a student can choose to either study the instruction and work on practice problems, or make additional attempts on the assessment. The instruction and practice problems are locked from access during each assessment attempt. For each OLM, the initial assessment attempt serves as a pre-test, while all the attempts afterwards serve as multiple post-tests. A more detailed discussion of the OLM design will be presented in a different paper.

B. Analyzing response time with mixture models

Students' response times on test items have been shown to correlate well with their test-taking effort on not-for-credit tests [7,8]. More specifically, students who speed through a test by answering items via rapid guessing would spend significantly less time on the test, showing up as a peak close to zero in the response time distribution. The sizes of different student groups with different test-taking behavior can be estimated by fitting the response time distribution to a mixture model with G components:

f(t) = Σ_{g=1}^{G} w_g f_g(t)

where f(t) is the probability density at response time t, the w_g are the relative weights of the components, which sum to unity, and f_g(t) is the density function of component g. Each component would ideally correspond to a group with a distinct test-taking behavior, with a maximum of G groups thought to exist in the population. A student with response time t is assigned to the group g* whose component has the maximum weighted probability density among all components: g*(t) = arg max_g [w_g f_g(t)].

Since the assessment of each OLM only contains about 2-3 problems, in the current study we use the total response time on one assessment attempt, referred to as the attempt response time (ART), as a proxy for students' test-taking effort.

II. METHODS

A. Creation and Implementation of OLM sequence

For the current study we created an OLM sequence consisting of four modules on the topic of conservation of mechanical energy (CME), including: definition of kinetic energy (KE), definition of gravitational and elastic potential energy (PE), conceptual understanding of CME (CU), and problem solving using CME (PS). The learning modules are implemented in an online learning platform, Obojobo, developed by the Center for Distributed Learning at the University of Central Florida [11].

The assessment component of each module contains three sets of 2-3 isomorphic assessment problems, inspired by or directly taken from either the Energy and Momentum Conceptual Survey [12] or an exam review instrument developed by the PER group at the University of Illinois [13].
Students are presented with one of the three sets on each assessment attempt, in fixed order. After each attempt, students are informed of the correctness of their answer to each problem, but not the correct answer itself. A student passes an assessment when he/she can correctly answer all questions on a single attempt. Since students cannot access subsequent modules before passing the assessment, they are allowed 20 attempts on each assessment in the current experiment. The instructional component consists of instructional text and images, interleaved with practice problems which provide students with wrong-answer feedback and the problem solution after each attempt.

B. Experiment Setup

Student subjects were recruited from a calculus-based introductory mechanics course at a large south-eastern public university. Subjects were given access to the OLM sequence as an exam review tool one week before the midterm exam that covered CME. No course credit was assigned for completing the OLM sequence.

C. Data Collection and Analysis

Click-stream data from subjects were collected from the Obojobo platform after the experiment and analyzed using the software suite R [14]. ART is defined as the time between the start and end of an assessment attempt, marked by two distinct mouse-click events. Analysis of ART on the initial attempt using a mixture model is conducted with the R package mixtools [15].

III. RESULTS

A total of 77 students registered for the study, of which 75 launched the assessment of the first module. We first attempted to model the distribution of ART of the initial assessment attempt using a two-component log-normal distribution, following the method outlined in [7]. However, for three of the four cases this method resulted in an unexpected best-fitting model in which one of the two components accounted for both the "speeded" group on the left and the "slow" group on the right, which contains students who took an exceptionally long time to complete the assessment (the red curve in FIG 2). This result prevented us from estimating the size and mean ART of the "speeded" group alone, and there is no reason to believe that the "speeded" and "slow" groups have similar test-taking effort. Adding a third component to account for the "slow" group failed to resolve the problem and resulted in similar outcomes. To properly separate the "speeded" and "slow" groups, we adopted a 3-component normal distribution mixture model of the form:

f(t) = w_1 N(t; μ_1, σ_1²) + w_2 N(t; μ_2, σ_2²) + w_3 N(t; μ_3, σ_3²)

The three components were intended to capture the "speeded", "normal" and "slow" test-taker groups, respectively. To further exclude the impact of exceptionally long ART data, which is highly non-normal, the longest 10% of ART for each assessment is excluded from the analysis. This new model is likely to improve the validity of the method, as it improves the fitting of the peak close to zero in the data. The resulting three-component models are plotted in FIG 3 and the parameters displayed in TABLE 1. The estimated number of "speeded" test-takers is less than 20% of the population for module PE, and < 15% for the other three modules. In FIG 4, we plot the number of students who attempted each module, grouped by the number of attempts taken to pass the module. It is worth noting that the problem sets repeat every three attempts, meaning that students who passed the assessment on > 3 attempts benefited from previous attempts.
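The three-component fit and the subsequent assignment of each student to the "speeded", "normal" or "slow" group (via the maximum weighted component density) can be sketched with standard mixture-model tooling. This is an illustration with scikit-learn rather than the R mixtools package used in the study; the 10% trimming and the fit on raw ART follow the description above, and whether to model ART or log(ART) remains a modelling choice.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_initial_attempts(art_seconds):
    """Fit a 3-component normal mixture to attempt response times (ART) on the
    initial assessment attempt and label each attempt speeded/normal/slow."""
    art = np.asarray(art_seconds, dtype=float)
    art = art[art <= np.quantile(art, 0.90)]            # drop the longest 10% of ARTs
    X = art.reshape(-1, 1)
    gmm = GaussianMixture(n_components=3, random_state=0).fit(X)

    # Order components by mean: smallest mean = speeded, largest = slow
    order = np.argsort(gmm.means_.ravel())
    names = {order[0]: "speeded", order[1]: "normal", order[2]: "slow"}

    # predict() picks the component with the largest weighted (posterior) density,
    # matching the group-assignment rule described in the mixture-model section
    labels = [names[g] for g in gmm.predict(X)]
    weights = {names[g]: round(w, 3) for g, w in enumerate(gmm.weights_)}
    return labels, weights   # the weights estimate the relative size of each group
```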
DISCUSSION We have shown that the size of the "speeded" test-taker population on frequent OLM pre-tests can be estimated using a mixture-model analysis method, and that this population is relatively small for the OLMs involved in the current study. Several design features of OLMs might have contributed to the small number of "speeded" test-takers. First, the fact that students can directly proceed to the next module without having to go through the rest of the current module may have provided the internal motivation for high test-taking effort. Second, the short length of each pre-test may have prevented the decline of test-taking effort, as effort has been shown to decrease with the length of the test in some cases [5]. Finally, the multiple-attempt design of OLMs might have created a game-like environment, in which the assessment is viewed as a challenge similar to the level boss in a video game. However, we must also note that the experiment was conducted as a review unit after classroom instruction on the content, and that participation was voluntary. Having been exposed to the content before could have boosted the confidence of students on the pre-test, although the highest first-attempt passing rate among all four modules is less than 20%. In addition, unmotivated students might simply have dropped out of the study altogether without making an initial attempt. It is possible that the "speeded" test-taker population will increase if the OLMs are assigned for credit in a course. Nonetheless, the OLMs enabled us to closely monitor students' test-taking effort and to study how it changes under different test administration conditions. To the best of our knowledge, this is the first study to measure students' test-taking effort on a physics pre-test using students' response times. More importantly, the multi-attempt pre- and post-assessments of OLMs not only measure the level of mastery for each knowledge component, but also measure the effectiveness of the instructional materials in each OLM. As shown in FIG 4, modules KE and PE have higher effectiveness, as reflected by the number of students who passed the modules in fewer than 3 attempts (dark blue). In contrast, few students passed module PS after studying the module, indicating that its instructional materials require significant improvement. The data also show that more students tend to drop out on the first and last modules of the sequence. This might be caused by a lack of incentive for students to complete the modules, although the real reason will be an interesting topic for future research. By providing rich data on students' learning from online instructional materials on the scale of 20-30 minutes, OLMs can serve as a valuable complement to traditional pre/post concept tests, which measure learning gains from classroom instruction over much longer timescales. V.
LIMITATIONS AND FUTURE DEVELOPMENTS We noticed a few limitations in the current implementation of OLMs which can be improved in the future. For one, to ensure that every student could access all four modules, we had to allow 20 attempts on every assessment so that essentially every student could pass. In future implementations, we will allow a smaller number of attempts, and let students access subsequent modules once all the attempts in the previous module are used up. A second limitation is that the current mixture-model analysis relies on a pre-determined number of components. In future implementations with a larger sample size, an iterative bootstrapping likelihood ratio test can be used to determine the optimum number of components for a given distribution, based on fitting indices such as AIC and BIC [15]. Furthermore, in future studies the validity of the data analysis method can be further examined by surveying students about their test-taking effort, as is done in [16]. Finally, it will be an interesting future direction to study how students' test-taking behavior changes when OLMs are assigned for course credit, as well as how negative effects of over-testing can be avoided if we wish to use OLMs more extensively as a tool for formative assessment. FIG 2: Example of a 2-component log-normal fit of student ART data on the initial attempt. FIG 3: 3-component log-normal fit of ART on the initial assessment attempt. FIG 4: Number of students passing each module, grouped by number of attempts. TABLE 1: Resulting parameters of the 3-component normal distribution mixture model fit. Coding for the three components: 1 = speeded, 2 = normal, 3 = slow
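As a sketch of the model-selection idea mentioned in this section, one could compare mixtures with different numbers of components using an information criterion; the Python snippet below does this with BIC on made-up data. The study itself used R/mixtools, and a bootstrap likelihood ratio test would be the more rigorous choice.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical ART sample mixing a small fast ("speeded") group and a larger slow group.
art = np.concatenate([rng.normal(20, 5, 30), rng.normal(250, 60, 120)]).reshape(-1, 1)

# Fit mixtures with 1-4 components and keep the one with the lowest BIC.
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(art) for k in range(1, 5)}
bics = {k: m.bic(art) for k, m in fits.items()}
best_k = min(bics, key=bics.get)
print(bics, "-> chosen number of components:", best_k)
```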
3,180.8
2018-03-01T00:00:00.000
[ "Computer Science" ]
Antimicrobial resistance among clinically relevant bacterial isolates in Accra: a retrospective study Objective The aim of this study was to determine the antimicrobial resistance pattern of bacterial isolates from different specimens at various hospitals and private diagnostic service laboratories in Ghana. Results Retrospective data on culture and sensitivity test results from 2016 were extracted from the microbiology record books of six laboratories in Accra, Ghana. The data included type of clinical specimen, sex of patient, name of bacterial isolate and antibiotic resistance profile. Resistant isolates were obtained from 16.6% of the 10,237 cultured samples; however, the proportions of resistant isolates varied significantly between laboratories. High resistance towards tetracycline, ampicillin, cotrimoxazole and cephalosporins, but low resistance towards amoxiclav and aminoglycosides, was observed. This study identified E. coli and Staphylococcus species as the major resistant bacteria from clinical specimens in Accra; the highest prevalence of the isolates was found in urine specimens in all six laboratories (69.1%, n = 204; 52.6%, n = 36; 52.3%, n = 350; 37.9%, n = 298; 53%, n = 219; 62.1%, n = 594) and in female patients (81.4%, 50% and 69.5%). Regular surveillance and local susceptibility pattern analysis are extremely important in selecting the most appropriate and effective antibiotic for the treatment of bacterial infections. Electronic supplementary material The online version of this article (10.1186/s13104-018-3377-7) contains supplementary material, which is available to authorized users. Introduction One of modern medicine's greatest achievements has been the production of antimicrobials against disease-causing microbes, but after more than 70 years of widespread use, these therapeutic agents have gradually lost their potency [1]. Antimicrobial resistance (AMR) is now a serious global health concern, causing problems in the treatment and prevention of infections. These microorganisms, especially bacteria, cause some of the most common infections in different settings: in the community, in hospitals, or transmitted through the food chain [2]. Antibiotics are among the most commonly prescribed drugs in hospitals, and in developed countries about 30% of hospitalized patients are treated with these drugs [3]. Antibiotic use in Africa is progressively on the rise, and the availability of un-prescribed antibiotics is part of the problem. A study combining data from various African countries revealed that approximately 90.1% of cases of acute illness sought care outside the home, 36.2% took an antibiotic medication, and over 30% of the individuals acquired antibiotics without a prescription [4]. Several studies from various African countries have furthermore shown high levels of antibiotic-resistant bacterial pathogens [5][6][7][8][9]. In spite of the ongoing research on antimicrobial resistance, there are still no indications that the situation is abating. In order to formulate an efficient AMR control plan, it is crucial to have a clear view of the current situation so as to determine when, how and where to initiate control measures. The present study assessed a 12-month trend of antimicrobial resistance in clinically relevant bacterial isolates in Accra, Ghana.
Study design This study was a retrospective analysis of miscellaneous clinical samples that were tested for bacterial presence, with subsequent susceptibility testing, dated from January to December 2016, from six locations comprising three hospitals and three private diagnostic service laboratories. The study locations include Trust Specialist Hospital (TH), Holy Trinity Medical Center (HTMC), LA General Hospital (LAGH), Patholab Solutions (Ghana) Limited (PSGL), G2 Medical Laboratory (G2ML) and Mediplast Diagnostic Center (MDC). All locations are situated in the city of Accra, Ghana, and the data were sampled from January to May 2017. Data extraction The clinical information extracted from the six laboratories included the type of sample analyzed, the name of the pathogens isolated, the names of the antibiotics used for susceptibility testing, and the susceptibility results as recorded in the laboratory report. All information was confirmed by each laboratory technician and reconfirmed by the chief biomedical scientist. Laboratory analysis, clinical samples and collection of bacterial isolates Several types of clinical samples were cultured, including urine, blood, sputum, urethral smear, pus and seminal fluid, and swabs from various body sites (vagina, ear, throat, umbilical cord, eye and wound). All the laboratories sampled for the current study employed similar standard microbiological culturing techniques. Bacterial identification All the laboratories performed similar microscopic identification as previously described [10]. For Gram-positive cocci, tube coagulase and catalase tests were done for species differentiation. Further biochemical tests included lactose fermentation, indole, citrate utilization, urease and Triple Sugar Iron reaction to ascertain the biochemical characteristics of the Gram-negative isolates. Two of the laboratories (Trust Hospital and Mediplast Diagnostic Center) also deployed the BBL Crystal Panel Viewer (Becton-Dickinson, Maryland, USA) for further identification of isolates. Antimicrobial susceptibility Antimicrobial susceptibility tests were performed by all six laboratories using the Kirby-Bauer disk diffusion method [11] and interpreted using the Clinical and Laboratory Standards Institute guidelines [12]. Gram-positive and Gram-negative antibiotic disks were selected for Gram-positive and Gram-negative bacterial isolates, respectively. Each laboratory used disks manufactured by Abtek Biologicals, UK, and Biomark Laboratories, India. The disks and their concentrations in micrograms ... Data analysis Collected data were entered into Microsoft Excel and loaded into the Statistical Package for the Social Sciences (SPSS, version 20) for analysis. Proportions of predominant isolates and antibiotic resistance profiles were compared using the Chi-square test. The Pearson correlation test was used to assess associations among locations and isolates in relation to the resistance profile of the isolates at a critical probability P < 0.05. Results Records of resistant bacterial isolates and their antibiotic resistance profiles, cultured from January to December 2016 in six microbiology laboratories, were extracted. These laboratories include three hospitals, namely Trust Hospital (TH), LA General Hospital (LAGH) and Holy Trinity Medical Center (HTMC), and three private diagnostic service laboratories, Mediplast Diagnostic Center (MDC), Patholab Solutions Ghana Limited (PSGL) and G2 Medical Laboratory (G2ML). In the study period, a total of 10,237 samples were cultured and 1701 (16.62%) resistant isolates were obtained.
However, the proportions of antibiotic-resistant bacteria differed significantly from one laboratory to the other, with PSGL samples generating the highest proportion of resistant isolates (33.08%, P < 0.05), whereas isolates recovered at G2ML showed the lowest level of resistance (5.51%, P < 0.05) (Table 1). Across the six labs, nineteen different specimen types were taken, and statistical analysis revealed that, in all laboratories, urine samples were significantly more likely to yield antibiotic-resistant bacteria than all other sample types (P < 0.05, Additional file 1). Samples were taken from female and male patients in all six study areas, but only three locations provided data on sex groups. A two-by-two comparison revealed, for TH and LAGH, that female sex was significantly associated with infection by antibiotic-resistant bacteria compared to males (P < 0.05, Table 2). MDC had equal proportions (50%) of resistant bacteria from male and female patients (Table 2). Records of the bacterial species and their resistance patterns can be found in Additional file 2. Here we summarize ... Discussion Antibiotic-resistant bacterial infections have become a threat, in particular in developing countries, but to obtain an effective treatment plan it is vital to have an overview of the current resistance level. In this study, data on culture and sensitivity results from six microbiology laboratories were extracted. Within the entire period, each of the six laboratories reported varying numbers of antibiotic-resistant bacterial strains, ranging from 36 to 594 resistant isolates. Statistical analysis revealed that samples from PSGL generated the highest proportion of resistant isolates (33.08%). There is therefore a possibility that patients living around or attending PSGL, which is situated in a densely populated residential area, are more likely to be infected with antibiotic-resistant bacteria than patients who attended the other laboratories. Isolates recovered at G2ML, located in an area which primarily covers a business district, demonstrated the lowest level of antibiotic resistance compared to the rest (5.51%). Among the six labs, 19 different types of clinical specimen were processed, but the highest level of antibiotic-resistant bacteria was found in the urine samples in all laboratories. The proportions of resistant isolates ranged from 37.9% up to 69.1%. High-level resistance (66.7%) in urine has also been shown at the Korle-Bu Teaching Hospital in Ghana [13]. In Sierra Leone, 85.7% of multidrug-resistant isolates were identified in urine specimens [14]. Furthermore, the most prevalent resistant bacteria isolated from the various specimens in all the laboratories were E. coli and Staphylococcus. E. coli is the primary etiologic agent causing urinary tract infection, accounting for 90% of the cases [15]; however, in this study we did not acquire data on the disease state of patients. But our data showed that female patients from TH (81.37%) and LAGH (69.46%) had the highest level of resistant bacteria compared to males, an observation in line with previous reports [5,16,17]. (Table 1: Comparison of antibiotic resistance between laboratories using the Chi-square test. Proportions followed by different letters in a column indicate differences in the proportion of resistant isolates at α < 0.05.) Often, high levels of S. aureus are isolated from different sites of infection, probably due to the fact that this bacterium is part of the normal flora of the skin and gut, but can infect breaches in the skin. Moreover, S.
aureus is often found in hospital settings, increasing the risk of infections. Due to the known risk of bacterial spread in hospitals, and the lack of typing data, we cannot rule out that some of the resistant bacteria isolated from different specimens may in fact be the same clone. Gram-negative bacteria accounted for 65.4% of the resistant bacteria, whereas Gram-positive bacteria accounted for 34.6%. Overall, S. aureus, E. coli, Citrobacter spp., Pseudomonas spp. and Enterobacter spp. from all the laboratories showed high resistance against tetracycline, ampicillin and cotrimoxazole. Similarly high resistance to these antibiotics in Ghana has been reported previously [5,18]. These are also regarded as the most widely used antibiotics in developing countries, as they are considered inexpensive and generally have broad-spectrum activity [19]. We also observed high resistance of Klebsiella spp. to ceftriaxone and cefuroxime, which has also been observed in a recent study from Rwanda [20]. But resistance to amoxiclav, amikacin and chloramphenicol was low across most of the laboratories, except at G2ML, where resistance towards amoxiclav (augmentin) was notably high.
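As an illustration of the between-laboratory comparison described in the Data analysis section, the Python sketch below runs a Chi-square test on a 2 x 6 contingency table of resistant versus non-resistant isolates. All counts are placeholders rather than the study's data, and the study itself performed this analysis in SPSS.

```python
import numpy as np
from scipy.stats import chi2_contingency

labs = ["TH", "HTMC", "LAGH", "PSGL", "MDC", "G2ML"]
# Placeholder counts per laboratory (resistant isolates, total cultured samples).
resistant = np.array([120, 40, 300, 280, 200, 560])
cultured = np.array([900, 350, 1700, 850, 1500, 4937])

# 2 x 6 contingency table: row 0 = resistant, row 1 = non-resistant.
table = np.vstack([resistant, cultured - resistant])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```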
2,368.8
2018-04-25T00:00:00.000
[ "Medicine", "Biology" ]
RESEARCH ON THE LAYERING A* ALGORITHM FOR REAL-TIME NAVIGATION Addressing the existing problems in embedded navigation applications, this paper designs a hierarchical search A* algorithm based on the transformed road network to meet the needs of real-time navigation. In the algorithm, a hierarchical search strategy is applied to route planning over large areas, while a two-pass (binary) search A* algorithm based on the transformed road network is applied to the path computation; this algorithm can handle intersection turn restrictions and node weights with little storage space and a fast search speed. In practice, the algorithm is shown to meet the technical requirements of real-time navigation in both computing speed and route rationality. I. INTRODUCTION With the development of embedded GIS and related navigation technologies, many embedded GIS products with navigation functions have appeared; these products provide real-time path planning, voice guidance and other functions that meet users' basic navigation needs [1]. In practical applications, however, they have also revealed shortcomings such as slow data access, low path-planning efficiency over large areas, and unreasonable route planning that does not consider traffic restriction factors [2]. It is therefore necessary to study path planning algorithms that can meet the requirements of real-time navigation in a resource-limited embedded environment. II. IMPLEMENTATION STRATEGY OF PATH PLANNING In actual driving, as long as a short route is easy to find, drivers will generally choose to drive on good high-grade roads rather than on poor low-grade roads that are prone to traffic jams and have poor capacity, unless the starting point or end point lies on a poor low-grade road. So in the path calculation process, a hierarchical search strategy can be adopted: search the low-grade roads in the vicinity of the starting point and end point, and search the high-grade roads in the midway. In addition, for path planning within a city or a small local area, the amount of data is small, so it is usually sufficient to load the road network data into memory once and then use a conventional path planning algorithm to calculate the optimal path. In most cases, however, the path planning calculation is used for vehicle navigation over a nationwide area with a huge road network, so a reasonable path calculation strategy must be developed according to the regional scope of the navigation application. Based on the above considerations, in the path calculation the study first determines whether the calculation covers a local small range or a large range, according to the starting point and end point, and then adopts the corresponding calculation strategy, as sketched below: (1) Path calculation over a local small range: roads of all grades near the starting point and end point are selected, and higher-grade roads are selected in the midway, to constitute the network for the optimal path calculation. (2) Path calculation over a large range: this mainly uses the hierarchical search strategy [3][4]; first, the path is searched in the high-grade road network; then a local refined optimal path search is conducted near the starting point and end point using the small-range method.
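A rough Python sketch of this strategy is given below; the road representation, grade coding and distance helpers are hypothetical, and the radii are illustrative rather than values from the paper.

```python
import math

def near(point_a, point_b, radius_km):
    # Hypothetical helper: straight-line proximity test between two (x, y) points in km.
    return math.dist(point_a, point_b) <= radius_km

def select_subnetwork(roads, start, end, local_radius_km=5.0, large_range_km=50.0):
    """Select the roads used for the optimal path calculation.

    `roads` is assumed to be a list of dicts with 'grade' (1 = highest), 'from' and 'to'
    coordinates; the thresholds and field names are illustrative only.
    """
    large_range = math.dist(start, end) > large_range_km
    selected = []
    for r in roads:
        local = near(r["from"], start, local_radius_km) or near(r["to"], end, local_radius_km)
        if large_range:
            # Large range: high-grade roads everywhere, roads of all grades near the endpoints.
            if r["grade"] == 1 or local:
                selected.append(r)
        else:
            # Small range: all grades near the endpoints, higher-grade roads in the midway.
            if local or r["grade"] <= 2:
                selected.append(r)
    return selected
```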
At present, according to classification statistics, some 20 kinds of path planning algorithms can be used for optimal path analysis, of which the more classical ones are the Dijkstra algorithm [5], the Bellman-Ford-Moore algorithm, the A* algorithm [6] and others. When selecting the next node, the A* algorithm estimates the distance from that node to the end point and uses this estimate to evaluate how likely the node is to lie on the optimal path; in this way, it preferentially expands the nodes with the greatest likelihood. It can be seen from the search process that, as the search approaches the end point, fewer nodes are expanded, so the storage requirement is small and the search speed is fast. Therefore, this paper chooses the A* algorithm, which needs little storage space and has a fast search speed, for the path planning calculation. A. Analysis of conventional network representation Under normal circumstances, a road network is abstracted as a weighted directed graph: a road intersection is represented as a node, a section between two intersections is expressed as an arc, and quantitative attributes of a section serve as the weight of the segmental arc, as shown in Fig. 1. In the process of driving, unavoidable intersection delays directly result in weighted nodes. In addition, because of steering restrictions at intersections, one-way traffic, vehicle prohibitions and other traffic control information, vehicles may be unable to reach roads that are nominally connected to each other in the actual driving process. Among these factors affecting vehicle driving, one-way traffic, vehicle prohibitions, etc. can be handled by setting the corresponding arc weights to ∞ (or a sufficiently large positive number), but weighted nodes and steering restrictions cannot be handled by the conventional road network representation and the shortest path algorithm, as shown in Fig. 1. B. Transformation of road network construction based on segmental arc In order to deal with the influence of intersection steering restrictions and node weights on path planning, the traditional road network representation must be transformed. In this paper, ideas related to the dual graph method [7][8] are introduced, and the transformed road network construction is used to handle intersection steering restrictions and node weights. The basic idea is as follows: the node-based weighted directed graph is transformed into an arc (section)-based weighted directed graph, and the optimal path is then calculated on the transformed road network. As shown in Fig. 2, segmental arcs (Sections A, B, C, D) in the original network are transformed into new nodes in the network. At this point, if the starting point and the end point are each considered as a segmental arc of length 0, it is only required to define the weight from one segmental arc to another segmental arc, and then any shortest path algorithm can be used to solve the optimal path from the starting point to the end point.
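The transformation just described can be sketched in a few lines of Python; the data structures and field names below are illustrative assumptions, not the paper's implementation (turn restrictions and node weights are added via the arc-to-arc weights defined next).

```python
from collections import defaultdict

def to_arc_graph(arcs):
    """Transform a node-based network into an arc (section)-based directed graph.

    `arcs` is assumed to be a list of directed sections (u, v, length): each section
    becomes a node of the new graph, and an edge L_i -> L_j is created whenever
    section L_i ends at the node where section L_j starts.
    """
    outgoing = defaultdict(list)            # original node -> indices of arcs leaving it
    for j, (u, _, _) in enumerate(arcs):
        outgoing[u].append(j)

    arc_edges = defaultdict(list)           # arc index -> successor arc indices
    for i, (_, v, _) in enumerate(arcs):
        arc_edges[i] = list(outgoing[v])
    return arc_edges

# Toy example in the spirit of Fig. 2: four sections meeting at node 2.
arcs = [(1, 2, 100.0), (2, 3, 80.0), (2, 4, 60.0), (5, 2, 90.0)]
print(dict(to_arc_graph(arcs)))   # {0: [1, 2], 1: [], 2: [], 3: [1, 2]}
```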
Based on the transformed road network, intersection steering restrictions and node weights are handled by defining the weight from one segmental arc to the next. Assume that the serial number of the i-th segmental arc in the original network is L_i; if L_j is a subsequent arc of L_i and the associated node is P, the weight from L_i to L_j can be defined as W(L_i, L_j) = W(L_i) + W_P(L_i, L_j) (1), where W(L_i) is the weight of arc L_i in the original network and W_P(L_i, L_j) is the additional weight of node P, used to represent intersection delay. If turning from L_i to L_j is prohibited, however, W_P(L_i, L_j) is set to ∞ (or a sufficiently large positive number). Navigable electronic map data produced by many manufacturers, however, do not describe a complex intersection as a single node. Fig. 3(a) shows an intersection, and Fig. 3(b) shows the collected navigable data. There is a median in front of the intersection where the four connecting roads pass, so every road is digitized as two sections in opposite directions when the data are produced, giving a total of eight sections, marked L_1, L_2, ..., L_8; these eight sections form four nodes at the intersection, marked P_1, P_2, P_3, P_4. The internal sections connecting the four nodes are marked L_9, L_10, L_11 and L_12. It can be seen from Fig. 3(b) that L_2 must pass through L_9 and L_10 before reaching L_7, so the road network transformation cannot be used directly to describe the fact that L_2 cannot reach L_7. Internal sections have no practical significance for the path planning calculation, but after path planning is completed they must be organized into a complete optimal path. Therefore, the paper designs the binary (two-pass) search A* algorithm based on the transformed road network. The design idea is as follows: in the first path calculation, the internal sections are removed, each intersection is treated as a node, the network is transformed into the segmental-arc-based form, and the A* algorithm is used to search the optimal path. In the second path calculation, the optimal path obtained first, together with all the internal sections that this path passes through, constitutes the transformed road network based on segmental arcs; disregarding the steering relationships, the A* algorithm is used to search the secondary optimal path. Thus, in the first path calculation the road network transformation handles intersection steering restrictions and node weights, yielding the optimal path excluding internal sections; in the second path calculation, in order to connect up the optimal path, the road network used for the calculation only includes the optimal path first calculated and the internal sections of the intersections where it passes, so the calculating speed is fast.
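A minimal sketch of an A* search over such a transformed network is given below, using the arc-to-arc weight of Eq. (1). The heuristic, weight table and turn-penalty callables are assumptions for illustration, not the paper's implementation.

```python
import heapq
import math

def a_star_arcs(arc_edges, arc_weight, turn_weight, heuristic, start_arc, goal_arc):
    """A* over the arc-based graph, with edge cost W(Li, Lj) = W(Li) + Wp(Li, Lj).

    arc_edges[i]      -> successor arc indices of arc i
    arc_weight[i]     -> W(Li), the weight of arc i in the original network
    turn_weight(i, j) -> Wp(Li, Lj), the intersection delay; math.inf for a prohibited turn
    heuristic(i)      -> estimated remaining cost from the end of arc i to the destination
    """
    open_set = [(heuristic(start_arc), 0.0, start_arc)]
    best = {start_arc: 0.0}
    parent = {start_arc: None}
    while open_set:
        _, g, i = heapq.heappop(open_set)
        if i == goal_arc:
            path = []
            while i is not None:
                path.append(i)
                i = parent[i]
            return list(reversed(path)), g
        if g > best.get(i, math.inf):
            continue                        # stale queue entry
        for j in arc_edges.get(i, []):
            g_new = g + arc_weight[i] + turn_weight(i, j)   # Eq. (1)
            if g_new < best.get(j, math.inf):
                best[j] = g_new
                parent[j] = i
                heapq.heappush(open_set, (g_new + heuristic(j), g_new, j))
    return None, math.inf
```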
The binary search A* algorithm based on the transformed road network is implemented according to the following steps: 3) When organizing the road network, the internal sections are deleted, the road network transformation is used to convert the road network data into an arc (section)-based weighted directed graph, and intersection steering restrictions and node weights are handled according to the segmental arcs and the weights defined in Formula (1); 4) The A* algorithm is used to solve the optimal path from Arc(P_s, P_s') to Arc(P_e', P_e) on the transformed road network (excluding internal sections), giving the first-pass optimal path L_1; 5) The internal sections of the intersections through which L_1 passes are extracted and, together with L_1, constitute the road network data; 6) The road network transformation is used to transform the above road network data into an arc (section)-based weighted directed graph; 7) The A* algorithm is used to solve the optimal path from Arc(P_s, P_s') to Arc(P_e', P_e), namely the optimal path from the starting point P_s to the end point P_e. IV. IMPLEMENTATION OF THE HIERARCHICAL SEARCH A* ALGORITHM BASED ON THE TRANSFORMATION OF ROAD NETWORK Based on the above analyses, the paper designs the hierarchical search A* algorithm based on the transformed road network to meet the demands of real-time navigation applications. The basic idea of the algorithm is as follows: the hierarchical search strategy is used for path planning over a large area, and the binary search A* algorithm based on the transformed road network, which can handle intersection steering restrictions and node weights with small storage space and fast search, is used as the path planning algorithm. The hierarchical search A* algorithm based on the transformed road network is implemented according to the following steps: 1) Judge whether the task is path planning over a large area according to the distance from the starting point to the end point; ... finally, the resulting sub-paths are connected to obtain the optimal path L, and the path calculation is completed. V. EXPERIMENT AND CONCLUSIONS The hierarchical search A* algorithm based on the transformed road network designed in this paper has been applied in the embedded navigation application system developed by the author, which supports national road network data. The system runs on Windows CE 5.0 with a 500 MHz processor, 256 MB of memory and an 800×480 screen resolution. The running interface is shown in Fig. 4. The national road network contains a massive amount of data, so when planning a path over a large area it is impossible to load all the road network data of the computational domain into memory. Therefore, when verifying the algorithm's validity, the Dijkstra algorithm based on hierarchical search and the hierarchical search A* algorithm based on the transformed road network were each used, and their path planning efficiencies were compared. Fig. 5 shows the time comparison of path planning between different cities of China using the two algorithms. It can be seen from Fig. 5 that both algorithms use the hierarchical search strategy when planning paths across China, so both of them have high computational efficiency. When searching for the next section, the hierarchical search A* algorithm based on the transformed road network uses the distance from the end point as the measure to evaluate the probability that a section lies on the optimal path, and can search the sections with high probability first, so its search efficiency is superior to that of the Dijkstra algorithm based on hierarchical search. In addition, during the path calculation, the hierarchical search A* algorithm based on the transformed road network uses the road network transformation to convert node weights into an arc (section)-based weighted directed graph and then calculates the optimal path on the transformed road network; therefore, the algorithm can identify and deal with all kinds of traffic control information and the influence of intersection delays. The practical navigation application shows that the hierarchical search A* algorithm based on the transformed road network has advantages in calculating speed, path rationality, etc., and is able to satisfy the technical requirements of real-time navigation applications in an embedded environment. Figure 1: Schematic diagram of road intersection representation using a conventional weighted directed graph. Fig. 1(b) is a weighted-directed-graph schematic of the road network in Fig. 1(a). In Fig. 1(a), vehicles driving on Section A are prohibited from turning left into Section D, but Fig. 1(b) is unable to express the fact that arc (v_1, v_2) from Node 1 cannot reach arc (v_2, v_5) via Node 2. Furthermore, different steering behaviors (such as going straight, turning left, turning right, etc.) generally produce different intersection delay times, and Fig. 1(b) is unable to express the node weights caused by different steering behaviors after vehicles enter Intersection 2. Figure 2: Schematic diagram of road intersection representation using the transformed road network. Figure 4: Interface diagram of the embedded navigation application system.
3,400.6
2015-04-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Development of on-line nondestructive testing device for welding cracks based on piezoelectric ceramic excitation This work performs real-time non-destructive testing of weld damage in various welded steel structures to determine the degree of weld damage, so that timely safety warnings can be issued to protect human life and property. In this paper, piezoelectric ceramics are arranged on both sides of the weld and the best excitation frequency of the system is determined. A sine signal at this frequency excites the piezoelectric ceramic on one side, and the response signal of the piezoelectric ceramic on the other side is received and analyzed. The paper uses direct digital frequency synthesis (DDS) technology, a power management circuit, a differential amplifier circuit, a band-pass filter circuit, a phase detection circuit, an amplitude detection circuit, etc. to design a system that can perform real-time non-destructive testing. The whole system can measure the changes of these signals in real time and send the signals remotely to the upper computer. Analysis of these signals can initially determine the size and location of weld damage. The research results show that when the weld in the steel is damaged, the amplitude of the sinusoidal signal propagating through the weld is attenuated and the phase difference between the transmitted signal and the received signal increases; this change is positively correlated with the degree of damage. Introduction In the weld joint, the fusion zone is the weak link; it is an area where the melting is uneven. Some defects in welds, such as cold cracks, reheat cracks and brittleness, often originate here [1]. Defects in the weld may cause property damage, personal injury or death. In addition to the external defects visible to the naked eye, there are many internal defects that are difficult to find in a welded structure, such as cracks, pores, slag inclusions and incomplete penetration. Among them, cracks have the greatest impact on brittle fracture [2]; brittle fracture often occurs suddenly without an obvious precursor, so more attention should be paid to cracks. Non-destructive testing methods for welded structures include radiographic testing, ultrasonic testing, magnetic particle testing, penetrant testing, strain gauge testing, etc. Wang Yigang and Ding Keqin used resistance strain gauges to detect the stress of crawler crane arms [4]; Kesheng Ou and Xufeng Li proposed phased array ultrasonic testing to determine the external tension of surface cracks [5]; Yanfeng Li, Xiangdong Gao et al. proposed a magneto-optical imaging non-destructive testing system based on rotating magnetic field excitation to extract and detect the characteristics of weld defects [6]; Wang Qiang and Xiao Kun used ultrasonic phased array technology to detect defects in austenitic stainless steel welds [7]; Shi Yaowu applied acoustic emission technology to welding production and developed a welding crack acoustic emission monitor [8]. The above scholars used different methods to conduct non-destructive testing of steel welded structures and could identify the size of the deformation or the location of defects at the weld. However, the detection equipment used is large and costly, and it is difficult to perform on-line real-time detection of steel weld damage.
Therefore, this paper proposes a non-destructive testing method based on piezoelectric ceramic excitation, develops a low-power online testing device, and conducts theoretical analysis and experimental research to verify the feasibility of the device. Piezoelectric ceramic excited steel welding crack detection system The composition of the steel welding area is affected by the welding environment, the electrode material and the welding process. For example, dust, nitrogen and oxygen in the air enter the weld during welding, the electrode coating falls into the weld during the welding process, and the rapid cooling of the welding process causes segregation [9]; all of these cause non-metallic inclusions to appear in the weld, resulting in various welding defects. Because the chemical composition of the weld seam is not uniform, reflection and refraction occur when a vibration wave propagating in the steel encounters the weld seam. The expression of the wave can be written as y = A cos(ωt + ψ), where A is the vibration amplitude and ψ is the phase. From the analysis of the wave propagation path, compared with the ideal case where the weld composition is uniform, the actual propagation path of the wave in an uneven weld is longer, resulting in an increase of the phase ψ; from the energy perspective, when the wave passes through a defective weld its energy is further attenuated, and the vibration amplitude A of the wave decreases. Based on this theory, this paper artificially simulates weld defects, measures and compares the changes of the wave after passing through a defect-free weld and a defective weld, and finds that the experimental results are consistent with the theory. According to the direct (positive) piezoelectric effect of PZT, when it vibrates along the polarization direction, alternating voltages are generated at its two ends [10]. The schematic diagram of vibration wave propagation in steel is shown in Figure 1. The overall circuit block diagram of the device is shown in Figure 2. The excitation signal uses direct digital frequency synthesis (DDS) technology [11] to generate the original sinusoidal signal. After DC isolation, the signal is amplified by a differential amplifier circuit, and an active second-order band-pass filter circuit then filters out power-frequency interference and high-order harmonics. The signal excites the piezoelectric ceramic arranged on one side of the weld, and the response signal of the piezoelectric ceramic on the other side is received. After the response signal is processed in the same way, it is converted into a direct-current signal by an RMS (effective value) conversion circuit and then input to the ADC pin of the MCU after amplitude limiting. The excitation and response signals are input to the phase difference detection circuit to obtain the phase difference, which is then fed to the input-capture port of the MCU. Figure 2. Device overall circuit block diagram 3.1. Selection of steel and construction of structure The carbon steel model selected for the experiments in this paper is Q235B [12]. The actual welding situation in engineering is more complicated: in addition to horizontal welding, there is welding at various angles. Therefore, the experimental platform is divided into horizontal-welding and right-angle-welding parts according to the welding condition, and into 5 mm and 15 mm parts according to the thickness of the steel pieces.
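The quantities that the amplitude- and phase-detection circuits above measure can also be estimated numerically from sampled waveforms. The Python sketch below demodulates both signals at the known excitation frequency; the sampling rate, frequency and signal values are hypothetical.

```python
import numpy as np

def amplitude_and_phase(signal, fs, f0):
    """Estimate amplitude and phase of a sampled sinusoid at frequency f0 (Hz).

    Uses IQ demodulation: the complex amplitude is the mean of signal * exp(-j*2*pi*f0*t),
    which is accurate when the record covers an integer number of periods.
    """
    t = np.arange(len(signal)) / fs
    z = np.mean(signal * np.exp(-2j * np.pi * f0 * t))
    return 2 * np.abs(z), np.angle(z)

fs, f0 = 1_000_000, 40_000            # hypothetical sampling rate and excitation frequency
t = np.arange(0, 0.001, 1 / fs)
excitation = 5.0 * np.cos(2 * np.pi * f0 * t)           # transmitted signal
response = 1.2 * np.cos(2 * np.pi * f0 * t - 0.6)       # attenuated, phase-shifted response

a_tx, p_tx = amplitude_and_phase(excitation, fs, f0)
a_rx, p_rx = amplitude_and_phase(response, fs, f0)
print("amplitude ratio:", a_rx / a_tx, "phase difference (rad):", p_tx - p_rx)
```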
The attenuation of the wave propagating in the steel depends on the contact points between the steel and its supports, that is, on whether they are in the same position before and after the experiment. In the experiment, the steel is fixed on a profile bracket to ensure that the position of the platform is the same before and after the experiment, thereby eliminating the error caused by changes in the position of the steel, as shown in Figure 3; the online NDT device for welding cracks developed in this paper is shown in Figure 4. 3.2. Selection of PZT The literature [13] mentions several common types of piezoelectric ceramics (PZT): PZT4D, PZT5A, PZT5H, PZT8, etc. Among them, PZT5A has a high electromechanical coupling coefficient, high flexibility and a high dielectric constant; its performance in driving and sensing is better than that of other ceramic specifications, and it is mainly used in sensors, accelerometers, pressure gauges, anemometers, etc. [13]. The effective electromechanical coupling coefficient (keff), resonance frequency, dynamic resistance and quality factor are all important parameters of PZT. This paper compares three piezoelectric discs of the PZT5A series and selects one of them; their main parameters are shown in Table 1. The experiment uses a THS4001 chip to amplify and filter the DDS signal, and the processed sinusoidal signal has a peak-to-peak value of 10 Vpp, which is used to excite the PZT. Taking the first of the above three PZT discs as an example, its effective electromechanical coupling coefficient is keff = 0.177464, so the mechanical efficiency keff^2 is about 0.0315, that is, about 96.85% of the input power is dissipated in the ceramic itself. The dynamic resistance of this PZT is 6.935 Ω. According to the formula P = Um^2/(2R), (4) where Um is the peak value of the sinusoidal signal (Um = Vpp/2) and R is the dynamic resistance of the piezoelectric ceramic, the calculated power consumption is P = 1.7456 W with a corresponding current of I = 494 mA. Because the mechanical efficiency and the dynamic resistance of the first piezoelectric ceramic are relatively small, its power consumption is relatively large and the preceding op-amp stage would need to provide a relatively large current, so the first piezoelectric ceramic does not meet the requirements. Similarly, under the same excitation, the second piezoelectric ceramic has power P = 0.21 W and current I = 59 mA, and the third has power P = 0.314 W and current I = 88 mA, both of which meet the requirements. Considering the actual situation, the size of the PZT should be as small as possible when the performance is satisfied, so this experiment uses the second PZT. 3.3. Piezoelectric ceramic arrangement The arrangement of PZT is divided into 3 groups. First group: the two sides of the weld of horizontal-welding steel plate 1, with a thickness of 15 mm, are smoothed with sandpaper and wiped clean, and two PZT discs with leads are symmetrically pasted on both sides of the weld with AB glue and air-dried naturally, as shown in Figure 5. Second group: two pairs of PZT are pasted on steel plate 2 in the same way, as shown in Figure 6. Third group: similarly, three pairs of PZT are pasted on steel plate 3, as shown in Figure 7. All PZT are led out from the positive and negative electrodes with DuPont wires.
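The drive-power check in Sec. 3.2 can be reproduced with a short Python helper. The relation P = Um^2/(2R) is an assumption consistent with the figures quoted for the second and third ceramics; the dynamic resistances other than the first one are placeholders (the actual values are listed in Table 1 of the paper).

```python
import math

def drive_requirements(vpp, r_dynamic):
    """Average power and RMS current needed to drive a PZT of dynamic resistance R
    with a sinusoidal excitation of peak-to-peak voltage vpp."""
    um = vpp / 2.0                       # peak voltage, Um = Vpp / 2
    power = um ** 2 / (2.0 * r_dynamic)  # assumed relation P = Um^2 / (2R)
    current_rms = um / (math.sqrt(2.0) * r_dynamic)
    return power, current_rms

vpp = 10.0   # 10 Vpp excitation after amplification and filtering
candidates = {"PZT #1": 6.935, "PZT #2": 50.0, "PZT #3": 40.0}   # ohms; #2 and #3 are placeholders
for name, r in candidates.items():
    p, i_rms = drive_requirements(vpp, r)
    print(f"{name}: P = {p:.2f} W, I_rms = {i_rms * 1000:.0f} mA")
```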
4.1. Experiments with different sizes of cracks in welds In this section, the three groups of horizontal-welding steels with PZT ceramics are used as follows. In the first group, a wire cutting machine is used to cut a groove 0.5 mm wide and 5 mm deep in the weld at the line connecting the two PZT discs to simulate a crack; the same sinusoidal signal is then used to excite the piezoelectric ceramic on one side, and the signal received by the piezoelectric ceramic on the other side is recorded. Next, a groove 0.5 mm wide and 7 mm deep is cut along the original position; with the other conditions unchanged, the same sinusoidal signal is used to excite the PZT on one side and the data are recorded. Then a groove 0.5 mm wide and 9 mm deep is cut along the original position, the other conditions are kept unchanged, and the data are recorded. After continuous excitation for a period of time the data become stable, and the experimental data are recorded in Table 2. 4.2. Experiment on different positions of cracks in the weld In the second set of experiments, sine signals are used to excite the two PZT ceramics on one side of the weld; the two PZT pairs are marked as group A1 and group B1, and the data received by these two groups are recorded. With the remaining conditions unchanged, a groove 0.5 mm wide and 5 mm deep is cut near the connection line of the A1 group of PZT ceramics; after cutting, the two groups are marked as group A2 and group B2, and the data of the two groups are recorded again. The third set of experiments serves as a comparison to the second. Similarly, the three PZT ceramics on one side of the weld are excited by a sinusoidal signal; these three pairs of PZT ceramics are marked as groups C1, D1 and E1, and their data are received and recorded. Keeping the other conditions unchanged, a groove 0.5 mm wide and 5 mm deep is cut near the connection line of the C1 group; the three groups after cutting are marked as C2, D2 and E2. After the data stabilize following a period of continuous excitation, the data of these three groups of PZT ceramics are recorded in Table 3. 4.3. Data analysis From the data in Table 2 it can be seen that when cracks appear in the welds of steel weldments, the energy of the vibration waves is significantly attenuated, and the attenuation is positively correlated with the depth of the cracks. Because of the cracks in the steel welds, the actual propagation distance of the wave increases, so the phase difference between the transmitted wave and the received wave increases. From the data in Table 3 it can be seen that the attenuation of the wave energy is related to the position of the crack: the closer the crack is to a certain pair of piezoelectric ceramics, the greater the attenuation of the energy received by that pair, and the farther away the crack is, the smaller the attenuation. Conclusion This paper analyzes the propagation mode of vibration waves in steel, explores the energy loss and phase change of the vibration wave after passing through the weld seam, uses PZT to generate and receive the vibration wave, and designs an online non-destructive inspection system for welding cracks.
This paper uses this system to conduct experiments on steels with different welded structures, and analyzes the influence of different degrees of crack damage in the welds on the experimental results. The results show that the energy loss of the sine wave generated by piezoelectric ceramic vibration increases as the crack in the weld expands. At the same time, the existence of a crack makes the wave propagation path relatively longer, resulting in an increase in the time for the sine wave to propagate to the same location. A combined analysis of the peak-to-peak value of the received signal and the phase difference between the transmitted and received signals can judge the degree of weld damage and the position of the crack in the weld. The system designed in this paper has the advantages of small size, low cost, convenient layout and good reliability, and can achieve good NDT results.
3,297
2020-10-01T00:00:00.000
[ "Materials Science" ]
On the Filtration Efficiency of Composite Media Composed of Multiple Layers of Electret Media This study aimed to explain the discrepancy reported in previous studies between the observed and the calculated filtration efficiencies of a composite formed of multiple electret medium layers. After measuring the composite's filtration efficiency using particles divided by size and electrical charge status, viz., those possessing no charge, a single charge, and a stationary charge according to the Boltzmann distribution, we traced the discrepancy to the latter attribute. Hence, to accurately predict the filtration efficiency of multi-layered electret media, we must account for the test particles' electrical charge distribution. INTRODUCTION Under the same pressure drop, electret fibrous media typically offer higher collection efficiencies compared to those of mechanical ones in the same microstructures because of the additional electrostatic particle capture mechanisms (i.e., Coulombic attraction and dielectrophoresis). Electret media have thus been widely used in applications where the pressure drop and/or energy conservation is critical, e.g., in heating, ventilating, and air conditioning (HVAC) systems, and in personal protective equipment (PPE), such as surgical masks and respirators. Composite media composed of multiple layers of electret media (in different specifications) have been found to offer better filtration performance than that of homogeneous electret media (Leung et al., 2009; Leung et al., 2018; Tang et al., 2018a; Chang et al., 2019; Sun and Leung, 2019; Tien et al., 2020). In our previous study (Wang et al., 2020), the initial collection efficiency of composite media made of multiple electret medium layers of the same specification was measured using size-fractionated particles in the Boltzmann stationary charge distribution. It was found that the measured filtration efficiency of the multi-layered composite electret media was lower than the calculated value, assuming each electret medium layer functions independently. The discrepancy observed was reduced as the particle size was increased. The study conducted by Sun and Leung (2019) also found that the figure of merit (FOM) of multi-layered electret media was less than that of base electret media. Moreover, the observed FOM difference was also reduced as the particle size was increased. However, if each electret medium layer functions independently, the FOM of composite media made of multi-layered electret media of the same specification should be the same as that of homogeneous media. The objective of this work is thus to find out the reason behind the previously reported discrepancy. After examining all the possibilities, we hypothesized that the reported discrepancy between the measured and calculated collection efficiencies of multi-layered composite electret media was due to the electrical charge status of the test particles. Note that the effect of particle charge on the performance of electret media has already been widely investigated (Kanaoka et al., 1987; Fjeld and Owens, 1988; Otani et al., 1993; Chen and Huang, 1998; Romay et al., 1998; Yang et al., 2007; Chazelet et al., 2011; Sanchez et al., 2013; Tang et al., 2018b) and that this study is by no means the first to do so. To validate our hypothesis, composite media composed of multiple electret medium layers with the same specification were tested. Two electret media, one used by 3M respirators and the other used by HVAC filter panels (rated at MERV 13), were selected as base layers.
The specification of the selected base electret media is given in Table 1 for reference. Size-fractionated particles with single charges, no charges and the Boltzmann charge distribution were prepared as test particles. The measured collection efficiency of the double-layered media was then compared to the value calculated from the measured collection efficiencies of the base medium layers. The experimental setup and results are presented in the following sections. METHODS Fig. 1 shows the schematic diagram of the experimental setup to measure the size-fractionated filtration efficiency of the composite electret media and base electret medium layers. In addition to polydisperse NaCl particles produced by a custom-made Collison atomizer, polydisperse silver particles were prepared by the evaporation-condensation method (Scheibel and Porstendörfer, 1983). The size distributions of the generated NaCl and silver particles can be found in Fig. S1. Size-fractionated particles were then obtained by classifying the generated polydisperse particles with a differential mobility analyzer (DMA; Models 3081 and 3085; TSI Inc.). The sheath-to-aerosol flow rate ratio was kept at 1:10 for the DMA operation. NaCl particles with sizes of 50, 75, 100, 150, 200, 300, 400, and 500 nm, and silver particles with sizes of 10, 20, and 30 nm, were selected as test particles. For each selected size, particles were prepared in three different charge statuses: singly charged, neutral, and Boltzmann-charge-distributed. Particles sized by the DMA were assumed to be singly charged because multiply charged particles were minimized by selecting the particles from the right-hand side of the polydisperse particle size distribution modes. Particles with the Boltzmann stationary charge distribution were obtained by passing DMA-classified particles through a 210Po bipolar charger. Neutral particles were then prepared by passing particles with the Boltzmann charge distribution through an electrostatic precipitator (ESP) to remove the charged particles. The test filter media was placed in the filter holder and tested at a face velocity of 10 cm s^-1, which is approximately the velocity used for testing the particle collection efficiency of respirators under the 42 CFR Part 84 regulation. An ultrafine condensation particle counter (UCPC; Model 3776; TSI Inc.) was used in the setup to measure the number concentrations of test particles upstream and downstream of the filter holder. A 3-way valve was used to switch between the upstream and downstream samplings. Another ESP was also included in front of the UCPC when particles in the Boltzmann stationary charge distribution were chosen for the experimental runs. With the ESP turned on/off, the number concentrations of both the neutral and the total particles in the sample could be measured. The neutral fraction of sampled particles both upstream and downstream of the filter holder could then be calculated. Stainless-steel tubes were used in the setup to minimize the possible loss of charged particles in the transport lines. RESULTS AND DISCUSSION Assuming each base electret media layer functions independently, the size-fractionated collection efficiency of the double-layered electret media should be given as (Wang et al., 2008): P_double = P1 × P2 and E_double = 1 − P_double (1), where P1 and P2 are the penetrations measured for each base layer, and P_double and E_double are the penetration and collection efficiency of the double-layered composite media, respectively. Below, we compare the measured data with that calculated by Eq.
(1) using the measured penetrations of the base layers. Note that three media samples were tested in each experimental run, and the averages of the measured efficiency data are shown in the following figures (Figs. 2-5). The error bars associated with each average data point represent the variation of the particle collection efficiency of the tested media due to their non-uniformity. Fig. 2 shows the comparison between the measured and calculated collection efficiencies of double-layered electret media when using test particles of both neutral (Fig. 2(a)) and singly charged (Fig. 2(b)) status. It is found that the agreement between the measured and calculated values was very good, indicating that each base media layer indeed functioned independently in the double-layered composite media. Fig. 3 shows the comparison between the measured and calculated collection efficiencies of the double-layered electret media when challenged by particles with the Boltzmann stationary charge distribution. Differences between the measured and calculated data were only observed in a certain size range of particles (as reported by Wang et al., 2020) for both double-layered electret media. To further investigate the observation shown in Fig. 3, we measured the neutral fraction of test particles upstream and downstream of the filter holder by turning the ESP in front of the UCPC on and off. Fig. 4 shows the neutral fraction of particles sampled upstream and downstream of the base electret media A and B at different particle sizes. For reference, the neutral fraction of particles obtained from the theoretical Boltzmann charge distribution is also included in the figure. The charges of the upstream particles were indeed in the Boltzmann stationary charge distribution, but this was not the case for the downstream particles within a certain size range. For particles less than 30 nm in size, only neutral particles were present downstream of the filter media. As the particle size increased, the neutral fraction of downstream particles was reduced, but remained higher than the value given by the theoretical Boltzmann charge distribution, until the size of ~500 nm. This result arises because the electret media collect charged particles more efficiently than neutral ones as the test particles pass through. Notice that the neutral fraction difference between the upstream and downstream particles is negligible as the particle size is reduced to less than 10 nm, because the charged fraction of particles in the Boltzmann charge distribution is negligible in that range (e.g., 0.7% for 10 nm particles). For particles of large sizes, their capture by the electrostatic mechanisms in electret media is greatly reduced due to reduced electrical mobility (shown in Fig. S2 in SI). The charge status of the downstream particles for large-sized particles was thus close to the Boltzmann charge distribution. Based on the findings shown in Fig. 4, the calculated collection efficiency of the double-layered electret media was corrected by considering the charge distribution of the test particles downstream of the base layers. Specifically, the collection efficiencies of the first base electret media layer for both neutral and charged particles could be calculated by separately measuring the number concentrations of neutral and charged particles upstream and downstream of the media layer (via turning on/off the electrostatic precipitator installed in front of the CPC shown in the setup).
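A minimal numerical sketch of this correction is shown below; the neutral fraction and per-layer penetrations are placeholders, not measured values, and the point is simply that propagating the neutral and charged fractions separately lowers the predicted double-layer efficiency relative to the naive product of Eq. (1).

```python
# Placeholder inputs for one particle size.
f_n = 0.30              # neutral fraction of the challenge aerosol (Boltzmann value at this size)
P1n, P1c = 0.40, 0.05   # layer-1 penetration for neutral / charged particles
P2n, P2c = 0.40, 0.05   # layer-2 penetration for neutral / charged particles (same medium)

# Naive estimate: ignore charge status and treat the layers as independent (Eq. (1)).
P1 = f_n * P1n + (1 - f_n) * P1c
E_double_naive = 1 - P1 * P1

# Corrected estimate: layer 1 enriches the downstream aerosol in neutral particles,
# so the two charge classes must be propagated through layer 2 separately.
P_double_corrected = f_n * P1n * P2n + (1 - f_n) * P1c * P2c
E_double_corrected = 1 - P_double_corrected
print(f"naive: {E_double_naive:.3f}, corrected: {E_double_corrected:.3f}")
```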
Given the known charge distribution of particles upstream of the second base media layer, the collection efficiencies of the second layer could be measured. The above efficiency data were then used to determine the corrected collection efficiency of the double-layered electret media. Fig. 5 shows the comparison between the measured and corrected collection efficiencies of double-layered electret media made of the base medium layers A and B. The corrected efficiency values are now in good agreement with the measured ones for particles with the Boltzmann stationary charge distribution. For reference, the measured particle filtration efficiency of the first electret medium layer and the calculated efficiency of the second electret medium layer (with the added consideration of particle charge distribution) for each composite electret media can be found in Fig. S3. These efficiency values were used in the calculation for the data presented in Fig. 5. FINAL REMARKS We resolved the discrepancy between the measured and the calculated filtration efficiencies of a composite formed of multiple electret medium layers, which has been reported in previous studies. We can accurately estimate the composite's filtration efficiency by assuming that each of its layers independently removes particles, but we must also account for the electrical charge distribution of the particles. Owing to the difficulty in directly measuring the surface charge density of electret medium fibers, researchers typically obtain an estimate by comparing the calculated filtration efficiency (derived with an existing model) with observational data in the literature. Our findings suggest that such estimates should also factor in the charge distribution of the particles. For the sake of simplicity, the filtration data for either neutral or singly charged particles may be applied. SUPPLEMENTARY MATERIAL Supplementary data associated with this article can be found in the online version at https://doi.org/10.4209/aaqr.210005
2,651.6
2021-03-18T00:00:00.000
[ "Physics" ]
Gauged Q-ball dark matter through a cosmological first-order phase transition As a new type of dynamical dark matter mechanism, we discuss the stability of gauged Q-ball dark matter and its production mechanism through a cosmological first-order phase transition. This work delves into the study of gauged Q-ball dark matter generated during a cosmic phase transition. We present detailed discussions on the stability of gauged Q-balls to rigorously constrain their charge and mass ranges. Additionally, employing analytic approximations and the mapping method, we provide qualitative insights into gauged Q-balls. We establish an upper limit on the gauge coupling constant and give the relic density of stable gauged Q-ball dark matter formed during a first-order phase transition. Furthermore, we discuss potential observational signatures and constraints of gauged Q-ball dark matter, including astronomical observations and gravitational wave signals. Introduction Exploring the nature of dark matter (DM) is one of the central issues in (astro)particle physics and cosmology [1]. So far, there are no expected signals of conventional DM candidates such as Weakly Interacting Massive Particles (WIMPs) in DM direct detection and collider search experiments [2][3][4]. Simple WIMP scenarios are therefore strongly disfavored. There are a number of ways to save WIMP scenarios. For example, DM may have substantial couplings only to the third-generation fermions [5][6][7], or dark sectors may consist of two or more stable DM species (see, for example, [8]). Alternatively, one can discard WIMP scenarios and consider other possibilities for DM production and annihilation or decay in the early Universe. This status motivates us to study ultralight or ultraheavy DM candidates (for reviews, see Refs. [9,10]). Solitons produced in the early Universe are natural candidates for heavy DM (see, for example, Ref. [11] for hidden-sector monopole DM accompanied by stable spin-1 vector DM and massless dark radiation, and Ref. [12] for hunting for topological DM using atomic clocks). These solitons are specific field configurations which are classified into two classes, namely, topological solitons and non-topological solitons. Recently, as a renaissance of the quark-nugget DM proposed by Witten [13], various new ideas on non-topological soliton DM have been proposed, in which the DM relic density can be produced by the dynamical process of a cosmological first-order phase transition (FOPT), such as the Q-ball DM [14][15][16][17]. These new mechanisms can naturally avoid the unitarity problem for heavy DM [18]. Dynamical DM mechanisms are specified by the DM penetration behavior into the bubble, which depends on the DM mass and the bubble wall velocity [19][20][21]. Phase transitions in the early Universe can also be the source of primordial black holes [22][23][24].
There are extensive discussions of non-topological solitons in a theory of a complex scalar field with global U(1) symmetry, proposed in [25] and known as Q-balls [26]. It is natural to study Q-balls in the gauged case [26][27][28][29][30][31][32][33][34], by promoting the global U(1) symmetry to a local gauge U(1) symmetry. For reviews of U(1) gauged Q-balls, see [35][36][37]. Q-balls have been proposed as a potential DM candidate in supersymmetric theories [38,39]. They can also explain the baryon asymmetry of the Universe [40]. Gauged Q-ball DM in supersymmetric models has been studied in several papers [41,42]. It is meaningful to search for other production mechanisms of Q-ball or gauged Q-ball DM without supersymmetry. In this paper, we study whether U(1) gauged Q-balls produced during a cosmic phase transition could be a viable DM candidate. If the gauged Q-balls can be stable under certain circumstances, we still need some mechanism to (1) produce the charge asymmetry (i.e., locally produce many particles with the same charge to form a Q-ball) and (2) pack the same-sign charge into a small size after overcoming the repulsive Coulomb interaction. For the first condition, the primordial charge asymmetry could be produced by early-Universe processes such as decays of heavier particles. A cosmological FOPT can naturally realize the second condition and can produce phase transition gravitational waves (GWs) which can be detected by future GW experiments, such as LISA [43], TianQin [44,45], Taiji [46], BBO [47], DECIGO [48], and Ultimate-DECIGO [49]. In this work, for the first time, we study the natural production mechanism of gauged Q-ball DM through a cosmological FOPT. The paper is organised as follows. We describe the basic model that can produce the gauged Q-balls and the numerical solutions of the Q-ball profiles in section 2. Basic properties and the stable parameter space of gauged Q-balls are discussed in section 3. The thin-wall approximation and the corresponding analytic evaluations are given in section 4. Phase transition dynamics in the Standard Model (SM) plus an extra singlet and the relic density of gauged Q-ball DM are elucidated in section 5. Signals and constraints of gauged Q-ball DM are given in section 6. Concise conclusions and discussions are given in section 7.
2 Gauged Q-ball 2.1 Friedberg-Lee-Sirlin-Maxwell model In this work, we adopt the Friedberg-Lee-Sirlin two-component model [30] plus a gauge component, which is called the Friedberg-Lee-Sirlin-Maxwell (FLSM) model [29]. This model and the corresponding stability of U(1) gauged Q-balls have been discussed in [27,29,[52][53][54]. We begin our discussions with the Lagrangian density of the model, with the potential V(ϕ, h). Here ϕ and h are the complex scalar field and the (real) Higgs field, respectively. D_µ = ∂_µ + ig Ã_µ and Ã_µν = ∂_µ Ã_ν − ∂_ν Ã_µ, where Ã_µ is a dark U(1) gauge field and g is the corresponding gauge coupling constant. Ã_µ can be identified as the dark electromagnetic field. We fix the Higgs mass m_h = 125 GeV and the vacuum expectation value v_0 = 246 GeV at zero temperature; then λ_h = m_h²/(2v_0²) ≈ 0.13. The complex scalar ϕ gains mass through the portal coupling with the Higgs. In the true vacuum, m_ϕ = λ_ϕh^{1/2} v_0. We assume λ_ϕh > 0, and thus the Lagrangian density is symmetric under the dark U(1) symmetry, which remains unbroken when the Universe goes through the electroweak phase transition. The local U(1) gauge symmetry leads to a conserved current, and the corresponding conserved charge. Once the gauged Q-balls are formed in this FLSM model, one can consider a coherent configuration of ϕ, h, and Ã_µ at a given charge Q. The lowest-energy state has no "magnetic field", so the space components Ã_i = 0 [27,29]. We assume spherical symmetry for the lowest-energy configuration. Scaling away the physical dimensions, we introduce dimensionless field variables A, Φ, and H defined in the convention of Ref. [29], where ρ ≡ √(2λ_h) v_0 r = m_h r. The Lagrangian, after substituting the field variables defined above, takes a rescaled form. By varying L with respect to A, Φ, and H, we find the equations of motion (EoM) for the three fields, Eqs. (2.7)-(2.9). The total energy is given by Eq. (2.10), and the total charge by Eq. (2.11). From Eqs. (2.7) and (2.11) we see that, for large ρ, A → λ_h Q̃/(2πρ), or Ã_t → gQ/(4πr). Eq. (2.8) at large ρ then becomes a linear equation for Φ. It has been shown in Ref. [36] that for ν < k this equation at ρ → ∞ has a solution expressed through U(a, b, z), where C_U is a constant and U(a, b, z) is the confluent hypergeometric function of the second kind. We can see that in the limit g → 0 this form coincides with the non-gauged global Q-ball profile. The difference is caused by taking into account the dark electromagnetic potential A(ρ). For ω = m_ϕ, or ν = k, Eq. (2.13) takes a different form, and the solution involves the function K_1, where C_K is a constant and K_1(z) is the modified Bessel function of the second kind. For large ρ this solution has the form Φ(ρ) ∼ ρ^{−3/4} e^{−√(2kg²Qρ/π)}, which also differs from the global case, in which one expects Φ(ρ) ∼ 1/ρ for ν = k. For the case ν > k, it can be seen from Eq. (2.13) that the corresponding solutions for the complex scalar field are oscillatory at ρ → ∞, leading to unphysical infinite charge and energy. These solutions should be discarded. It is convenient to write Eq. (2.7) in the form of Eq. (2.17). Suppose that A(0) > ν/α². Eq. (2.17) then implies that ρ²∂_ρA is an increasing function of ρ, such that ∂_ρA > 0 and A(ρ) > ν/α² for all ρ > 0. This possibility is not acceptable, given that ν > 0 and A(ρ) → 0 as ρ → ∞. The only acceptable possibility is A(0) ≤ ν/α². Then ∂_ρA < 0 and therefore A(ρ) is a monotonically decreasing function of ρ. We can then say that A(ρ) obeys the inequalities 0 ≤ A(ρ) ≤ A(0) ≤ ν/α². The energy integral Eq.
(2.10) can be written in a different form. Demanding that the Lagrangian (2.6) is stationary at ϵ = 1 under a rescaling of the form ρ → ϵρ, i.e., dL(ρ → ϵρ)/dϵ|_{ϵ=1} = 0, gives the relation (2.20), where in the third line we have integrated by parts and used the fact that Φ, H, and A → 0 at large ρ. It can be seen that the energy for the free-field solution, when we neglect the variation of Φ and H, takes a simple form. The "dark electrostatic energy" is roughly proportional to R for a charge uniformly distributed on a scale R. Numerical results of the field configuration, energy, and charge After this qualitative analysis of the gauged Q-ball solution, we numerically solve Eqs. (2.7), (2.8), and (2.9) with the boundary conditions (2.22). The first boundary condition is necessary so that the terms do not become singular at ρ = 0, and the latter is necessary because the energy density E and charge density (ν − α²A)Φ² should be integrable over the infinite spatial volume, with a finite integral. It is convenient to introduce dimensionless quantities for the total energy and charge of the gauged Q-ball, which can be calculated directly once the numerical solutions of the corresponding differential equations are found. We use the relaxation method [55,56] to solve the coupled second-order ordinary differential equations, Eqs. (2.7), (2.8) and (2.9), with the boundary conditions Eq. (2.22). The relaxation method solves boundary value problems by updating trial functions on a grid in an iterative way. As an example, we fix k = 3.5 and scan over all of the solutions at a given value of α. The results can be seen in figure 1. The frequency ν first decreases in the direction of the arrow. We call this the "first branch", where the backreaction of the gauge field is small. The solutions then turn onto the "second branch" as ν increases and the gauge field dominates. Contrary to the global Q-balls, the parameter ν does not uniquely determine the charge and energy of the gauged Q-balls. For the global case, the energy and charge increase as ν approaches zero. However, in the case of gauged Q-balls, ν is replaced by (ν − α²A). Then on the second branch, where the gauge field A dominates, ν has to increase in order to satisfy (ν − α²A) > 0. We choose four specific solutions P1, P2, P3, P4 in figure 1 for α = 1.0, which are marked by the purple triangles, and the corresponding numerical profiles of A, Φ and H are shown in figure 2. We can see that as the value of A(0) of the gauged Q-ball becomes larger, the Higgs field value inside the Q-ball is closer to zero. In fact, when the value of A(0) increases, the radius, charge and energy also increase. In other words, the Higgs value is effectively zero inside large gauged Q-balls. The total charge of gauged Q-balls for different values of the gauge coupling α is shown in the left panel of figure 3. It can be seen that the charge of the gauged Q-balls is finite at nonzero α, whereas for global Q-balls the charge is unbounded from above. In order to obtain gauged Q-balls with relatively large charge, the gauge coupling has to be small enough. We can define two typical radii for the Higgs field H(ρ) and the Q-ball field Φ(ρ), respectively: ρ_⋆ is defined by H(ρ_⋆) = 1/2 and ρ_b is defined by Φ(ρ_b) = Φ(0)/2. These two radii are shown in the right panel of figure 3.
We can see that ρ_⋆ is generally larger than ρ_b. One may prefer to call ρ_⋆ the radius of the "Higgs ball". Hereafter, we use ρ_⋆ to represent the Q-ball radius, which defines the charge and energy of gauged Q-balls. 3 Basic properties of gauged Q-balls It is well known that for non-gauged global Q-balls the relation dE/dQ = ω holds. Here we show that this also holds for gauged Q-balls in the FLSM model. From Eq. (2.10), we obtain an expression for dE. After integrating ∂_ρΦ and ∂_ρH by parts and using Eqs. (2.8) and (2.9), we get Eq. (3.2). In the third line we have integrated ∂_ρA by parts and used Eq. (2.7); the integral then vanishes. Finally, dE/dQ = ω. Stability of gauged Q-balls The stability of Q-balls is an important criterion for judging whether they can serve as a DM candidate. Unlike for global Q-balls, the stability of gauged Q-balls is still under discussion. In this subsection, we systematically analyze four stability criteria of gauged Q-balls and show the viable parameter space of stable gauged Q-balls. Quantum mechanical stability Quantum mechanical stability is satisfied if E < m_ϕ Q. This means that the gauged Q-balls are stable against decay into free scalar particles. It should be noted that if the Q-ball has decay channels into other fundamental scalar particles with masses m_i smaller than m_ϕ, we need to replace m_ϕ by m_i in Eq. (3.5). The decay of global or gauged Q-balls has been discussed in several works [40,[57][58][59]. When the effective energy of the DM particles inside the gauged Q-balls, π/r_⋆ ∝ (ω − g Ã_t), is larger than the masses of the decay products, the decay process is kinematically allowed. If the Q-ball radius r_⋆ is large enough, or (ω − g Ã_t) is small, the gauged Q-balls do not decay into other daughter particles and are thus stable. The ratio Ẽ/Q̃ is shown in figure 4. As the frequency ν → k on the first branch, there is a region of parameter space where Ẽ/Q̃ > 1. This implies the existence of a minimal Q-ball charge Q̃_s, defined by Ẽ(Q̃_s)/Q̃_s = 1, for quantum mechanically stable gauged Q-balls. We can see from figure 4 that for global Q-balls Ẽ/Q̃ decreases with growing Q̃ when Ẽ/Q̃ < 1. The branch with Ẽ/Q̃ > 1 corresponds to ν → k. So the global Q-balls are quantum mechanically stable for ν ≪ k and Q̃ > Q̃_s. One might worry that the gauged Q-ball would lose quantum mechanical stability at large charge, because the dark electrostatic energy is proportional to Q̃² and thus Ẽ/Q̃ is proportional to Q̃. However, we find that the gauged Q-balls are always quantum mechanically stable on the second branch, where the gauge field dominates, because the charge of the gauged Q-balls must be finite. Stress stability We now investigate the effects of the electrostatic repulsion on the stability of the gauged Q-balls. In Ref. [53], the authors pointed out that, just like for hadrons, a necessary condition for stability of the configuration of gauged Q-balls is the balance of the internal forces, called the von Laue condition [60,61], ∫_0^∞ dr r² p(r) = 0. (3.6) Here p(r) is the radial distribution of the pressure inside the Q-ball, which can be extracted from the energy-momentum tensor by using the parametrization of Refs. [62,63]; s(r) is the traceless part, which yields the anisotropy of the pressure (shear forces). This kind of stability has also been studied for global Q-balls [63,64].
One stronger, local criterion is that the normal force per unit area acting on an infinitesimal area element at a distance r must be directed outward [64,65]. This is a necessary but not sufficient condition for stability. Using the rescaled parameters, we have the expressions for p(ρ) and s(ρ) in the FLSM model, from which we obtain the combination F(ρ) of Eq. (3.10). We briefly discuss the stress stability of global Q-balls by taking α → 0 in Eq. (3.10). For global Q-balls, Eq. (3.11) holds, from which we get Eq. (3.12). The EoM for Φ and H in the FLSM model for α = 0 read as in Eq. (3.13). Substituting Eq. (3.13) into Eq. (3.12), we find that F(ρ) is a decreasing function of ρ. Because F(ρ) must satisfy F(ρ) → 0 at large ρ, we conclude that F(ρ) is always positive. The global Q-balls are therefore always stable under the stress stability criterion. Based on this, it can also be expected that gauged Q-balls are stable under stress stability on the first branch, where the backreaction from the gauge field is small enough. For gauged Q-balls, we can first consider one extreme case. At the end point where (ν − α²A) → 0 on the second branch, Φ is finite, and from Eq. (3.10) we can see that F becomes negative, as H ≈ 0 inside gauged Q-balls in the large-ball (thin-wall) limit. This implies that gauged Q-balls with the maximal value of the gauge field, A = ν/α², inevitably break the stress stability. We also expect gauged Q-balls to be unstable under the stress stability criterion on the second branch, where the gauge field dominates. We verify this using numerical calculations. In the left panel of figure 5, we show the profile of F(ρ) for the four marked points in figure 1, where α = 1.0. We can see that F(ρ) has no nodes for P1 and P2 on the first branch. On the other hand, negative values of F(ρ) exist on the second branch. The gauged Q-balls on the second branch, where the gauge field dominates, are unstable. F(ρ) behaves similarly to Eq. (3.15) at ρ = 0 when (ν − α²A) → 0, which can also be seen from P4 in figure 5. It should be noted that the inequality (3.8) relies on the approximation that Q-balls behave as a continuous medium, which deserves further discussion. In the right panel we plot F(ρ) at the transition point between the first and second branches for different values of the gauge coupling α. The values of α are set to 0.0001 − 0.7. It can be seen that they are all unstable under the stress stability criterion. This implies that the solutions on the second branch, where the gauge field becomes larger, are also unstable [53]. The conclusion does not change qualitatively for different values of α. Stress stability is the most stringent stability criterion in this work. We are not sure whether a gauged Q-ball on the second branch would decay into free particles or into smaller Q-balls. It is therefore meaningful to explore how to evade the strong constraints from stress stability, in other words, how to get large gauged Q-balls with large gauge coupling. One may consider, for example, a two-scalar case where the two scalars ϕ and ψ possess opposite charges. If the electrostatic fields produced by the two scalars cancel each other, which requires a relation between g and g′, the gauge couplings of ϕ and ψ respectively, the electric neutrality of the interior of the gauged Q-balls is guaranteed [66,67]. Such gauged Q-balls avoid the electric repulsion which leads to the stress instability, even for large gauge coupling. We expect these Q-balls can also be a DM candidate, because they behave as ordinary global Q-balls.
Stability against fission For non-gauged global Q-balls the stability criterion against fission takes the form d²E/dQ² < 0. This clearly leads to dω/dQ < 0 when E(0) = 0. However, it may not hold everywhere for gauged Q-balls due to the presence of the gauge potential. In Ref. [35] the authors gave a detailed discussion of the stability against fission of U(1) gauged Q-balls. They pointed out that no conclusion can be drawn about the stability against fission for gauged Q-balls on the second branch. Nevertheless, it has been shown that the gauged Q-balls on the first branch are generally stable, because the backreaction of the gauge field is generally small. Classical stability The problem of classical stability of U(1) gauged Q-balls is discussed in detail in Ref. [68]. The classical stability criterion was first derived in [69] for one-field Q-balls and was discussed for the model with two scalar fields in [30]. The proofs of Refs. [30,69] were based on examining the properties of the energy functional of the system. Instead, the examination of Ref. [68] is based on the Vakhitov-Kolokolov method [70,71], which utilizes the linearized EoM for perturbations above the background solution. We only consider spherical perturbations of the gauged Q-ball. We adopt the following ansatz: ϕ(t, r) = e^{−iωt} f(r) + e^{−iωt} e^{γt} (u(r) + il(r)), Ã_t(t, r) = Ã_t(r) + e^{γt} a_0(r), h(t, r) = h(r) + e^{γt} σ(r), where f(r), Ã_t(r) and h(r) are the background solutions, and u(r), l(r), a_0(r) and σ(r) are the perturbations on the background. We then obtain the linearized EoM, Eq. (3.19), with the corresponding boundary conditions. We want to obtain the parameter γ, which describes the growth of the perturbations u, l, a_0, σ. A classically unstable mode corresponds to γ > 0. This can be done by using the shooting method of Ref. [72]. We introduce four basis solutions Ψ^{(i)}(r) = (u^{(i)}, l^{(i)}, a_0^{(i)}, σ^{(i)}), i = 1, 2, 3, 4, which satisfy the Neumann boundary conditions dΨ^{(i)}/dr|_{r=0} = 0 and the Dirichlet boundary conditions Ψ^{(1)}(0) = (1, 0, 0, 0), Ψ^{(2)}(0) = (0, 1, 0, 0), Ψ^{(3)}(0) = (0, 0, 1, 0) and Ψ^{(4)}(0) = (0, 0, 0, 1). We then integrate Eqs. (3.19) numerically to find the values of Ψ^{(i)} at large r = r_∞. Recall that we are searching for a specific solution Ψ(r) that decays at r_∞; Eq. (3.22) has nontrivial solutions only if the boundary determinant D_c = 0. In figure 6 we plot log|D_c| as a function of log_10 γ, where we choose α² = 10⁻⁵ and k = 3.5. We found that there is no classically unstable mode for ν = 3.45, as there is no solution with γ > 0. This also holds for the other solutions of U(1) gauged Q-balls, except for ν → k on the first branch. We found one unstable mode for ν = 3.49. In fact, in the limit ν → k, the contribution of the gauge field can be neglected, and one expects the situation to be similar to the non-gauged Q-ball, where dQ/dω > 0 indicates the existence of a classically unstable mode. This is also discussed in Refs. [30,68]. However, we usually do not have to worry about this, because this region has already been excluded by the quantum instability. In Ref. [54], the authors have shown that gauged Q-balls on the second branch with small gauge coupling are classically unstable with respect to axisymmetric perturbations. This strengthens our confidence that the gauged Q-ball on the second branch is unstable. That region is almost entirely covered by the stress stability criterion.
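The determinant scan just described can be organized compactly. The sketch below is a generic skeleton of the method, not the paper's exact linearized system: `rhs` encodes a stand-in linear system of four second-order equations reduced to first order (in practice its coupling matrix is built from the background profiles f(r), Ã_t(r), h(r)), and `Dc(gamma)` integrates the four basis solutions and returns the boundary determinant. The placeholder coefficients are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

R0, RINF = 1e-3, 30.0   # small-r start and large-r cutoff

def rhs(r, y, gamma):
    """First-order form of four coupled linear 2nd-order ODEs.

    y = (u, l, a0, sigma, u', l', a0', sigma').  The mass/coupling
    matrix below is a hypothetical stand-in; in a real run it is built
    from the gauged Q-ball background solution.
    """
    v, dv = y[:4], y[4:]
    M = np.diag([1.0 + gamma**2, 1.0 + gamma**2, 0.5, 2.0])  # placeholder
    ddv = M @ v - (2.0 / r) * dv        # u'' + (2/r) u' = M u
    return np.concatenate([dv, ddv])

def Dc(gamma):
    """Boundary determinant: it vanishes iff a normalizable mode exists."""
    cols = []
    for i in range(4):
        y0 = np.zeros(8)
        y0[i] = 1.0                      # Psi^(i)(0) = unit vector, Psi'(0) = 0
        sol = solve_ivp(rhs, (R0, RINF), y0, args=(gamma,),
                        rtol=1e-8, atol=1e-10)
        cols.append(sol.y[:4, -1])       # field values at r_inf
    return np.linalg.det(np.array(cols).T)

# Scan gamma > 0: a sign change (or a sharp dip of log|Dc|) flags an
# unstable mode with growth rate gamma.
for g in np.logspace(-2, 0, 9):
    print(f"gamma = {g:.3f}   log10|Dc| = {np.log10(abs(Dc(g))):.2f}")
```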
In summary, the parameter space of gauged Q-balls is shown in figure 7. The red line represents the region where the gauged Q-ball is dominated by the gauge field, such that it is unstable under the stress stability criterion. The green line represents the region where the gauged Q-ball is unstable under the quantum stability criterion. We only plot the quantum mechanical stability and stress stability criteria, because they cover the regions where the gauged Q-balls are classically unstable and unstable against fission, respectively. The gauged Q-balls are stable only in the region ν ∈ [ν_min, ν_max]. The value ν_min corresponds to the maximal charge of gauged Q-balls and is close to the transition point between the first and second branches, which is marked by the purple point. In order to form gauged Q-balls of a given charge at a given gauge coupling, the charge has to be smaller than the maximal charge. It should also be noted that, if the dark gauge boson of the gauged Q-ball kinetically mixes with SM photons or Z bosons, this would produce distinct experimental signatures which can constrain the Q-ball charge and couplings. Thin-wall approximation As the radii of gauged Q-balls become large, the width of the profile can be neglected and the gauged Q-balls can be described by the thin-wall approximation. The Higgs profile can be approximately viewed as a step function: the vacuum value is equal to zero inside and v_0 outside the Q-balls. The derivative of the Higgs field only contributes to the surface term of gauged Q-balls, which is negligible when the radius of the ball is large. The problem is then reduced to one with two fields, ϕ and Ã_µ. We will discuss the simplified piecewise model and show that it behaves similarly to the FLSM model. Using the mapping method introduced in Ref. [73], we give some semi-analytic results and analytic evaluations of the maximal charge of gauged Q-balls. Piecewise model If the Higgs field is approximately h(ρ) = v_0 Θ(ρ − ρ_⋆), then we can approximately view the complex scalar as moving in a piecewise parabolic potential [25,74,75]. The Lagrangian density can be further approximated accordingly, where Θ(x) is the Heaviside step function. In our case, the potential takes this piecewise form. Note that v is chosen so that V(ϕ) is continuous at ϕ†ϕ = v². This can be understood from the EoM of the Higgs field in the FLSM model, where we use the definition ϕ(r, t) = f(r)e^{−iωt} and the prime denotes a derivative with respect to r. If the Higgs field is approximately a step function, we can neglect its derivatives; then h ≈ v_0 and h ≈ 0 outside and inside the Q-ball, respectively, and we can assume f(r) ≈ 0 outside the bubble. The consistency between the piecewise model and the Friedberg-Lee-Sirlin model has been discussed in Ref. [75], and the classical stability of gauged Q-balls in the piecewise model has been studied in Refs. [68,74]. The EoM of gauged Q-balls in the piecewise model, after the rescaling of Eq. (2.19), are Eqs. (4.5) and (4.6). These equations are easier to solve than those of the FLSM model by using the undershooting/overshooting method. The gauged Q-ball energy and charge read as in Eqs. (4.7), which have a different form from the FLSM model. We solve Eqs. (4.5) and (4.6) numerically to get the profiles of the complex field and the gauge field. Then we can get the charge and energy of gauged Q-balls by substituting them into Eqs. (4.7). These numerical results are shown in figure 8. We can see that the piecewise model fits the FLSM model well when the gauged Q-ball radius is large. Deviations appear at small ρ_⋆, at which the Higgs field value inside is not approximately zero.
Mapping gauged Q-balls We make the further assumption that the complex scalar field can also be viewed as a step function, Φ(ρ) = Φ_0 (1 − Θ(ρ − ρ_b)), where we denote Φ_0 = Φ(0) and ρ_b is defined by Φ(ρ_b) = Φ_0/2. The profile of the gauge field is then given by Eq. (4.8) [27]. The Q-ball radius is defined by Φ(ρ_⋆) = 1/(2k). In Ref. [73], the authors propose a mapping between the gauged Q-ball and the global Q-ball, Eq. (4.9), where ν_g is the value of the frequency for the global Q-ball with the same ρ_b. The profile of A is then given by Eq. (4.8). This relation holds even for cases beyond the thin-wall approximation. Interestingly, the global cases in the piecewise model have analytic solutions [74], with an interior profile proportional to sin(ν_gρ)/ρ. Using this profile and A(ρ) from Eq. (4.8), we have further semi-analytic results for the charge, Eq. (4.12). In the limit α → 0 and αΦ_0ρ_b → 0, because x coth x ∼ 1 + x²/3 + O(x⁴) for x → 0, we have Q̃ ∼ ν_g Φ_0² ρ_b³. In this case, when ν_g → 0, from Eq. (4.11) we have ν_gρ_⋆ ≃ π and sin(ν_gρ_⋆) ≃ ν_g/k, which lead to Φ_0 ≃ π/(2ν_g) ≃ ρ_⋆/2. From 2 sin(ν_gρ_b) = ν_gρ_b, we have ν_g ∼ C_1/ρ_b, and the resulting charge scaling is consistent with the global Q-ball case. Using the same procedure that derives Eq. (2.20), we obtain analytic results for the total energy, Eq. (4.15). The second term comes from the integration over the discontinuous (∂_ρΦ)², using the approximation of energy conservation [76]. In the first limit, αΦ_0ρ_b → 0 with large ρ_b, the last term of Eq. (4.15) vanishes. We then have ν ≃ ν_g, Φ_0 ≃ π/(2ν_g) ≃ ρ_⋆/2 and ρ_⋆ ≃ ρ_b, and the result is just the energy of the global Q-ball. In the opposite limit, αΦ_0ρ_b → ∞, A → Q/ρ, and then, from Eq. (4.8) and Eq. (4.15), the energy of the gauged Q-ball consists of two pieces: the first term is the Coulomb energy and the second term is the potential energy difference between the inside and outside of the Q-ball. The second term is proportional to ρ_b², because in this case the Compton wavelength of the gauge field, 1/(g v_0 Φ_0), inside the Q-ball is much smaller than the Q-ball radius, r_b = ρ_b/m_h. The Q-ball is therefore superconducting. The potential energy is then zero inside as well as outside of the Q-ball, and is nonzero only in a shell around ρ_b [27]. For a given ρ_⋆, we can solve Eq. (4.11) to get the corresponding ν_g. After using Φ_0 = (1/(2k)) ν_gρ_⋆/sin(ν_gρ_⋆), 2 sin(ν_gρ_b) = ν_gρ_b and Eq. (4.9), we can get Φ_0, ρ_b and ν of the gauged Q-balls. Substituting them into Eqs. (4.12) and (4.15) gives the charge and energy of the gauged Q-balls. These semi-analytic results are shown by the red lines in figure 8. We find the mapping works well for ρ_⋆ ≫ 1 on the first branch, where the profiles of the scalar fields ϕ and h can safely be viewed as step functions. However, the semi-analytic results for the energy do not work well on the second branch, because Φ_0 ≠ (1/(2k)) ν_gρ_⋆/sin(ν_gρ_⋆) in this case. The value of Φ_0 is lower because the gauge potential dominates. This can be seen in figure 1. At the transition point between the first and second branches, the discrepancy between the semi-analytic and numerical results is a factor of O(1).
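The global piecewise building blocks quoted above are easy to evaluate numerically. In the sketch below, the interior/exterior matching condition for ν_g is our reconstruction, ν_g cot(ν_gρ_⋆) = −(k² − ν_g²)^{1/2}, chosen because it reproduces the quoted limits ν_gρ_⋆ → π and sin(ν_gρ_⋆) → ν_g/k as ν_g → 0; it should be checked against the paper's Eq. (4.11). The remaining relations (Φ_0, the half-height radius ρ_b, and the small-α charge scaling) are taken directly from the text.

```python
import numpy as np
from scipy.optimize import brentq

k = 3.5  # rescaled scalar mass, as in the paper's examples

# C1: nonzero root of 2 sin(x) = x, from the half-height definition
# Phi(rho_b) = Phi_0 / 2 with an interior profile ~ sin(nu_g rho)/rho.
C1 = brentq(lambda x: 2*np.sin(x) - x, 1.0, 2.5)

def nu_g(rho_star):
    """Global-Q-ball frequency for a given radius rho_star (assumed
    matching condition: nu cot(nu rho_star) = -sqrt(k^2 - nu^2))."""
    f = lambda nu: nu/np.tan(nu*rho_star) + np.sqrt(k**2 - nu**2)
    # the root sits between pi/2 and pi in units of 1/rho_star
    return brentq(f, 0.5*np.pi/rho_star + 1e-9, np.pi/rho_star - 1e-9)

for rho_star in (5.0, 10.0, 40.0):
    nu = nu_g(rho_star)
    Phi0 = nu*rho_star/(2*k*np.sin(nu*rho_star))   # quoted relation
    rho_b = C1/nu                                   # from 2 sin(x) = x
    Q = nu * Phi0**2 * rho_b**3                     # small-alpha scaling
    print(f"rho*={rho_star:5.1f}  nu_g={nu:.4f}  nu*rho*={nu*rho_star:.4f}"
          f"  Phi0={Phi0:8.2f}  Q~{Q:11.1f}")
```

As ρ_⋆ grows, the printout shows ν_gρ_⋆ approaching π and Φ_0 approaching ρ_⋆/2, the thin-wall limits used in the next subsection.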
Maximal charge and energy of gauged Q-balls: analytic approximations We now give analytic evaluations of the maximal charge and maximal energy of gauged Q-balls for a given value of the gauge coupling. Because the gauged Q-balls are unstable under the stress stability criterion on the second branch, where the gauge potential dominates, and ν_min is very close to the transition point, the maximal charge is approximately defined by dν/dρ_⋆|_{ρ_⋆=ρ_max} = 0. In the large Q-ball limit ν_g → 0, we have ρ_b = C_1/ν_g and Φ_0 ≃ ρ_⋆/2. Then, from Eq. (4.9), we obtain an expression for ν, where we used ρ_⋆ ≃ π/ν_g for ν_g → 0. From dν/dρ_⋆|_{ρ_⋆=ρ_max} = 0 when the charge is maximal, we obtain C_1 α ρ_max²/(2π) = C_2, with C_2 ≈ 1.08866, and αΦ_0ρ_b ≃ C_2, which lies between the two limiting cases of Eq. (4.16) and Eq. (4.18). We can also get the minimal frequency and the corresponding frequency for the global case, ν_{g,min} = π/ρ_max = √(πC_1α/(2C_2)). We can see that ν_min/ν_{g,min} = C_2 coth(C_2) ≈ 1.367, which is independent of the Q-ball size. This implies that, as the solutions on the second branch are unstable, the gauged Q-ball lives on the first branch and is close to the global case. Finally, the maximal charge is given by Eq. (4.22), from which we can see that the charge is unbounded from above in the global case, where α = 0. The maximal energy is given by Eq. (4.24). This also gives Ẽ_max ∝ ρ_max³ ∝ Q̃_max^{3/4}, as expected in the global case. The analytic results for Q_max = (2π/λ_h) Q̃_max and E_max = (2πm_ϕ/λ_h) Ẽ_max are shown by the red lines in figure 9, and we can see that the maximal charge Q_max fits the semi-analytic and numerical results well. However, the analytic and semi-analytic E_max are about 5-6 times larger than the numerical results, due to the uncertainty in the value of Φ_0.
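As a quick numerical cross-check of the constants entering these estimates (assuming, as above, that C_1 is the nonzero root of 2 sin x = x, while C_2 ≈ 1.08866 is simply taken from the text), the following snippet reproduces the quoted ratio ν_min/ν_{g,min} ≈ 1.367 and the α^{−1/2} scaling of ρ_max:

```python
import numpy as np
from scipy.optimize import brentq

# C1: nonzero root of 2 sin(x) = x (half-height condition).
C1 = brentq(lambda x: 2*np.sin(x) - x, 1.0, 2.5)
C2 = 1.08866  # constant quoted in the text

print(f"C1 = {C1:.5f}")                        # ~1.8955
print(f"C2*coth(C2) = {C2/np.tanh(C2):.3f}")   # ~1.367, the nu_min/nu_gmin ratio

# rho_max follows from C1*alpha*rho_max^2/(2*pi) = C2, i.e. rho_max ~ alpha^(-1/2):
for alpha in (1e-2, 1e-4, 1e-6):
    rho_max = np.sqrt(2*np.pi*C2/(C1*alpha))
    nu_gmin = np.pi/rho_max
    print(f"alpha={alpha:.0e}  rho_max={rho_max:10.1f}  nu_gmin={nu_gmin:.2e}")
```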
Gauged Q-ball DM from electroweak FOPT In the discussions above, we have shown that the gauged Q-ball can be stable and hence a DM candidate. In this section, we discuss the detailed production mechanism of gauged Q-ball DM from an electroweak FOPT in the early Universe. We consider the minimal Higgs-extended model with a singlet scalar field, which can trigger a strong FOPT [51,77,78]. The discussion can also be applied to other FOPT models. The phase transition dynamics can be modified by introducing new degrees of freedom beyond the Standard Model. Electroweak FOPT The electroweak FOPT dynamics is determined by the finite-temperature effective potential V_eff(h, T) of Eq. (5.1), where h is the real component of the SM Higgs doublet as defined in Eq. (2.1). The first term, V_tree(h) = λ_h(h² − v_0²)²/4, is the tree-level SM Higgs potential. V_CW(h) is the one-loop quantum correction to the effective potential, i.e., the Coleman-Weinberg potential [79]. Using the on-shell renormalization scheme, we have Eq. (5.2), where g_i is the number of degrees of freedom of each particle, F_i = 1 (0) for fermions (bosons), and m_i(h) are the masses for i = t, W, Z, h, ϕ. The finite-temperature correction term is given by Eq. (5.3), where the integral with the −/+ sign denotes the contribution of bosons/fermions, and Π_i are the thermal masses of species i. Here, we use the daisy resummation scheme proposed by Dolan and Jackiw [80]. It is worth noticing that only the scalar fields and the longitudinal components of the gauge fields have nonzero Π_i. The thermal masses of the scalar fields involve the gauge couplings g and g′ of SU(2)_L and U(1)_Y, respectively. For the longitudinal components of the gauge bosons, we have Eq. (5.5). Hence, for the longitudinal components of the gauge bosons, the physical masses are the eigenvalues of the corresponding mass matrix. Requiring the ordinary electroweak vacuum, with h = v_0 at zero temperature, fixes the remaining parameters. The phase transition is the process of symmetry breaking in the early Universe. Through a process of bubble nucleation, growth and merging, the Universe transits from a metastable state into a stable state. The critical temperature T_c is defined as the temperature at which the two minima of the effective potential are degenerate, V_eff(v(T_c), T_c) = V_eff(0, T_c), with v(T_c) being the vacuum value in the true vacuum at T = T_c. Bubbles begin to nucleate when the temperature drops to the nucleation temperature T_n. The nucleation rate of bubbles is given by Eq. (5.8), with S_3(T) being the action of the O(3)-symmetric bounce solution [81]. The nucleation temperature T_n is typically defined by the condition that one bubble is nucleated per horizon volume, where H(T) is the Hubble expansion rate, with M_pl = 1.22 × 10¹⁹ GeV the Planck mass and g_⋆ the number of relativistic degrees of freedom at temperature T. ∆V_eff(T) is the potential energy difference between the false and true vacuum, ∆V_eff(T) = V_eff(0, T) − V_eff(v(T), T). The potential difference between the inside and outside of the bubbles causes the bubbles to expand, so that the volume of the false vacuum diminishes with time. The probability of finding a point in the false vacuum reads p(T) = e^{−I(T)}, where I(T) is the fraction of vacuum converted to the true vacuum [82,83], given by Eq. (5.12), where v_w is the bubble wall velocity, evaluated in the radiation-dominated Universe [83]. The percolation temperature T_p is defined by I(T_p) = 0.34 [82]; this corresponds to p(T_p) ≈ 0.71, i.e., about 29% of the false vacuum has been converted to the true vacuum at T_p. The percolation temperature T_p is also the temperature at which the GW signal is produced by the FOPT [83][84][85][86][87]. We use the following definition of the phase transition strength at the percolation temperature, Eq. (5.14), where ρ_r = π²g_⋆T⁴/30 is the radiation energy density and ∆V_eff is the potential difference between the false and the true vacua. The inverse time duration β at the percolation temperature is defined in Eq. (5.15). We use CosmoTransitions [88] to calculate the phase transition dynamics. In the left panel of figure 10, we show the three typical temperatures of the FOPT process in the minimal SM plus singlet model, and in the right panel of figure 10 we show the wash-out parameter v(T)/T at these different temperatures.
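The bosonic and fermionic thermal integrals entering the finite-temperature correction above are standard objects. A minimal numerical implementation, assuming the conventional definition J_{B/F}(y²) = ∫₀^∞ dx x² ln(1 ∓ e^{−√(x²+y²)}) (which should match the form intended in Eq. (5.3), where V_T carries a prefactor g_i T⁴/(2π²) per species), is sketched below together with the massless-limit checks:

```python
import numpy as np
from scipy.integrate import quad

def J(y2, fermion=False):
    """Thermal integral J_B (bosons) or J_F (fermions).

    J(y^2) = int_0^inf dx x^2 ln(1 -/+ exp(-sqrt(x^2 + y2))),
    with '-' for bosons and '+' for fermions; y2 = m^2(h)/T^2.
    """
    s = 1.0 if fermion else -1.0
    f = lambda x: x*x*np.log(1.0 + s*np.exp(-np.sqrt(x*x + y2)))
    val, _ = quad(f, 1e-8, 50.0, limit=200)
    return val

# Massless limits: J_B(0) = -pi^4/45, J_F(0) = 7*pi^4/360.
print(J(0.0), -np.pi**4/45)                 # ~ -2.1646 in both cases
print(J(0.0, fermion=True), 7*np.pi**4/360) # ~ +1.8941 in both cases
```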
Bubble wall filtering during FOPT As particles gain mass inside the bubble, energy conservation implies that only high-energy particles can pass through the bubble walls; the others are reflected. The penetration condition in the bubble wall frame reads p_z^w > ∆m [19,20], where p_z^w is the particle's z-direction momentum in the bubble wall frame and ∆m is the mass difference between the true and false vacuum, with m_i^in and m_i^out the particle masses in the true and false vacuum, respectively [89]. We set m_i^out to zero in this work. The particle flux coming from the false vacuum per unit area and unit time can be written as in Eq. (5.17) [19,20], where g_i is the number of degrees of freedom of the particle, p^w is the magnitude of the three-momentum of the particle, and f_i^eq is the equilibrium distribution outside the bubble in the bubble wall frame, with ∓ for bosons and fermions, respectively. γ_w = 1/√(1 − v_w²) is the Lorentz boost factor. The particle number density inside the bubble, n_i^in, in the bubble center frame can then be written as in Eq. (5.19). Assuming the particles are massless in the false vacuum, we can integrate Eq. (5.17) analytically and obtain Eq. (5.20) [20,90], where we have used the Maxwell-Boltzmann approximation for the DM distribution. One can see that as v_w → 1, Eq. (5.20) approaches n_i^out = g_i T³/π², which is approximately the equilibrium number density for the Boltzmann distribution outside the bubble. The fraction of particles i that are trapped in the false vacuum is defined by Eq. (5.21). In our case, the complex scalars ϕ and ϕ† are trapped in the false vacuum by the filtering effect. The symmetric part annihilates away through the process ϕ + ϕ† → h + h, and only the asymmetric part survives and composes the charge of the gauged Q-balls. It can easily be seen that the penetrating particle number density is sensitive to the bubble wall velocity, as the latter appears in the exponent of Eq. (5.20). The precise calculation of the bubble wall velocity [91][92][93][94][95][96][97][98] is beyond the scope of this work, and we treat it as a free parameter.
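To make the wall-velocity sensitivity concrete, the sketch below numerically integrates a flux of this type for a Maxwell-Boltzmann distribution. The conventions (wall at rest, incoming plasma boosted by v_w, penetration for p_z > ∆m, and n^in = J/(γ_w v_w)) are our assumptions for illustration and should be checked against Eqs. (5.16)-(5.21); the v_w → 1 limit reproduces g T³/π², as quoted in the text.

```python
import numpy as np
from scipy.integrate import dblquad

def n_in(dm, T=1.0, vw=0.5, g=1.0):
    """Number density of penetrating particles (bubble-centre frame).

    Wall frame: plasma streams toward the wall with speed vw, with
    Maxwell-Boltzmann distribution f = exp(-gamma_w (E - vw*pz)/T) for
    momentum pz > 0 toward the bubble.  Particles penetrate if pz > dm
    (massless outside); assumed normalization: n_in = flux/(gamma_w*vw).
    """
    gw = 1.0/np.sqrt(1.0 - vw**2)
    def integrand(pt, pz):                       # d^3p -> pt dpt dpz/(4 pi^2)
        E = np.sqrt(pz*pz + pt*pt)
        return pt/(4*np.pi**2) * (pz/E) * np.exp(-gw*(E - vw*pz)/T)
    cut = 30.0*T*gw + dm                         # effective integration cutoff
    flux, _ = dblquad(integrand, dm, cut, 0.0, cut)
    return g*flux/(gw*vw)

n_eq = 1.0/np.pi**2                              # g*T^3/pi^2 with g = T = 1
for vw in (0.3, 0.6, 0.9, 0.99):
    F_trap = 1.0 - n_in(dm=5.0, vw=vw)/n_eq      # for a mass gap of 5 T
    print(f"v_w = {vw:4.2f}   F_trap = {F_trap:.4f}")
```

Slow walls trap essentially all of the would-be Q-ball constituents, while near-luminal walls let most of them leak into the true vacuum.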
Charge of Q-ball in the electroweak FOPT We can define T_⋆ as the temperature at which the false-vacuum (old-phase) remnants can still form an infinitely connected "cluster", in analogy with the definition of the percolation temperature [17]. T_⋆ satisfies p(T_⋆) = 1 − p(T_p) = 0.29, which corresponds to I(T_⋆) = 1.24. T_⋆ is the temperature at which Q-balls start to form. Below T_⋆, the false vacuum remnants formed during the FOPT may further fragment into smaller pieces. Ultimately, these pieces shrink into Q-balls if there is a nonzero primordial charge asymmetry. We can define the critical radius, r_c, at which the remnant shrinks to an insignificant size before another true vacuum bubble forms within it [14]. This means [17] Γ(T_⋆) (4π/3) r_c³ ∆t ∼ 1, (5.22) where ∆t = r_c/v_w is the time needed for the shrinking. The number density of the remnants, n_Q^⋆, follows from the condition n_Q^⋆ (4π/3) r_c³ = p(T_⋆) ≃ 0.29. The formation of Q-balls requires a nonzero conserved primordial charge, which comes from the primordial DM asymmetry η_ϕ = (n_ϕ − n_ϕ†)/s, with entropy density s = 2π²g_⋆T³/45. If the DM asymmetry were produced by thermal freeze-out in the early Universe, it would be bounded from above by the equilibrium value, where we have used n_ϕ^eq(T) = 2ζ(3)T³/π², with ζ(3) = 1.20206 the value of the Riemann zeta function ζ(s) at s = 3. In order to evade this constraint, we assume the DM is produced by some non-thermal process such as decay. In this work, we do not specify the origin of the primordial charge of the complex scalar ϕ. In the early Universe, at higher temperatures, new physical processes beyond the Standard Model may have occurred. The would-be Q-ball DM particles ϕ may carry their own conserved charge and be created in asymmetric decays of heavier particles [99]. For example, a heavy Majorana neutrino could decay into a light fermion and a scalar, via N → χ + ϕ and N → χ + ϕ†, where χ is a fermion [100]. Assuming the process is CP-violating, the decay rates of these two channels differ at loop level (this is similar to the process of leptogenesis). The asymmetry between ϕ and ϕ† thus appears and is retained until the phase transition considered in this work. A large DM asymmetry can be discussed in a manner similar to the large lepton asymmetry in leptogenesis scenarios. A recent measurement of the ⁴He abundance from the EMPRESS experiment suggests a large degeneracy parameter of the electron neutrino [101], ξ_e = 0.05^{+0.03}_{−0.02}. (5.25) Owing to neutrino oscillations among the three flavors, the neutrinos of all three flavors have the same amount of asymmetry, and the total lepton asymmetry then follows, where g_BBN = 10.75 is the relativistic number of degrees of freedom at the epoch of big bang nucleosynthesis. The large lepton asymmetry may come from low-scale leptogenesis [102,103], the Affleck-Dine mechanism [104] or L-ball decay [105]. In a remnant, the trapped Q-charge is given by Eq. (5.27). In figure 11, we show the charge of the gauged Q-ball DM formed during the FOPT for different values of the bubble wall velocity. We have chosen η_ϕ = η_L. When λ_ϕh is larger, both Γ(T_⋆) and the Q-ball number density are suppressed, so the charge is larger at a given η_ϕ. Since n_Q/s and Q do not change in the adiabatic Universe, their present values follow directly, with s_0 = 2891.2 cm⁻³ the entropy density at the current time [106]. Taking the approximation T_⋆ ≈ T_p, the bounce action can be approximately written in closed form. In reality, the mass or charge of the Q-balls should have a distribution, which has been discussed in detail in Ref. [108]. The false vacuum size distribution is given by Eq. (15) of Ref. [108]. However, it has been shown in Eq. (24) of Ref. [108] that the average size of Q-balls during a FOPT is still approximately equal to v_w/β, just as in the monochromatic case. Based on this, for simplicity, we can still use the average value instead of the distribution of the false vacuum. In order to obtain a stable gauged Q-ball with a given value of charge, we must impose Q_max > Q, so the gauge coupling g has to satisfy g < 1.28 × 10⁻¹⁸ (T_⋆/100 GeV)⁻¹, up to the remaining model-dependent factors.
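Putting the percolation-era pieces together, a back-of-the-envelope evaluation can be sketched as below. The relations for r_c and n_Q^⋆ are the ones stated above; the trapped-charge formula Q ≈ F_trap η_ϕ s(T_⋆) (4π/3) r_c³ is our assumption for what Eq. (5.27) expresses, and all input numbers are illustrative rather than the paper's benchmark values.

```python
import numpy as np

def remnant_charge(Gamma, T, vw, eta_phi, F_trap, gstar=106.75):
    """Estimate the Q-ball charge per false-vacuum remnant at T_star.

    Gamma   : bubble nucleation rate [GeV^4] at T_star
    T       : temperature [GeV];  vw : bubble wall velocity
    eta_phi : primordial DM asymmetry (n_phi - n_phibar)/s
    F_trap  : fraction of DM particles trapped in the false vacuum
    """
    # critical remnant radius: Gamma * (4pi/3) r_c^3 * (r_c/vw) ~ 1
    r_c = (3.0*vw/(4.0*np.pi*Gamma))**0.25           # [GeV^-1]
    V_c = 4.0*np.pi/3.0 * r_c**3
    n_Q = 0.29/V_c                                   # remnant number density
    s = 2.0*np.pi**2/45.0 * gstar * T**3             # entropy density
    Q = F_trap * eta_phi * s * V_c                   # trapped charge (assumed)
    return r_c, n_Q, Q

# Illustrative inputs chosen so n_Q lands near the ~3e-31 GeV^3 quoted later:
r_c, n_Q, Q = remnant_charge(Gamma=2e-41, T=70.0, vw=0.1,
                             eta_phi=3e-10, F_trap=0.9)
print(f"r_c = {r_c:.2e} GeV^-1, n_Q = {n_Q:.2e} GeV^3, Q = {Q:.2e}")
```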
The freeze-out relic abundance involves the annihilation cross section ⟨σ_anni v_rel⟩ and the Hubble constant today, H_0 [109]. The cross section of the process ϕ + ϕ† ↔ h + h is given in Ref. [110]. In our parameter space, where λ_ϕh is around 7, the relic abundance from freeze-out is approximately Ω_freeze-out h²_100 ≈ 2.35 × 10⁻⁴, so we can neglect the DM produced from thermal freeze-out. The DM relic density also receives a contribution from the penetrated asymmetric component of the DM particles, given by the excess of ϕ over ϕ†. The relic density of Q-balls at present is given by Eq. (5.33), where ρ_c = 3H_0²M_pl²/(8π) is the critical energy density. We have found that the gauged Q-ball is generally a mixed state between Eq. (4.16) and Eq. (4.18), since 0 ≤ αΦ_0ρ_b ≤ C_2. We can therefore write down the energy of a gauged Q-ball at a given charge, Eq. (5.34), where V_0 = (λ_h/4) v_0⁴ is the potential difference between the inside and outside of the gauged Q-balls at zero temperature. The first term on the right-hand side is the zero-point energy of the scalar particles, the second term is the vacuum volume energy inside the Q-ball, and the third term is the electrostatic self-energy. By minimizing this expression with respect to r_⋆, we obtain Eq. (5.35). In the limit of zero gauge coupling, this reduces to the energy of the global Q-ball. Combining these results, we finally arrive at the expression (5.37); Γ(T_⋆) can also be expanded using Γ(T_⋆) ≈ T_⋆⁴ e^{−S_3(T_⋆)/T_⋆} and Eq. (5.28), but we keep the full expression here to give more accurate results. Although the expressions so far are general, we apply them to the minimal SM plus singlet model. We choose λ_ϕh = 6.8, for which T_n = 71.65 GeV and T_p = 68.9 GeV. This value of the portal coupling satisfies the validity of the perturbative analysis, which indicates that the portal coupling should be roughly smaller than 10 [111]. The number density of gauged Q-balls at production is n_Q^⋆ ≃ 3.0 × 10⁻³¹ GeV⁻³. In this case, the value of v(T_p)/T_p is approximately 3.5, and 50% of the DM particles are still trapped inside the false vacuum even at v_w = 0.6. It should be noted, however, that in this case the contribution from the penetrated asymmetric DM dominates, as can be seen from Eq. (5.32). This can be avoided in two ways. One way is to increase the phase transition strength and the corresponding v(T)/T, so that few DM particles penetrate into the true vacuum, resulting in F_ϕ^trap ≈ 1. This can be achieved by introducing new degrees of freedom beyond the Standard Model or by considering a dark FOPT instead of the electroweak FOPT. The other way is model dependent: one can introduce new decay channels through which the penetrated ϕ decays into dark radiation or into SM leptons, which can account for the lepton asymmetry. The decay process does not destroy the stability of the Q-balls as long as π/r_⋆ < m_d, where m_d is the mass of the decay products. In this work, we focus on the gauged Q-ball DM, so we do not consider the penetrated asymmetric DM in detail. We show the gauged Q-ball DM relic density in figure 12. The colored stars represent the values for the gauged Q-ball with maximal charge. The gray region represents the region where the DM is overproduced. We can see that, due to the finiteness of the charge, the gauged Q-balls can explain the whole of the DM at v_w = 0.01 only when the rescaled gauge coupling α ≲ 10⁻¹⁶. The DM relic density is slightly enhanced by the extra electrostatic energy.
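For orientation, the sketch below minimizes a thin-wall energy of the qualitative form described above, E(r) = πQ/r + (4π/3)V_0 r³ + c_E g²Q²/r. The Coulomb coefficient c_E = 3/(20π) assumes a uniformly charged sphere and is our placeholder for the paper's Eq. (5.34); the g → 0 limit can be checked against the standard global thin-wall result E = (4√2 π/3) V_0^{1/4} Q^{3/4}.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def qball_energy(Q, V0, g=0.0, cE=3.0/(20.0*np.pi)):
    """Thin-wall gauged Q-ball mass: minimize E(r) over the radius r.

    E(r) = pi*Q/r               (zero-point energy of Q scalars)
         + (4*pi/3)*V0*r^3      (vacuum volume energy)
         + cE*(g*Q)^2/r         (electrostatic self-energy; uniformly
                                 charged sphere -> cE = 3/(20*pi))
    """
    E = lambda r: np.pi*Q/r + 4.0*np.pi/3.0*V0*r**3 + cE*(g*Q)**2/r
    r0 = (Q/(4.0*V0))**0.25                     # global-case minimum
    res = minimize_scalar(E, bounds=(1e-3*r0, 1e3*r0), method='bounded')
    return res.x, res.fun

Q, V0 = 1.0e20, 0.13/4.0*246.0**4               # V0 = lambda_h v0^4/4 [GeV^4]
r_star, M_Q = qball_energy(Q, V0)
print(f"global: M_Q = {M_Q:.4e} GeV   "
      f"analytic {4*np.sqrt(2)*np.pi/3*V0**0.25*Q**0.75:.4e} GeV")

# Even a tiny gauge coupling stiffens the ball and raises its mass:
for g in (0.0, 1e-9, 3e-9):
    r_star, M_Q = qball_energy(Q, V0, g=g)
    print(f"g = {g:.0e}:  r* = {r_star:.4e} GeV^-1,  M_Q = {M_Q:.4e} GeV")
```

The last loop illustrates the statement above that the electrostatic energy slightly enhances the Q-ball mass, and hence the relic density, at fixed charge.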
In table 1 we choose four benchmark points that satisfy the correct DM relic density and show the corresponding F_ϕ^trap and η_ϕ/η_L. The condition of a strong FOPT leads to a sizable deviation of the triple Higgs coupling, which might be detected via the loop-induced e⁺e⁻ → hZ process [112,113] at future lepton colliders, such as FCC-ee, CEPC, and ILC. One can define δσ_Zh as the fractional change in Zh production relative to the SM prediction at one loop. We list the corresponding δσ_Zh for the four benchmark points in table 1. We can see that the value of the required DM asymmetry η_ϕ is close to the lepton asymmetry η_L. This prompts us to speculate that they have the same origin. Indeed, the DM asymmetry may come from the processes N → χ + ϕ and N → χ + ϕ†. We can assume the dark fermion χ is long-lived and decays suddenly into leptons after the electroweak phase transition, in order to avoid the electroweak sphaleron process. We then expect η_ϕ and η_L to be of the same order. The detailed model building is beyond the scope of this work and we leave it for future studies. The GW spectrum from bubble collisions is given by Eq. (6.2). • Sound wave The contribution from sound waves could be more significant. The formula for the GW spectrum from sound waves is given in [125]; Υ_sw is the suppression factor accounting for the short duration of the sound-wave period, with τ_sw H_p ≈ (8π)^{1/3} v_w (H_p/β) / √(3κ_v α_p/(4 + 4α_p)). (6.6) • Turbulence The formula for the GW spectrum from turbulence is given in [126]; κ_turb represents the efficiency of vacuum energy converted into turbulent flow, where ε is set to 0.1. The total contribution to the GW spectrum is obtained by summing these individual contributions: Ω_GW h²_100 = Ω_co h²_100 + Ω_sw h²_100 + Ω_turb h²_100. (6.11) We show the GW spectra Ω_GW h²_100 for the four benchmark points in figure 14. These four benchmark points are listed in Table 1. We choose λ_ϕh = 6.8 and 7.0, and for each value of λ_ϕh we choose two bubble wall velocities, v_w = 0.1 and 0.6. The colored regions represent the sensitivity curves for the future GW detectors LISA [43] and TianQin [44,45] with a signal-to-noise ratio (SNR) of about 5. We can see that LISA and TianQin could detect this new DM mechanism when the bubble wall velocity is relatively large; in particular, they could detect BP2 and BP4, whose bubble wall velocities are relatively large. Taiji [46], BBO [47], DECIGO [48], and Ultimate-DECIGO [49] could also detect this new DM mechanism through its GW signals. Discussion and conclusion In this work, we have systematically discussed gauged Q-ball DM formed during a FOPT. We have investigated the stability of gauged Q-balls, including quantum stability, stress stability, stability against fission, and classical stability. In contrast to global Q-balls, the gauge interaction restricts the size of stable gauged Q-balls. For a given value of the gauge coupling, stable gauged Q-balls can only be realized in the charge range Q_s < Q < Q_max. The upper limit Q_max and the lower limit Q_s come mainly from the stress stability and quantum stability criteria, respectively. Using the thin-wall approximation, we show that the piecewise model describes the basic properties of gauged Q-balls in the FLSM model well. Based on this, we further give an approximate analytic evaluation of Q_max using the mapping method. We find that the maximal charge scales approximately as Q_max ∝ g⁻², where g is the gauge coupling of the dark U(1) symmetry. The constraint on the value of the gauge coupling g is given by Eq. (5.30) if the gauged Q-balls are produced by a FOPT.
We discuss the relic density of gauged Q-ball DM formed during an electroweak FOPT. Even in the minimal electroweak FOPT model (SM plus singlet), the gauged Q-balls can comprise all of the observed DM. We have found that, in order to satisfy the relic abundance of DM, the required primordial DM asymmetry surprisingly coincides with the observed large lepton number asymmetry. The average charge and mass of gauged Q-ball DM can be varied by modifying the phase transition dynamics or the primordial DM asymmetry. Besides, we give combined constraints on the gauged Q-ball DM from DM direct detection (Mica, XENON1T, Ohya) and astronomical observations (CMB, neutron stars, white dwarfs). The formation process of gauged Q-ball DM during a FOPT also produces phase transition GW signals, which could be detected by future GW experiments such as LISA, TianQin, and Taiji. The phase transition dynamics in the early Universe provide new formation mechanisms for various soliton DM or dynamical DM candidates. For example, it is reasonable to discuss other species of soliton DM formed during a FOPT, such as gauged Fermi-ball DM. The configuration of a Fermi-ball differs from that of a Q-ball because of the extra Fermi-gas degeneracy pressure. We leave these topics for future work. Figure 1. Values of the three fields at the Q-ball center and the total energy Ẽ for different values of the gauge coupling α. The arrows represent the evolution of the frequency ν. We choose k = 3.5, which corresponds to λ_ϕh ≈ 6. Four specific solutions P1, P2, P3, P4 for α = 1.0 are marked by the triangles, and the corresponding profiles are shown in figure 2. Figure 2. Profiles of the dark gauge field, complex scalar field, and Higgs field of the gauged Q-ball. Here we choose the marked points P1, P2, P3, P4 in figure 1, where k = 3.5 and α = 1.0. Figure 3. Left: total charge Q̃ for different values of α for U(1) gauged Q-balls. Right: the two typical radii for the Q-ball field Φ(ρ) and Higgs field H(ρ). Figure 4. The energy over charge, Ẽ/Q̃, for different values of α. Q̃_s represents the value of the charge which satisfies Ẽ/Q̃ = 1. Figure 5. Left: the profile of F(ρ) defined by Eq. (3.10) for the four marked points in figure 1, where α = 1.0. Right: F(ρ) at the transition point between the first and second branches for different values of the gauge coupling α. Figure 7. Viable range of ν of gauged Q-balls for α = 0.7. The red line represents the region where the gauged Q-ball is dominated by the gauge field, such that it is unstable under the stress stability criterion. The green line represents the region where the gauged Q-ball is unstable under the quantum stability criterion. The purple point labels the transition point. Gauged Q-balls in the blue region between ν_min and ν_max are stable. Figure 8. Charge and energy as functions of the gauged Q-ball radius. The blue lines are the numerical results in the piecewise model; the red lines are the semi-analytic results, Eqs. (4.12) and (4.15); the black lines are the numerical results in the FLSM model. Figure 9. Maximal charge Q_max = (2π/λ_h) Q̃_max (left panel) and energy E_max = (2πm_ϕ/λ_h) Ẽ_max (right panel) of gauged Q-balls for different α and k. The red lines are the analytic evaluations, Eqs. (4.22) and (4.24); the blue dotted lines are the semi-analytic results; and the black dashed lines represent the numerical results for the piecewise model.
Figure 10. Phase transition parameters as functions of λ_ϕh. Left: the critical, nucleation and percolation temperatures for different values of λ_ϕh. For the percolation temperature we choose v_w = 0.1. Right: the wash-out parameter v(T)/T at various temperatures as a function of λ_ϕh. Figure 12. Q-ball relic density as a function of the DM asymmetry η_ϕ/η_L in the SM plus singlet model, where λ_ϕh = 6.8. We choose v_w = 0.01. The star represents the value for the gauged Q-ball with maximal charge. The gray region represents the region where the DM is overproduced. The sound-wave contribution takes the standard form Ω_sw h²_100 ≃ 2.65 × 10⁻⁶ Υ_sw (H_p/β) (κ_v α_p/(1 + α_p))² (100/g_⋆)^{1/3} v_w, where κ_v reflects the fraction of vacuum energy that is transferred into the bulk motion of the fluid. The peak frequency of the sound-wave contribution is f_sw ≃ 1.9 × 10⁻⁵ Hz, up to the standard rescaling factors.
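A compact way to assemble such spectra numerically is sketched below, using the widely adopted fit formulas for the sound-wave and turbulence contributions (coefficients, peak frequencies and spectral shapes follow the standard literature parametrizations, e.g. Caprini et al.; they are stand-ins for the paper's Eqs. (6.3)-(6.10) and should be checked against them, and the duration suppression Υ_sw is modelled here by a crude cut-off):

```python
import numpy as np

def gw_spectrum(f, alpha, beta_over_H, Tp, vw, kappa_v,
                g_star=106.75, eps=0.1):
    """Sound-wave + turbulence GW spectrum today (standard fit formulas)."""
    kappa_turb = eps*kappa_v

    # Sound waves.
    f_sw = 1.9e-5*(1.0/vw)*beta_over_H*(Tp/100.0)*(g_star/100.0)**(1/6)  # Hz
    S_sw = (f/f_sw)**3*(7.0/(4.0 + 3.0*(f/f_sw)**2))**3.5
    Ubar2 = 0.75*kappa_v*alpha/(1.0 + alpha)          # bulk fluid velocity^2
    tau_sw_H = (8.0*np.pi)**(1/3)*vw/(beta_over_H*np.sqrt(Ubar2))
    Upsilon = min(1.0, tau_sw_H)                      # crude duration cut-off
    omega_sw = 2.65e-6*Upsilon/beta_over_H*(kappa_v*alpha/(1+alpha))**2 \
               * (100.0/g_star)**(1/3)*vw*S_sw

    # MHD turbulence.
    f_tu = 2.7e-5*(1.0/vw)*beta_over_H*(Tp/100.0)*(g_star/100.0)**(1/6)
    h_star = 1.65e-5*(Tp/100.0)*(g_star/100.0)**(1/6)  # redshifted Hubble, Hz
    S_tu = (f/f_tu)**3/((1.0 + f/f_tu)**(11/3)*(1.0 + 8.0*np.pi*f/h_star))
    omega_tu = 3.35e-4/beta_over_H*(kappa_turb*alpha/(1+alpha))**1.5 \
               * (100.0/g_star)**(1/3)*vw*S_tu

    return omega_sw + omega_tu

for f in np.logspace(-5, -1, 5):
    val = gw_spectrum(f, alpha=0.5, beta_over_H=100, Tp=70,
                      vw=0.6, kappa_v=0.3)
    print(f"f = {f:.1e} Hz   Omega_GW h^2 ~ {val:.2e}")
```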
13,199.2
2024-04-25T00:00:00.000
[ "Physics" ]
Experimental and Finite Element Analysis of the Tensile Behavior of Architectured Cu-Al Composite Wires The present study investigates, experimentally and numerically, the tensile behavior of copper-clad aluminum composite wires. Two fiber-matrix configurations, the conventional Al-core/Cu-case and a so-called architectured wire with a continuous copper network across the cross-section, were considered. Two different fiber arrangements, with 61 or 22 aluminum fibers, were employed for the architectured samples. Experimentally, tensile tests on the two types of composites show that the flow stress of architectured configurations is markedly higher than the prediction of the linear rule of mixtures. Transverse stress components and processing-induced residual stresses are then studied via numerical simulations to assess their potential effect on this enhanced strength. A set of elastic-domain and elastoplastic simulations were performed to account for the influence of Young's modulus and the volume fraction of each phase on the magnitude of the transverse stresses, and to determine how these stresses contribute to the axial stress-strain behavior. In addition, residual stress fields of different magnitudes, with literature-based distributions expected for cold-drawn wires, were defined. The findings suggest that the improved yield strength of architectured Cu-Al wires cannot be attributed to the weak transverse stresses developed during tensile testing, while there are compelling implications regarding the strengthening effect originating from the residual stress profile. Finally, the results are discussed and concluded with a focus on the role of architecture and residual stresses. Introduction Abundant copper demand for electrical applications from various sectors has prompted manufacturers to reduce material costs by replacing this rather expensive and high-density metal partly or entirely. The lower-density and more affordable aluminum-copper (Al-Cu) composite wire is an example of such efforts. The following paragraphs provide a summary of the different features of Al-Cu wires and several other similar composite systems (developed by various techniques) already investigated. The missing aspects and the property of interest to be researched in the current work are then presented at the end of this section. Among the already-studied features are investigations covering the mechanical behavior and finite element modelling of the manufacturing processes of severely cold-worked composite systems akin to the one under study in this work. Khosravifard and Ebrahimi [1] investigated the parameters affecting the interface strength of extruded Al/Cu clad bimetal rods, along with an FEM analysis of the extrusion process. Feng et al. [2] examined the compressive mechanical behavior of Al/Mg composite rods with different types of Al sleeve. Gu et al. [3] modelled the elastic behavior of architectured and nanostructured Cu-Nb composite wires produced by accumulative drawing and bundling (a severe plastic deformation technique) in a multiscale manner. Priel et al. [4] performed a computational study (validated by experiments) on the co-extrusion of an Mg/Al composite billet and suggested a set-up named "Floating Core" as being ideal. Knezevic et al. [5] compared three die designs with a material-based approach to the extrusion of bimetallic tubes, discussing the criteria that are to be met for proper solid-state bonding.
Moreover, a great deal of research has been done addressing the mechanical behavior of metallic and non-metallic fiber-reinforced composites. Ochiai [6] performed an extensive study on the effect of the interface on the deformation and fracture behavior of metallic-matrix fiber-reinforced composites. Kelly and Lilholt [7] researched the stress-strain curve of a fiber-reinforced composite of tungsten wires embedded in a pure copper matrix. Kelly and Tyson [8] studied the tensile properties of the metallic fiber-reinforced composite systems copper/tungsten and copper/molybdenum. Ebert et al. [9] analyzed the stress-strain behavior of concentric composite cylinders. Sapanathan et al. [10] spiral-extruded an aluminum/copper composite to study its bond strength and interfacial characteristics. Hao et al. [11] developed a novel multifunctional NiTi/Ag hierarchical composite, inspired by the hierarchical design of the tendon, by repeated assembling and wire drawing. Tyson and Davies [12] investigated the shear stresses associated with stress transfer during fiber reinforcement with the help of photoelasticity. Superconducting materials embedded in a copper matrix as multifilaments [13] and aluminum-steel fiber composites [14] are other systems with similarities to the Al-Cu composites under investigation in the current study. The conventional copper-clad aluminum wire (CCA, or single-Al-fiber Al-Cu composite wire) is currently widely used in the electrical industry [15]. The architectured copper-clad aluminum wire (ACCA, or multi-Al-fiber Al-Cu composite wire), however, has proved to be superior in a variety of areas, offering improved thermal diffusivity [16] and proper electrical conductivity at both low and high frequencies. Moreover, in a previous article, the authors reported that ACCA samples exhibit rather complex mechanical behavior in both as-drawn and heat-treated states (see [17] for more details). The novelty of this work is the investigation of the origin of the understudied mechanical behavior of the novel architectured Cu-Al composite wires and its promising implications in terms of in-service reliability. The objective of this article is then to better understand the mechanical behavior of Cu-Al wires with different fiber-matrix configurations. Along with the conventional CCA wire, two architectured configurations (ACCA) with different numbers of Al fibers were investigated. A first assessment of the mechanical properties based on the experimental tensile curves is proposed, revealing improved flow stress for the architectured configurations. Numerical simulations of the CCA and ACCA configurations were then performed to find the impact of fiber-matrix configurations on the axial stress-strain behavior of these materials. In particular, the influence of (I) transverse interactions and (II) processing-induced residual stresses on the mechanical behavior was investigated. The use of finite element analysis is necessary when dealing with mechanical behaviors that are not easy to understand and interpret experimentally. Crack and fracture behavior are instances of such studies [18,19]. A great complexity in the current work is the experimental measurement of the radial and circumferential stresses developing at the interface of the fine Al fibers (tens of microns wide) and the Cu matrix during tensile testing of architectured and even conventional Cu-Al wires.
The results show that the processing-induced residual stresses most probably explain the exceptional mechanical properties of the architectured wires. Material and Experimental Procedure Copper-clad aluminum wires are produced by cold drawing. For all wires, fully annealed high-purity Oxygen Free High Conductivity (OFHC) copper and 99.5% pure Al were employed. For the fabrication of CCA, a copper tube with an outer diameter of 12 mm and an inner diameter of 8 mm and an approximately 8 mm aluminum rod were simultaneously cold-drawn down to 3 mm. For the ACCA drawing, CCA wires were restacked in a copper tube and further cold-drawn. For these specific architectured wires, two configurations were manufactured, one with 61 restacked 1 mm CCA wires (labelled ACCA61) and a second one with 22 restacked 1.7 mm wires (labelled ACCA22). All wires were cold-drawn down to 3 mm without inter-operational heat treatments. For the CCA wires, the aluminum volume fraction is about 50%, whereas values of 25% and 32% are associated with ACCA61 and ACCA22, respectively. Details about the manufacturing process can be found in the two previous articles [15,17]. The corresponding cross-sections of the three microstructures, imaged via optical microscopy, are illustrated in Figure 1. Simulation of the CCA and ACCA behavior under tensile loading requires the stress-strain data of each component (Al and Cu). For that reason, as-drawn samples of both pure Cu and Al with the aforementioned compositions (three samples each) were prepared for tensile testing to provide the FEA software with the required input. To prepare these tensile test samples, an aluminum rod and a copper rod of the same initial diameter of 8 mm, heat-treated for three hours at 300 °C and 500 °C respectively, were cold-drawn down to 2 mm each. This was done so that the pure Al and Cu samples stored the same amount of plastic deformation as a 3 mm CCA composite wire (the geometry considered for the simulations). The final diameter of the rods was obtained from the standard relation for the drawing strain, $\varepsilon = \ln(A_0/A) = 2\ln(D_0/D)$, where $D_0$ and $D$ are the initial and final diameters, respectively; both the 12 mm to 3 mm and the 8 mm to 2 mm reductions correspond to $\varepsilon = 2\ln 4 \approx 2.77$. An MTS Criterion Model 43 10 kN universal testing machine (MTS, Eden Prairie, MN, USA) was used to perform displacement-controlled tensile tests at room temperature, and the strain was measured via a conventional 25 mm gauge-length extensometer. Samples were mounted on specialized wire tensile-testing grips to minimize stress concentration and were strained at an initial strain rate of 0.004 s−1 to avoid possible viscous effects. Engineering stress-strain curves of the experimentally tensile-tested pure Al and Cu are plotted in Figure 2. These curves were then converted into true stress-strain curves and used as input for the elastoplastic simulation of CCA and ACCA wires. Numerical Procedure A comprehensive explanation of the simulation approach is presented in the first subsection. The second subsection is devoted to the simulation details. Parameters and Methodology The application of the finite element method made it possible to effectively study the various parameters involved from a behavioral perspective. Assumptions such as a perfect fiber-matrix interface and isotropic behavior were made for the sake of simplicity. As mentioned earlier, the two key factors I-transverse stresses and II-residual stresses (RS) were investigated in a set of numerical simulations.
To this end, the evolution of the transverse (radial and circumferential) stress components under a tensile load was modelled in both the elastic and elastoplastic domains for CCA samples. The tensile elastic-plastic behavior of an ACCA model, created from an actual microstructure, was also studied to discover the potentially distinct development of lateral stresses in this novel configuration. For the sake of conciseness, only the ACCA61 configuration was considered for simulation. The effect of predefined fields of residual stress in both CCA and ACCA wires was also studied independently. The idea was to determine how significant the separate contributions of lateral and residual stresses could be to the axial stress-strain behavior of these bimetallic composites. The CCA simulations hold clues to understanding the more complex tensile behavior of the architectured samples (ACCAs). Transverse Stresses CCA Elastic Simulations There are complexities associated with the elastoplastic behavior of these materials, originating from the formation of yield fronts and the gradual elastic-to-plastic transition [9]. In a first attempt to avoid those intricacies, a number of elastic-domain CCA wire simulations were run independently, with the major parameters involved in the evolution of transverse stresses taken into consideration. Those parameters are the Young's modulus and the Al/Cu volume fraction. Therefore, two 10-mm-long CCA samples of the same outer diameter of 3 mm (arbitrary dimensions), containing an aluminum core and a copper case, were modelled. One of the two samples contains 75% Al (2.6 mm Al core) and the other 25% Al (1.5 mm Al core); the volume fraction of the experimental CCA wire lies in between these two values. This was to account for the role of the volume fraction when one of the phases prevails. It is known from the literature that the elastic behavior of pure copper is largely anisotropic: its Young's modulus depends on texture development and can range between 60 and 200 GPa, as plotted and discussed by Pal-Val et al. [20]. Three values of the Cu Young's modulus spanning this range were therefore chosen to account for the Young's modulus effect. Unlike Cu, the elastic behavior of Al is almost isotropic, and the Young's modulus of pure aluminum and of many aluminum alloys varies only slightly, by 10 percent at most, following cold working and heat treatment [21]. Hence, an average value of 70 GPa was considered for Al. The elastic-domain impact of the difference in Poisson's ratio between the components of bimetallic fiber composites is generally negligible [9]. In summary, three Young's modulus values for Cu and two volume fractions were selected, for a total of six simulations. The CCA elastic simulations are summarized in Table 1. CCA and ACCA Elastoplastic Simulations It is known that radial and circumferential stresses may become more important in terms of their contribution to the axial stress-strain behavior when one of the two phases in a bimetallic cylindrical composite yields first and the other remains elastic within a certain strain range. This is because the already-yielded component can be assumed to have a Poisson's ratio of 0.5 (due to the incompressible nature of plastic flow) while the other still possesses its elastic-domain Poisson's ratio, so the difference between the Poisson's ratios of the two materials becomes greater over a certain range of strain, before the elastic component begins to behave plastically as well [9].
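This Poisson's-ratio argument can be made concrete with a small numerical illustration. The sketch below is only that, an illustration: the elastic Poisson's ratio used for the still-elastic phase is an assumed textbook-style value, not a measured value from this study, and none of the results reported later depend on it.

```python
# Minimal numerical illustration of the Poisson's-ratio mismatch argument.
# The elastic Poisson's ratio below is an assumed, textbook-style value.
nu_plastic = 0.50      # incompressible plastic flow in the phase that has yielded
nu_elastic = 0.34      # assumed elastic Poisson's ratio of the phase still elastic

d_axial_strain = 0.001  # an extra 0.1 % of axial strain applied during stage II

lateral_yielded = -nu_plastic * d_axial_strain
lateral_elastic = -nu_elastic * d_axial_strain
mismatch = lateral_yielded - lateral_elastic

print(f"lateral strain mismatch per 0.1% axial strain: {mismatch:.1e}")
# ~ -1.6e-4: it is this mismatch at the Al/Cu interface that the radial and
# circumferential stresses studied in the simulations must accommodate.
```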
Accordingly, the numerical tensile testing of a set of 3 mm-diameter CCA wires (the actual dimension) of four different Al/Cu volume fractions was modelled with elastic-plastic behavior (without accounting for the Al/Cu interface or residual stresses). Four volume fractions were chosen to obtain a statistically better approximation of the order of magnitude of the transverse stresses. The goal was to discover the degree to which lateral stresses, alone, can possibly influence the tensile behavior. Table 2 lists all the CCA elastoplastic simulations performed. Building on the insights provided by the elastoplastic simulation of the CCA wires, a 3 mm-diameter ACCA61 wire was modelled from the actual microstructure of its transverse cross-section (see Figure 3a). The wire contains 61 Al fibers (about 25 percent of the total volume) embedded in a Cu matrix. The corresponding finite element models are presented in Figure 3a,b. Residual Stresses Mechanical residual stresses built up during the cold drawing of metals are known to arise from the non-uniform nature of plastic deformation in this process. The residual stress simulations in this paper rest on a qualitative feature reported in the literature: the formation of a rather wide region of compressive residual stresses in the central part of a cold-drawn bar and a narrower region of tensile residual stress in its outer part, away from the center. Axial tensile residual stresses forming near the wire surface have detrimental effects on the tensile strength of drawn wires; modifying the residual stress profile through the wire cross-section by reducing those stresses and promoting the formation of compressive residual stresses favors the yield strength [22]. Atienza and Elices [23] suggest such an RS distribution for cold-drawn steel wires, investigated both numerically and experimentally. Ripoll et al. [24] report a similar RS distribution pattern in their investigation of tungsten wires. Bullough and Hartley [25] introduce an analytical model for co-deformed Cu-Al rods confirming the above-mentioned RS distribution. Consistent with the literature on the magnitude and distribution pattern of drawing-induced residual stresses, a behavioral assessment was conducted. One objective was to analyze how the distinct fiber-matrix configuration of an ACCA sample can affect the axial stress-strain behavior of Al-Cu composite wires of the same Al/Cu volume fraction but different architecture. For the sake of simplicity, this comparison was made irrespective of the fact that ACCA is more strained than CCA, so that more compressive residual stresses are expected to form in the architectured wires. Accordingly, the aforementioned 3 mm ACCA sample containing about 25% Al and its corresponding 3 mm CCA sample with 25% Al were considered. Cylinders of compressive residual stress with the same diameter were defined in the center of both wires, with hollow cylinders of the same width under tensile residual stress, as illustrated in Figure 4a,b. The residual stress modelling and analyses were based on the values reported for copper-clad aluminum wires fabricated by hydrostatic extrusion [25]. The analytical model proposed in [25] is applicable to both hydrostatic extrusion and wire drawing and provides a good first approximation for this behavioral evaluation. The simulation details are presented in the following section.
It must be noted that all the above simulations were merely intended to test the assumptions made earlier regarding the role of transverse and residual stresses; no verification of the experimental results was planned. Indeed, the exact development of residual stresses in the architectured composite wires is not straightforward. The simulations were aimed at identifying the potential source of the strengthening effect observed in the architectured Cu-Al wires (reported in [17]). Numerical Modelling Details The FEA software Abaqus/CAE (ABAQUS Inc., Johnston, RI, USA) was utilized to perform all simulations. All CCA samples were meshed using a mixture of hexahedral elements of type "C3D8R" and wedge elements of type "C3D6" (both of linear geometric order, from the standard element library) to generate a regular, symmetric mesh. The ACCA sample, however, was meshed using only hexahedral elements. Independent Al and Cu parts were then assembled by merging the interfacial elements, which satisfies the perfect-interface assumption and allows the development of transverse stresses. All models were assigned a boundary condition of type "ZSYMM" (symmetry about a constant z-plane) on the fixed end. An arbitrary displacement of 0.005 mm (0.05% strain, well within the elastic domain) was applied to all the CCA elastic models listed in Table 1. Engineering stress-strain data from the tensile testing of the as-drawn pure Al and pure Cu samples were calibrated and converted into true stress-strain curves in Abaqus/CAE as input for the elastoplastic simulation of the 3 mm-diameter CCA samples mentioned in the previous section. The elastic-plastic behavior of the aforementioned ACCA sample (≈25% Al) with a gauge length of 25 mm was also studied by straining it up to one percent. The use of the CCA input for simulating the tensile testing of ACCA is acceptable to a fairly good approximation due to stress saturation in both Al and Cu at high strains (see [17]); Chinh et al. [26] also report stress saturation in highly strained Al. In order to model the elastic-plastic behavior of CCA and ACCA composite wires in the presence of residual stresses, the 25%Al-ACCA wire and its corresponding CCA sample (containing 25% Al) were chosen. Next, for comparison purposes, a cylindrical section of the same diameter of 1.5 mm was defined in the center of both the ACCA and CCA samples. As explained in the previous section, the residual stress values for the simulations were taken from Ref. [25]. Accordingly, predefined stress fields of −90 MPa (compressive) in the central cylinder and +10 MPa (tensile) in the remaining hollow cylinder were defined. It should be noted that these values were applied as single uniform values through the cross-section of the wire rather than as the actual curved residual stress distributions (see the references presented in Section 3.1.2 Residual Stresses). This assumption, based on the analytical model, was made for the sake of simplicity and comparison and does not satisfy the residual stresses' self-equilibrium requirement; it is, however, consistent with the literature in terms of the sign of the expected residual stresses. Furthermore, to emphasize the favorable impact of compressive residual stresses in the central section of ACCA samples and to reveal its implication for the research problem, a separate simulation with −120 MPa (rather than −90 MPa) and +10 MPa was performed.
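The remark about self-equilibrium can be quantified with a quick axial force balance over the wire cross-section. The sketch below uses only the geometry and stress values quoted above; it is a check of the simplification, not part of the finite element model.

```python
import math

# Axial force balance for the predefined residual stress fields: a uniform
# -90 MPa over the 1.5 mm central cylinder and +10 MPa over the remaining
# shell of the 3 mm wire (values quoted in the text).
d_wire, d_core = 3.0, 1.5              # mm
sigma_core, sigma_shell = -90.0, 10.0  # MPa

a_core = math.pi * (d_core / 2.0) ** 2
a_shell = math.pi * ((d_wire / 2.0) ** 2 - (d_core / 2.0) ** 2)

net_axial_force = sigma_core * a_core + sigma_shell * a_shell  # MPa*mm^2 = N
sigma_shell_balanced = -sigma_core * a_core / a_shell          # shell value for equilibrium

print(f"net axial force of the imposed field: {net_axial_force:.0f} N")        # about -106 N
print(f"shell stress needed for equilibrium : {sigma_shell_balanced:.0f} MPa")  # +30 MPa
```

With the shell area three times the core area, equilibrium would require +30 MPa rather than +10 MPa in the shell, which is the imbalance the text refers to.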
Compared to the CCA case, the two ACCA wires show an increased flow stress that is close to (ACCA22) or even larger than (ACCA61) that of the corresponding pure copper wire. In that case, the rule of mixtures is clearly not fulfilled, revealing a complex mechanical behavior that can be attributed to the aforementioned transverse interactions or to residual stresses. CCA Elastic Simulations In order to investigate the role played by elastic-domain transverse interactions in the mechanical behavior of CCA and ACCA wires, elastic simulations were performed as a first attempt. For the CCA, the following graphs show the impact of the two parameters, Young's modulus and Al/Cu volume fraction, on the development of the radial and circumferential stresses induced by tensile testing, plotted versus the normalized distance along the diameter of each wire in the elastic domain. The effect of the two parameters on the distribution and magnitude of the transverse stresses is visualized in Figure 6: Figure 6a,b presents the radial and circumferential stress profiles of the 75% Al-CCA sample and Figure 6c,d those of the 25% Al-CCA sample. For a total elastic strain of 0.05%, average axial stress values of ≈63 and ≈45 MPa developed along the wire axis in the 25% Al and 75% Al samples, respectively. The ratio given in the graph legends is the ratio of the Young's modulus of Cu to that of Al. The Al-core and Cu-case regions are delineated on the curves. As observed in Figure 6, the magnitude of the transverse stresses evolved in both the 25% and 75% Al samples is very small (on the order of tenths of a megapascal). The magnitude of the corresponding axial stresses is, however, significantly higher, as mentioned above. The magnitude of the radial and circumferential stresses in both CCA samples slightly increases as the Young's modulus ratio becomes greater, reaching its maximum for the ratio E_Cu/E_Al = 200/70. Additionally, the higher the volume fraction of copper, the greater the radial stress component in the Al core and the Cu case. The circumferential stress component, though, increases in the Al core and decreases in the Cu case at higher volume fractions of Cu. Figure 7 shows the axial stress-strain curves of the CCA samples of the four aforementioned volume fractions simulated with elastic-plastic behavior, along with the experimental pure Cu and Al curves. The tensile stress increases with a rise in the Cu volume fraction, as expected. Figure 8a,b summarizes how the transverse stresses evolve during the numerical tensile testing of CCA wires with four different volume fractions. The 3D graphs of Figure 8 contain two horizontal axes and one vertical axis. One of the two horizontal axes represents the axial strain, and the other shows the distribution of the radial/circumferential stress (at each strain level) versus the normalized distance along the diameter of each wire, between 0 and 1, with the corresponding volume fraction of Al indicated. The Al-core and Cu-case regions are depicted on the distribution profiles. The three stages indicated in three different colors correspond to the strain ranges of the three common regions of the axial stress-strain curve of concentric composite cylinders (CCA wires in this study), which arise from the varying Poisson's ratio of each phase during tensile testing [9].
The main purpose of the 3D diagrams is to demonstrate the order of magnitude of the transverse stresses that develop during the numerical tensile testing of CCA wires, so a further explanation of those three regions is omitted. The maximum magnitude of the radial and circumferential stresses remains within a few megapascals in all cases. Figure 9 shows the simulated axial stress-strain curve of the ACCA wire containing 25% Al. The radial and circumferential stress fields at a total strain of ≈0.2% are illustrated in Figure 10a,b, respectively. This is the strain level at which the maximum magnitude of the transverse stresses was reached during the numerical tensile testing of the architectured sample. As for the CCA wires, this strain level corresponds to the onset of stage II, at which one of the components begins yielding first in the ACCA sample. The radial and circumferential stress distribution patterns across the ACCA wire cross-section are, however, distinctly different from those of the CCA wires throughout the tensile test. The most prominent feature is the channels of negative and positive transverse stresses evolving in the inter-fiber space of the copper matrix, pairs of which are indicated in Figure 10a,b (white circles). Figure 10c,d shows the distribution of the radial and circumferential stresses at the end of the numerical tensile test (at ≈1% strain); the magnitude of the transverse stresses nears zero and their distribution becomes homogeneous at this stage. Note that a coarser mesh than that of Figure 3b was used to reduce the computational cost, since the numerical solution was well converged with an even coarser mesh. Residual Stresses Stress-strain curves of the numerically tensile-tested ≈25%Al-ACCA and 25%Al-CCA wires, with and without predefined residual stress fields, are plotted in Figure 11. As shown in this graph, the stress-strain curves of the residual stress-free ACCA and CCA lie on top of one another. Figure 11 allows comparisons to be made between the CCA and ACCA samples; it reveals the role of architecture and points to the consequential impact of the residual stress profile, particularly of the compressive residual stresses built up in the inner section of cold-drawn samples. According to Figure 11, −90 MPa of compressive and 10 MPa of tensile residual stress with the earlier-mentioned configuration put the yield strength of the CCA and ACCA samples about 10 and 15 MPa above the stress-free curves, respectively. A higher-magnitude compressive residual stress of −120 MPa (i.e., −120 MPa/+10 MPa) increases the yield strength by about 20 MPa. Please note that these positive deviations are not meant to imply that the presence of residual stresses improves the yield strength; near-surface tensile residual stresses can actually have deleterious effects on the tensile strength, as noted earlier. The deviations arise merely from the way the residual stress fields are defined based on the analytical model in [25]. The residual stress-free curves are simply presented as a baseline for comparison; the red curves with residual stress fields are the ones to be compared. Figure 11. Numerical stress-strain curves of ≈25%Al-ACCA and 25%Al-CCA wires (with and without residual stress fields). Discussion and Outlook The experimental results revealed a slight increase in the tensile flow stress of the two architectured Cu-Al wires compared to the rule of mixtures' prediction. The two key parameters I-transverse stresses and II-processing-induced residual stresses were investigated via finite element analysis as the potential sources of this behavior.
These two features and their implications for the mechanical behavior are discussed in the following subsections. Elastic-Domain Transverse Stresses in CCA Samples The features of interest in the elastic-domain simulations of CCA wires were the order of magnitude of the radial and circumferential stresses and the way this magnitude changes under the influence of the parameters involved. Figure 6a-d illustrates how the two parameters, Young's modulus and volume fraction of each phase, affect the evolution of the transverse stresses, as explained in the results section. It is evident from those figures that the maximum magnitude of both the radial and circumferential stress components is on the order of tenths of a megapascal in all cases. In contrast, the axial stress component developed in the CCA samples for the corresponding elastic strain of 0.05% (from the linear rule of mixtures) is about 63 MPa for the 25%Al-CCA and 45 MPa for the 75%Al-CCA sample. This implies a very weak contribution of the transverse stresses evolved in the elastic domain of axially strained CCA wires, consistent with the analytical model developed by Ebert et al. [9] for concentric cylindrical composites. Transverse Stresses in CCA and ACCA Samples with Elastic-Plastic Behavior It was pointed out earlier that there is a strain range between the onset of plasticity in the first and second components of a bimetallic cylindrical composite during which a greater Poisson's ratio difference, and consequently higher-magnitude transverse stresses, may be expected. However, two other major factors also determine how significantly the developed radial and circumferential stresses contribute to the axial stress-strain behavior: 1-the volume fraction of each phase and 2-the ratio of the elastic moduli [9]. The Young's moduli of the experimentally tensile-tested as-drawn pure copper and pure aluminum are 129 and 66 GPa, respectively (Figure 2). Figure 8a,b, covering four different volume fractions of numerically tensile-tested CCA wires, provides a good approximation of the order of magnitude of the radial and circumferential stresses. It can easily be seen from these figures that the maximum magnitude of the transverse stresses does not exceed a few megapascals for the different Al/Cu volume fractions. This is because of the relatively small ratio of the Young's modulus of Cu to that of Al (calculated from the experimental stress-strain curves) and again implies a negligible contribution of the transverse stresses to the axial stress-strain behavior of the CCA wires, whose axial stress-strain curves are plotted in Figure 7. Furthermore, it can be deduced from Figure 8b that the greater the volume fraction of one component, the smaller the magnitude of the circumferential stress in that component. The evolution of the maximum radial and circumferential stress values during the numerical tensile testing of a ≈25%Al-ACCA wire modelled from its actual transverse cross-section (see Figure 3a,b) is shown in Figure 10a,b. The maximum magnitude of the transverse stresses developed in the ACCA sample (about ±2 MPa) is of almost the same order of magnitude as the maximum transverse stresses in its CCA counterpart (25%Al-75%Cu). This indicates that the architecture does not change the magnitude of the transverse stress components, which is essentially a function of the volume fraction. The distribution of the radial and circumferential stresses, however, interestingly changes due to the novel fiber-matrix configuration of ACCA compared to CCA.
It can be observed in Figure 8a that the sign of the radial stress component in both the Al core and the copper case of the CCA wires remains positive throughout the tensile test. Figure 8b also indicates that the sign of the circumferential component in the CCA wires is positive in the Al core and negative in the Cu case all along the test. Nevertheless, there are channels of both negative and positive radial and circumferential stresses in the inter-fiber space of the Cu matrix of the ACCA wires throughout the tensile test, as shown in Figure 10a,b. This feature may have important implications in terms of interfacial damage initiation and propagation; the feature of interest in this study, however, is the magnitude of the transverse stresses developed during the tensile testing of CCA and ACCA wires. To conclude Sections 5.1 and 5.2, it can be inferred that the magnitude of the radial and circumferential stresses evolved in CCA and ACCA wires is so small that transverse stresses cannot be considered the underlying reason for the improved yield strength of ACCA wires. Mechanical Bonding at the Al-Cu Interface The mechanical bonding at the Al-Cu interface of both cold-drawn CCA and ACCA wires is one of the key aspects to be studied when it comes to the axial stress-strain behavior of these bimetallic composites. The focus of this numerical study is, however, to discover the origin of the enhanced strength of ACCA. There are studies attributing the positive deviation from the rule of mixtures (RoM) and the improved strength of similar bimetallic composite systems, such as Cu-Nb, to the interface. Those observed strengthening effects have been justified by models such as the Hall-Petch barrier and geometrically necessary dislocations (GND). However, both models are valid where a size effect is involved and the interface (fiber) spacing is on the order of nanometers [27]. In contrast, there are nearly 200 grains, each as large as 500 nm, situated in the space between every two Al fibers in the 25%Al-ACCA sample investigated in this study (an inter-fiber copper ligament on the order of 100 µm wide), and therefore no size effect is expected. A perfect interface was one of the assumptions made in this work, given that any sort of imperfection can potentially create loci of stress concentration and be detrimental to the yield strength; all the interface-related discussion is relevant as long as the fiber-matrix bonding is in place. Possible sources of strengthening are the focal points of the current investigation, and the Al-Cu interface was therefore not considered further, since it is not expected to bring about any strengthening effect, as argued above. A separate study of the bond strength and interfacial behavior of Al-Cu composite wires in both as-drawn and heat-treated conditions is planned. Residual Stresses A comparison-based approach was adopted to assess the impact of residual stresses on the axial stress-strain behavior of CCA and ACCA wires. One should note that residual stresses already contribute to the tensile stress-strain curves of the cold-drawn pure Cu and Al rod samples used as simulation input; co-deformation and architecture are, however, expected to form additional compressive residual stresses, consistent with the following analysis. As illustrated in Figure 11, the simulation curves of the residual stress-free 25%Al-ACCA and 25%Al-CCA samples (dark blue curves) almost entirely overlap because of their similar Al volume fraction, as discussed in Section 5.2.
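Before turning to the individual curves, a crude zero-dimensional estimate indicates the size of shift the predefined residual stress fields alone can produce. The sketch below simply takes the area-weighted mean of the imposed axial residual stresses over the cross-section; it ignores the stress redistribution actually computed by the FE model and is meant only as an order-of-magnitude check.

```python
# Area-weighted mean of the imposed axial residual stresses.
# Core area fraction for a 1.5 mm core in a 3 mm wire: (1.5/3)^2 = 0.25.
f_core = (1.5 / 3.0) ** 2

for sigma_core, sigma_shell in [(-90.0, 10.0), (-120.0, 10.0)]:
    mean_rs = f_core * sigma_core + (1.0 - f_core) * sigma_shell
    print(f"core {sigma_core:6.0f} MPa -> mean axial residual stress {mean_rs:6.1f} MPa")
# -15.0 and -22.5 MPa: the same order of magnitude as the ~10-20 MPa upward
# shifts of the simulated stress-strain curves discussed below (Figure 11).
```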
In a first attempt to discover the net effect of architecture in the presence of predefined residual stress fields, the CCA and ACCA samples were compared irrespective of the different amounts of plastic deformation they actually experience. A comparison between the numerical stress-strain curves of the 25%Al-ACCA and 25%Al-CCA wires in Figure 11 shows that the ACCA curve (red with horizontal diamond markers) lies well above the CCA curve (dashed red curve). This clearly demonstrates that, in the presence of drawing-induced residual stresses, the architecture can improve the yield strength under otherwise identical conditions (identical residual stress field configurations, see Figure 4). This can be ascribed to the fact that the fiber-matrix configuration of ACCA, compared to that of a CCA of the same volume fraction, brings more of the stronger phase (the copper matrix) into the central part of the composite wire, where the region of processing-induced compressive residual stresses lies. This mechanism is consistent with the smaller deviation of the ACCA22 wire from the rule of mixtures' prediction compared with ACCA61, since the volume fraction of copper in the compressive stress region is lower in ACCA22. Moreover, a second comparison aimed at discovering the origin of the improved strength of ACCA can be made between the two similar ACCA61 simulations with compressive residual stress fields of different magnitude (solid red curves with markers, see Figure 11). The stress-strain curve of the ACCA sample with the greater compressive residual stress field clearly deviates upwards, showing a greater yield strength. As mentioned in the Residual Stresses subsection of the Numerical Procedure section, drawing-induced residual stresses arise, according to the literature, from the non-uniform plastic deformation that evolves during the process. Bringing some portion of the copper to the center of the wire in the architectured samples can be expected to make the deformation more homogeneous. This can reduce the undesirable tensile residual stresses near the surface of the wire, which in turn leads to the prevalence of the compressive residual stresses built up in the central region of ACCA wires. Hence, the stress-strain curve of an ACCA sample can exhibit a significantly higher yield strength, in the same fashion as the ACCA sample with the larger compressive residual stress field behaves in Figure 11. This strong implication calls for further simulations and experimental work to model the manufacturing process and the drawing-induced residual stresses, along with experimental measurements of these stresses. A sound comprehension of the tensile behavior of Al-Cu composite wires lays the groundwork for a deeper understanding of the mechanical properties of both conventional and novel configurations under different heat-treatment conditions, which in turn leads to the optimal production of these wires. Conclusions The tensile behavior of as-drawn conventional copper-clad aluminum and architectured Al-Cu composite wires reveals an improvement in the strength of the architectured fiber-matrix configuration. The influence of the two key parameters 1-transverse and 2-residual stresses as the potential sources of the above behavior was examined using finite element analysis.
The tensile response of axially strained conventional (CCA) and architectured (ACCA) copper-clad aluminum wires was then simulated under the influence of those two parameters. The findings suggest the following conclusions: • The effect of the various possible Al-Cu Young's modulus ratios and volume fractions on the evolution and magnitude of the transverse stresses was found to be trivial (a few tenths of a megapascal) in Al-Cu composite wires. • The contribution of the transverse stresses to the axial stress-strain behavior of both CCA and ACCA wires is insignificant (3 MPa on average at most). • The distribution of the transverse stresses in architectured Al-Cu wires is interestingly different from that of conventional CCA wires, showing channels of both negative and positive radial and circumferential stress components throughout the tensile test. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study. Conflicts of Interest: The authors declare no conflict of interest.
8,309
2021-10-22T00:00:00.000
[ "Materials Science" ]
Computer Simulation of the Multiphase Flow in a Peirce-Smith Copper Converter The multiphase flow in a Peirce-Smith copper converter is numerically explored in this work. Molten matte, molten slag and air are the phases considered. The transient partial differential equations that constitute the mathematical model are discretized using a two-dimensional computational mesh. The Computational Fluid Dynamics technique is employed to numerically solve the discretized equations. The aim of the numerical analysis is to study the influence of the nozzle height on the phase distributions inside the converter. Three values of the nozzle height are considered. Introduction In industry, the majority of blister copper is produced today by means of the Peirce-Smith converter (PSC), in which air is injected to oxidize sulfur and chemically reduce copper. Besides, silica flux is added to the converter in order to form a slag which captures the matte impurities [1]. The air is injected into the molten copper matte through submerged nozzles [2]. In the PSC an intense momentum transfer is required to achieve high heat transfer and chemical reaction rates. In some papers, physical experiments on water models are reported to analyze the bubbling to jetting flow regimes in copper converters [2] [3] [4] [5]. In recent years, computer simulations have been carried out in order to understand the fluid flow in these devices; more recently, the Computational Fluid Dynamics (CFD) technique has been employed to study the flow dynamics in copper converters [6] [7] [8] [9]. Here, the multiphase flow in a PSC is analyzed using the CFD technique. Three phases are considered: molten copper matte, molten slag, and air. Three nozzle heights are considered in the two-dimensional transient computer simulations, namely 0.1, 0.25 and 0.5 m. Mathematical Model The conservation of momentum and mass of the considered phases (molten matte, molten slag, air) is modeled using the Navier-Stokes and continuity equations [10]. To simulate the turbulence, the K-ε model is selected [11]. To represent the multiphase flow, the Volume of Fluid (VOF) model [12] is employed, whose derivation rests on the assumption that two or more phases are not interpenetrating. In this model it is assumed that in each control volume the volume fractions of all phases sum to unity, and the interface between the phases is obtained by solving the continuity equation for each phase. For the pressure-velocity coupling, the Pressure Implicit with Splitting of Operators (PISO) algorithm is selected [13]. Numerical Solution Computational Fluid Dynamics (CFD) software is employed to numerically solve the mathematical model [13] [14]. The transient partial differential equations that constitute the mathematical model are discretized using a two-dimensional computational mesh composed of 12,000 triangular cells [15]. The phases present and the physical dimensions of a slice of the copper converter are shown in Table 1 and Figure 1. The operating conditions prevailing during the computer simulations are shown in Table 2. The air injection velocity is kept at 10 m·s−1 [16]. The properties of the considered phases are shown in Table 3. In order to maintain numerical stability during the integration of the mathematical model equations, a time step value of 0.001 s is employed in the computer runs. Results of Numerical Simulations During the numerical simulations three values of the nozzle height were considered: 0.1, 0.25 and 0.5 m. As is shown in Table 2, the value of the matte height and the other operating conditions were kept the same in all runs. From the computed matte distributions, it can be appreciated that as the nozzle height is increased from 0.1 to 0.5 m, the matte located at the converter bottom becomes less and less agitated.
Figures 5-7 show the distribution of the slag inside the converter for nozzle heights of 0.1, 0.25 and 0.5 m, respectively. The majority of the slag remains floating above the matte; however, it is observed that small amounts of slag are mixed with the matte at the bottom and at the right section of the converter. It can be observed that as the nozzle height is increased, the mixing of the slag with the matte decreases. Remarkably, the thickness of the slag layer that floats on the matte at the right side of the converter increases as the nozzle height is increased. Regarding the bubbling to jetting transition during air injection, it is reported that this phenomenon is properly characterized through the dimensionless Kutateladze number (Ku) [16] [17], which considers the most important forces that determine this transition, namely the air inertial forces, the bubble buoyancy forces and the surface tension forces: Ku = v ρ_a^{1/2} / [σ_m g (ρ_m − ρ_a)]^{1/4}, where v is the air injection velocity, ρ_a is the air density, ρ_m is the molten matte density, σ_m is the matte surface tension, and g is the acceleration of gravity. For the PSC, it is reported that the transition from bubbling to jetting occurs for Ku ≥ 3.4832, which corresponds to air injection velocities greater than 50 m·s−1 [16]. The results of this work are consistent with those presented in [7] and [15]. Conclusions A numerical analysis of the multiphase flow in a Peirce-Smith copper converter was carried out using the Computational Fluid Dynamics technique. Three phases were considered, namely molten matte, molten slag, and air. As the nozzle height is increased, the matte located at the converter bottom becomes less agitated and the mixing of slag and matte decreases. Figure 1. The phases and dimensions of the 2D copper converter. Table 1. Physical dimensions of the Peirce-Smith copper converter.
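As a quick numerical illustration of the Kutateladze criterion above, the sketch below evaluates Ku for the simulated injection velocity and for the reported transition velocity. The fluid properties used here are assumed, order-of-magnitude values rather than the values of Table 3, so only the linear trend with velocity should be read from the output.

```python
# Illustrative evaluation of the Kutateladze number defined above.
# Property values below are assumptions for illustration only.
g = 9.81            # m/s^2
rho_air = 1.2       # kg/m^3, assumed air density
rho_matte = 4600.0  # kg/m^3, assumed molten matte density
sigma_matte = 0.9   # N/m, assumed matte surface tension

def kutateladze(v_injection: float) -> float:
    return (v_injection * rho_air ** 0.5
            / (sigma_matte * g * (rho_matte - rho_air)) ** 0.25)

for v in (10.0, 25.0, 50.0):
    print(f"v = {v:5.1f} m/s  ->  Ku = {kutateladze(v):.2f}")
# Ku grows linearly with v; with these assumed properties, the 10 m/s injection
# velocity used in the simulations stays well below the reported jetting
# threshold of Ku = 3.4832.
```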
1,185.8
2018-07-23T00:00:00.000
[ "Engineering", "Physics" ]
Abstract REGULAR MAPPINGS BETWEEN DIMENSIONS The notions of Lipschitz and bilipschitz mappings provide classes of mappings connected to the geometry of metric spaces in certain ways. A notion between these two is given by "regular mappings" (reviewed in Section 1), in which some non-bilipschitz behavior is allowed, but with limitations on this, and in a quantitative way. In this paper we look at a class of mappings called $(s,t)$-regular mappings. These mappings are the same as ordinary regular mappings when $s = t$, but otherwise they behave somewhat like projections. In particular, they can map sets with Hausdorff dimension $s$ to sets of Hausdorff dimension $t$. We mostly consider the case of mappings between Euclidean spaces, and show in particular that if $f\colon {\mathbf R}^s\to {\mathbf R}^n$ is an $(s,t)$-regular mapping, then for each ball $B$ in ${\mathbf R}^s$ there is a linear mapping $\lambda \colon {\mathbf R}^s\to {\mathbf R}^{s-t}$ and a subset $E$ of $B$ of substantial measure such that the pair $(f,\lambda )$ is bilipschitz on $E$. We also compare these mappings with the "nonlinear quotient mappings" of [6]. Let us begin with a brief review of "regular mappings" in the ordinary sense, and then proceed to a more general notion that allows for changes in dimension (like Hausdorff dimension). Regular mappings Let (M, d(x, y)) be a metric space. That is, M is a nonempty set, and d(x, y) is a nonnegative real-valued function on M × M which is symmetric in x and y, vanishes exactly when x = y, and satisfies the triangle inequality. Let (N, ρ(u, v)) be another metric space, and let f : M → N be a mapping. We say that f is Lipschitz if there is a constant C > 0 such that ρ(f(x), f(y)) ≤ C d(x, y) for all x, y ∈ M. (1.1) The mapping f is said to be bilipschitz if there is a constant C > 0 such that C^{-1} d(x, y) ≤ ρ(f(x), f(y)) ≤ C d(x, y) for all x, y ∈ M. (1.2) The notions of Lipschitz and bilipschitz mappings provide ways of making comparisons between different metric spaces. If two metric spaces are bilipschitz equivalent, so that there is a bilipschitz mapping from one of the spaces onto the other one, then the two spaces are practically the same in many respects. With Lipschitz mappings one has more flexibility, and the possibility that the spaces are quite different, because of compression of distances. One might add conditions to prevent distances from being compressed too much, without going all the way to bilipschitzness. For instance, one might ask that the image of the domain under a given mapping have positive mass with respect to some measure, like Hausdorff measure (in some dimension). Compare with [14], starting in Chapter 11. The notion of a regular mapping (as in the next definition) gives another way to put limits on the manner in which distances might be compressed by a Lipschitz mapping. Definition 1.3. Let (M, d(x, y)) and (N, ρ(u, v)) be two metric spaces. A mapping f : M → N between them is said to be regular if it is Lipschitz, and if there is a constant C > 0 such that for every ball B in N it is possible to cover f^{-1}(B) by at most C balls in M with radius equal to C · radius(B). The notion of regular mappings originally came up in [8], in a slightly different form. See also [14], [29] concerning this version. We shall say a bit more about this in Section 3, just after Lemma 3.3. For the record, when we refer to a "ball" in a metric space, this means an open ball, unless something else is indicated. In practice, this does not matter too much, though.
Remark 1.4. Recall that a metric space (M, d(x, y)) is said to be doubling if there is a constant C_1 so that every ball B in M can be covered by at most C_1 balls with radius equal to radius(B)/2. When this is true, one can iterate the condition to say that every ball can be covered by C_1^k balls with radius equal to radius(B)/2^k, k ∈ Z_+. If (M, d(x, y)) is doubling, then the definition of a regular mapping from M to another metric space N can be simplified, as follows. Instead of asking that f^{-1}(B) admit a covering by at most C balls with radius equal to C · radius(B), as above, one can ask that f^{-1}(B) admit a covering by a bounded number of balls in M with the same radius as B. In this paper, metric spaces which are doubling will normally be the primary focus. This includes Euclidean spaces R^n (with the standard Euclidean metrics), and subsets of these (with the induced metrics). Examples 1.5. Consider the real line R with its usual metric. The mapping f : R → R defined by f(x) = |x| is regular. This is easy to verify. As an extension of this, consider a mapping from R to R^2 which maps the two half-lines (−∞, 0] and [0, ∞) onto rays σ, τ in R^2 emanating from the origin. Assume that the mapping is linear on (−∞, 0] and [0, ∞), sending 0 ∈ R to the origin in R^2, and that it has "unit speed", i.e., the derivative is a vector of norm 1. This mapping is Lipschitz with constant 1, and also regular with bounded constant. In other words, the constant for the regularity condition can be taken to be bounded independently of the angle between σ and τ. As long as σ and τ are distinct, the mapping will be one-to-one, and in fact it will be bilipschitz. However, the bilipschitz constant tends to ∞ as the angle between σ and τ goes to 0, while the regularity constant remains bounded. In the limit, one recovers a copy of the first mapping mentioned above, i.e., f(x) = |x| on R. More generally, one can look at locally rectifiable curves in R^2 (or R^n), together with parameterizations of them by arclength. These parameterizations define regular mappings exactly when the curves in question are regular in the sense of [7] (as well as [1]). This means that the amount of arclength measure of the curve inside a disk of radius r is bounded by a constant times r. (Compare with Lemma 3.3.) In particular, with this class of examples, it is easy to see how regular mappings can cross themselves in the image numerous times, have cusps in the image, and so on. Bilipschitz mappings are automatically regular, but the converse is not true in general, as is indicated by the preceding examples. To make a nicer comparison with the notion of regular mappings, one can reformulate the bilipschitz property as follows. If M and N are metric spaces, then a mapping f : M → N is bilipschitz if it is Lipschitz, and if f^{-1}(B) is contained in a single ball in M of radius C · radius(B) for all balls B in N, where C is a constant that does not depend on B. A basic property of bilipschitz mappings is that they do not increase or decrease Hausdorff measures of a set (in the domain of the mapping) by more than a bounded factor (which may depend on the dimension for the Hausdorff measure that one is using). This is a well-known and straightforward consequence of the definition of Hausdorff measures, as in [16], [17], [25]. The same statement is true for regular mappings, and by nearly the same argument. See also Section 12.1 of [14].
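As a sketch of the verification left to the reader in Examples 1.5 (our own remark, not spelled out in the original text), one can check directly that f(x) = |x| on R is regular. For any ball B = (a − r, a + r) in the target R, the preimage f^{-1}(B) = {x ∈ R : a − r < |x| < a + r} is contained in (a − r, a + r) ∪ (−a − r, −a + r), i.e., in two intervals of radius r. Since f is also Lipschitz with constant 1, the covering condition of Definition 1.3 holds with C = 2.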
Regular mappings between dimensions Definition 2.1. Let (M, d(x, y)) and (N, ρ(u, v)) be metric spaces. Also let s, t be nonnegative real numbers, with s ≥ t. A mapping f : M → N is said to be (s, t)-regular if it is Lipschitz, and if there is a constant C so that the following holds: if B_1 is an arbitrary ball in N, and B_2 is an arbitrary ball in M with radius(B_2) ≥ radius(B_1), then the set f^{-1}(B_1) ∩ B_2 (2.2) can be covered by at most C (radius(B_2)/radius(B_1))^{s−t} (2.3) balls in M with radius equal to C · radius(B_1). If s = t, then this is equivalent to the notion of regular mappings from Definition 1.3. In general, the increment s − t allows for mappings which are more like projections, in which the dimension of the image is less than the dimension of the domain. Examples 2.4. Consider R^n and the real line R, with their Euclidean metrics. The mapping f : R^n → R given by f(x) = |x| (2.5) is (n, 1)-regular. This is not hard to verify; a sketch is given at the end of this section. Also, standard orthogonal coordinate projections from R^m onto R^n, m ≥ n, are (m, n)-regular. It is easy to make more examples like these. One can also make examples by combining ones like these with bilipschitz mappings and ordinary regular mappings, with crossings and so forth, as in Examples 1.5. As in Remark 1.4, if the metric space M is doubling, then one might as well look at coverings of (2.2) by balls with radius equal to radius(B_1), rather than C · radius(B_1). For the purposes of this paper, it is reasonable to restrict one's attention to metric spaces that are doubling. Note that the definition of (s, t)-regular mappings does not really depend on s and t separately, but only on the difference s − t. It is often nice to mention them explicitly anyway, and to take s to be the dimension of M (e.g., Hausdorff dimension). In common situations, t could be the dimension of f(M) (as in Examples 2.4), but a priori one has to be a bit careful about this. The (s, t)-regularity condition automatically becomes weaker when t becomes smaller (or s becomes larger), so that t might be less than its optimal value. In particular, it can be less than the dimension of f(M). In the context of this paper, t will often be the Hausdorff dimension of f(M). When s = t, a special feature occurs, which is that one has the class of bilipschitz mappings sitting inside the class of regular mappings in a distinguished way. Strictly speaking, when s > t, bilipschitz mappings are also (s, t)-regular, but this is somewhat degenerate. This does not happen if one asks that s be the Hausdorff dimension of the domain, and that t be the Hausdorff dimension of the image, for instance, since a bilipschitz mapping would preserve the Hausdorff dimension. There are special classes of mappings among (s, t)-regular mappings that one might consider, though, and which can be natural when s > t. For instance, if s − t ≥ 1, then one might look at (s, t)-regular mappings f with the additional property that f^{-1}(u) be connected for every point u in the image.
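Returning to Examples 2.4, here is a sketch (our own, filling in an argument the text leaves to the reader) of the verification that f(x) = |x| from R^n to R is (n, 1)-regular. The Lipschitz condition holds with constant 1, since ||x| − |y|| ≤ |x − y|. Now let B_1 = (a − r_1, a + r_1) be a ball in R and let B_2 be a ball in R^n with radius r_2 ≥ r_1. If a ≤ r_1, then f^{-1}(B_1) lies in the ball {|x| < 2 r_1}, and a single ball of radius 2 r_1 suffices. If a > r_1, then f^{-1}(B_1) is the open annulus {x : a − r_1 < |x| < a + r_1}, and every point of f^{-1}(B_1) ∩ B_2 lies within r_1 of the sphere S = {|x| = a}; the relevant portion of S lies within r_1 + r_2 ≤ 2 r_2 of the center of B_2, and a piece of an (n − 1)-dimensional sphere of diameter at most 4 r_2 can be covered by at most C(n) (r_2/r_1)^{n−1} balls of radius r_1. Doubling the radii of these balls to 2 r_1 then covers f^{-1}(B_1) ∩ B_2, which is the covering required by Definition 2.1 with s = n and t = 1.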
The authors originally considered the notion of (s, t)-regular mappings, and observed most of the results discussed in this paper, several years ago.We simply never got around to writing this up formally before.In the intervening time, some related matters have come up.One is the study of nonlinear quotients in [6].There they consider conditions along the lines of "co-continuity" and "co-Lipschitzness", in which images of balls in the domain contain balls in the range, with estimates from below for the radii of the balls in the image, and with the centers of the balls in the domain and range matching up under the mapping.These conditions are quite different from (s, t)-regularity, but there is also some overlap with (s, t)-regularity and related properties.We shall say more about this later, especially in Sections 8 and 9.As a basic instance of this, standard linear projections from R m onto R n give examples both for mappings that are Lipschitz and co-Lipschitz, as in [6], and for mappings that are (m, n)-regular.On the other hand, mappings like x → |x| do not behave well for the conditions in [6] (for balls centered at the origin), but they are accepted by the regularity conditions considered here. One of the issues considered in [6] is the way that nonlinear quotient mappings from an infinite-dimensional Banach space onto another one can respect a substantial amount of the Banach space structure.For instance, one might hope that a nonlinear quotient (which is uniformly continuous and co-uniformly continuous, or Lipschitz and co-Lipschitz) could actually be realized as a linear quotient, or has some properties like that.Some results of this type are discussed in [6]. To put this into perspective, there are analogous questions about Banach spaces which are bilipschitz equivalent, or homeomorphic through mappings which are uniformly continuous and have uniformly continuous inverse.In this case, it is natural to ask whether the two spaces are then linearly isomorphic.See [6] for references for results related to this. Let us emphasize that the nonlinear quotient conditions in [6] allow the fibers to have infinite dimension, and include linear quotient mappings from one Banach space onto another as a special case.With (s, t)-regular mappings, the fibers always have Hausdorff dimension less than or equal to s − t.This is not hard to verify, and there are a number of simple variants of this (concerning the (s − t)-dimensional behavior of the fibers of an (s, t)-regular mapping). The classes of nonlinear quotient mappings in [6] readily accommodate infinite-dimensional behavior, but they are quite nontrivial in purely finite-dimensional situations as well.See [6] for some aspects of this.In particular, there are examples in [6] (in finite and infinite dimensions) of mappings which are Lipschitz, co-uniformly continuous, and for which there is a nontrivial ball in a codimension-1 subspace of the domain which is sent to a single point in the image.More precisely, this is happening even though the image is not of dimension 1 (in which case it would be normal).This is quite different from what would happen with (s, t)-regular mappings, at least if s − t is strictly less than the dimension of the domain minus 1. 
In another direction, one might look at regular and (s, t)-regular mappings in comparison with quasiregular mappings, in the sense of [27], [28].See also [19], [20] in this regard.Actually, for this it is easier to start with the notion of mappings of bounded length distortion, in the sense of [24].These are also called BLD mappings.BLD mappings are quasiregular mappings in which the Jacobian is locally bounded, and uniformly bounded away from 0. This fits better with the notion of regular mappings (as in Definition 1.3), in that the regularity condition for a mapping on a Euclidean space ensures that the differential of the mapping is uniformly bounded, at every point where it exists (because of the Lipschitz condition), and that the absolute value of the Jacobian is uniformly bounded away from 0. Note that a regular mapping from R n to itself (say) does not have to have positive Jacobian, or Jacobian of constant sign.Changing of sign in the Jacobian occurs with the mapping x → |x| on R, for instance.For mappings between arbitrary metric spaces, something like having the Jacobian be of constant sign does not really make sense anyway.Positivity of the Jacobian is an important condition for quasiregular and BLD mappings, however.In particular, x → |x| on the real line is not quasiregular or BLD, because of the change in sign of the Jacobian. In general, quasiregular mappings are allowed to have Jacobians and differentials which are not bounded or bounded from below, even locally.The concepts of BLD and quasiregular mappings are closer than they might seem at first, in the sense that one can modify the geometry of the domain of a mapping, using the weight that comes from the Jacobian, to put oneself in the situation where the mapping has Jacobian equal to 1 (with respect to the new geometry).Compare with [20], especially Section 2.3. Quasiregular mappings are also discussed in [6], in connection with the classes of nonlinear quotient mappings considered there.Note that quasiregular mappings are always open mappings (i.e., they send open sets to open sets), like the nonlinear quotient mappings considered in [6]. We should perhaps mention that quasiregular (and BLD) mappings are always discrete, i.e., the inverse-image of a point under the mapping is always a discrete set.This holds automatically for regular mappings in the sense of Definition 1.3. The notions of quasiregular and BLD mappings involve having the domains and ranges of the mappings be of the same dimension.With nonlinear quotient mappings as in [6], and (s, t)-regular mappings as in Definition 2.1, one has related classes that allow for the domain and image to have different dimensions.Once the dimensions are permitted to be different, a number of things change, and it is not necessarily so clear what one might want to view as analogues of quasiregular or BLD mappings (if there are any proper analogues). One might also think of quasiregular and BLD mappings in connection with quasiconformal and bilipschitz mappings.That is, they are nearly the same, except for giving up the requirement of injectivity, even locally.When the dimensions of the domain and image are allowed to be different, this aspect changes too. The class of (s, t)-regular mappings is also useful in [31].See Section 16.3 in [31] in particular. 
The main result of this paper will be stated in Section 6, and proved in Section 7. This concerns the way that, given a ball B, one can add m − t components to an (m, t)-regular mapping f : R^m → R^n, to get a mapping which is bilipschitz on a subset of B of substantial measure (compared to the measure of B), with uniform bounds. The situation for ordinary regular mappings, for which stronger assertions are known, is reviewed before that, in Section 4. Analogues of these stronger statements in the general case do not work, and this is discussed in Section 5. We also review in Sections 5 and 6 some more classical results about Lipschitz mappings, which help to indicate the differences between the two types of situations (where dimensions are preserved or not). Sections 8 and 9 contain some comparisons and extensions, in connection with co-Lipschitz mappings in particular, and Section 10 describes a more complicated version of the main result which is more global. In the next section, we review the notion of Ahlfors-regular metric spaces, and mention a few facts related to them and to (s, t)-regular mappings. Ahlfors-regular spaces Definition 3.1. Let (M, d(x, y)) be a metric space, and let s be a positive real number. We say that M is Ahlfors-regular of dimension s if it is complete, and if there is a constant C > 0 so that C^{-1} r^s ≤ H^s(B(x, r)) ≤ C r^s (3.2) for all x ∈ M and r > 0 such that r ≤ diam M. Here H^s(A) denotes the s-dimensional Hausdorff measure of a set A (as in [16], [17], [25]), and B(x, r) denotes the closed ball in M with center x and radius r. For this definition, one might add the requirement that M have at least two elements, to avoid trivialities. It is a standard fact that Ahlfors-regular metric spaces are always doubling. This is not too hard to show, and it is given in Lemma 5.1 on p. 19 of [14]. (There is a small adjustment needed for this, which is that one should allow arbitrary radii R in Lemma 5.1 in [14], and not just R's with R ≤ diam M. Alternatively, it is enough to consider only radii R ≤ diam M if one uses closed balls in M. We were careful to do this in Definition 3.1, and we should have done it in Definition 1.1 in [14]. This is only a small technical point, but one can give examples where it is an issue.) For Ahlfors-regular spaces, the notions of regular and (s, t)-regular mappings can be given slightly different characterizations, as follows. Lemma 3.3. Let (M, d(x, y)) be Ahlfors-regular of dimension s, let (N, ρ(u, v)) be a metric space, and let t be a real number with 0 < t ≤ s. A Lipschitz mapping f : M → N is (s, t)-regular if and only if there is a constant C > 0 such that H^s(f^{-1}(B_1) ∩ B_2) ≤ C radius(B_1)^t radius(B_2)^{s−t} (3.4) for all balls B_1 in N and B_2 in M with radius(B_2) ≥ radius(B_1). In the implications in this lemma, the constants that occur in the conclusions can be bounded in terms of constants that occur in the hypotheses (as usual). When s = t, (s, t)-regularity becomes regularity in the sense of Definition 1.3, and the condition in (3.4) can be reduced to the requirement that H^s(f^{-1}(B_1)) ≤ C radius(B_1)^s (3.5) for all balls B_1 in N. This is because one can take the ball B_2 in M to be arbitrarily large, and the right-hand side of (3.4) does not depend on B_2 when s = t. The original definition of regular mappings in [8] was given in terms of (3.5), rather than the covering condition in Definition 1.3. This had its genesis in the case of "regular curves" (as in [7], and Examples 1.5), where (3.5) can be interpreted as saying that the amount of arclength of a given curve inside a ball B_1 is less than or equal to C · radius(B_1). The case of s = t in Lemma 3.3 is fairly standard, and is given explicitly in Lemma 12.6 on p. 103 in [14]. Lemma 3.3 can be proved in essentially the same manner. Let us briefly go through the argument.
If f : M → N is (s, t)-regular, then one can get the bound (3.4) directly from the definitions. That is, if B_1 and B_2 are as above, then one can cover f^{-1}(B_1) ∩ B_2 by a constant times (radius(B_2)/radius(B_1))^{s−t} balls with radius equal to C_0 • radius(B_1), by (s, t)-regularity, and each of these has H^s-measure less than or equal to a constant times radius(B_1)^s, by the upper bound in Ahlfors-regularity. This gives (3.4).

Conversely, suppose that f : M → N is Lipschitz and satisfies the condition in Lemma 3.3, and let us show that f is (s, t)-regular. Let balls B_1 and B_2 in N and M be given as before. We would like to show that f^{-1}(B_1) ∩ B_2 can be covered by a constant times (radius(B_2)/radius(B_1))^{s−t} balls with the same radius as B_1.

The main point is to use mass bounds and simple covering arguments. By hypothesis, f satisfies the condition in Lemma 3.3, and we can apply this to the balls 2B_1 and 2B_2 to get that

H^s(f^{-1}(2B_1) ∩ 2B_2) ≤ 2^s C radius(B_1)^t radius(B_2)^{s−t}. (3.6)

The set f^{-1}(B_1) ∩ B_2 is contained in f^{-1}(2B_1) ∩ 2B_2, and it does this with some "extra room". Specifically, if f is Lipschitz with constant k, and if x lies in f^{-1}(B_1) ∩ B_2, then f^{-1}(2B_1) ∩ 2B_2 contains the ball in M with center x and radius equal to min(k^{-1}, 1) • radius(B_1).

From here one would like to obtain that f^{-1}(B_1) ∩ B_2 can be covered by at most a constant times (radius(B_2)/radius(B_1))^{s−t} balls with the same radius as B_1. To do this, let A be a subset of f^{-1}(B_1) ∩ B_2 such that

d(a, a') ≥ radius(B_1) for all a, a' ∈ A with a ≠ a'. (3.7)

This implies that the balls B(a, radius(B_1)/2), a ∈ A, (in M) are pairwise disjoint. Hence the balls

B(a, min(k^{-1}, 1/2) radius(B_1)), a ∈ A, (3.8)

are pairwise disjoint too. These balls are all contained in f^{-1}(2B_1) ∩ 2B_2, as in the previous paragraph. Their union therefore has H^s measure bounded by the right-hand side of (3.6). The disjointness of these balls implies that the measure of their union is equal to the sum of their measures. The measure of each ball is at least a constant times radius(B_1)^s, by the lower bound in Ahlfors-regularity. The combination of these upper and lower bounds implies that the total number of elements of A is bounded by a constant times (radius(B_2)/radius(B_1))^{s−t} (independently of the choice of A).

On the other hand, if A is a maximal subset of f^{-1}(B_1) ∩ B_2 which satisfies (3.7), then f^{-1}(B_1) ∩ B_2 is covered by the balls B(a, radius(B_1)), a ∈ A. Indeed, if a point x lay in f^{-1}(B_1) ∩ B_2 but not in any B(a, radius(B_1)), a ∈ A, then one could add x to the set A to get a larger set which still satisfies (3.7), and this would contradict the maximality of A. Thus, by choosing A to be maximal, one can get a covering of f^{-1}(B_1) ∩ B_2 with the required properties. This completes the proof of Lemma 3.3.

Remark 3.9. Although the statement of Lemma 3.3 asks that t be positive, the same assertion and proof work when t = 0. In this case, (3.4) becomes an inequality that holds automatically when M is Ahlfors-regular of dimension s. In other words, any Lipschitz mapping on M is (s, t)-regular when t = 0 and M is Ahlfors-regular of dimension s. This works more generally for spaces which are semi-regular of dimension s, in the terminology of Definition 5.6 on p. 24 of [14]. The latter is equivalent to s-homogeneity in the sense of [2]. See [23] for some additional topics related to this property (in connection with topological dimension in particular).

Here is another fact about regular mappings and Ahlfors-regular spaces.

Lemma 3.10. Let (M, d(x, y)) and (N, ρ(u, v)) be metric spaces, and let f : M → N be a mapping which is regular (in the sense of Definition 1.3). If M is Ahlfors-regular of some dimension s, then f(M) is Ahlfors-regular of dimension s as well.
As usual, the constants involved in the Ahlfors-regularity of f(M) are bounded by constants that depend only on ones implicit in the regularity assumption for f, and the Ahlfors-regularity condition for M. This lemma arises in [8], in a slightly different form. See also Lemma 12.5 on p. 103 of [14]. The main point is simply that Hausdorff measure is preserved, to within bounded factors, by regular mappings (as mentioned in Section 1). This leaves the completeness condition in Definition 3.1, but in fact closed and bounded subsets of Ahlfors-regular spaces are compact (because of completeness and the fact that bounded sets are "totally bounded", by the doubling property), and one can use this to get completeness of f(M) (since compactness is preserved by taking images under continuous mappings).

The analogue of Lemma 3.10 for (s, t)-regular mappings does not work. That is, if f : M → N is (s, t)-regular, and M is Ahlfors-regular of dimension s, then it may not be true that f(M) is Ahlfors-regular of dimension t. As stated, this has no hope of being true, since an (s, t)-regular mapping is always (s, u)-regular when u ≤ t. In any case, for (s, t)-regular mappings, one does not have a means by which to get upper bounds for the Hausdorff measure of sets in the image, as one has for ordinary regular mappings (the case when s = t). It is not hard to get lower bounds, however, just using the definitions; see Lemma 16.38 of [31]. If one knows upper bounds for the image in advance (e.g., if N satisfies the upper bounds associated to Ahlfors-regularity of dimension t), then one is in better shape. In particular, in this case the choice of t would be unique, and as large as possible.

Regular mappings between Euclidean spaces

In this section, we shall consider the special case where the metric spaces involved are Euclidean spaces. We shall also restrict ourselves to regular mappings (rather than (s, t)-regular mappings, to which we shall return later). We begin by recalling a result from [9].

Theorem 4.1. Let m and n be positive integers, with n ≥ m. If f : R^m → R^n is a regular mapping (using the usual Euclidean metrics on R^m and R^n), then for each ball B in R^m there is a set E ⊆ B such that

H^m(E) ≥ δ H^m(B) (4.2)

and

C^{-1} |x − y| ≤ |f(x) − f(y)| ≤ C |x − y| for all x, y ∈ E. (4.3)

Here δ and C are constants that depend only on m, n, and the constants involved in the regularity condition for f (and not on B or f).

Notice that if f is bilipschitz on a set E, then f is automatically bilipschitz on the closure of E as well (since f is continuous).

See Proposition 1 on p. 95 of [9] for this result. Note that there are stronger statements in [9] (from which this is derived), which permit one to obtain a "large bilipschitz piece", as above, from a Lipschitz condition and information at a single location and scale, rather than an assumption like regularity. In particular, if f : R^m → R^n is Lipschitz and B is a ball in R^m such that H^m(f(B)) ≥ η H^m(B) for some η > 0, then one can find a set E ⊆ B which satisfies (4.2) and (4.3) above, with constants that depend only on m, η, and the Lipschitz constant for f. See also [10].

In [21], alternate methods are given from which similar conclusions can be derived. The statements in [21] give a somewhat stronger conclusion; instead of having a single "large bilipschitz piece" in a ball B, as above, one gets that for each ε > 0 it is possible to find subsets of B on which f is bilipschitz, and which cover all of B except for a set of measure less than ε H^m(B). Here the constants for the bilipschitz conditions, and the number of subsets of B that are used, are bounded in terms of constants that depend on ε, m, n, and the regularity constant for f.
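Before continuing, here is a toy illustration of the kind of conclusion provided by Theorem 4.1 (this sketch is added for the present exposition and is not an argument from [9] or [21]): for the regular mapping f(x) = |x| on R, every ball B contains an explicit subset E of measure at least half that of B on which f is an isometry.

```python
# Toy illustration of Theorem 4.1 on the regular map f(x) = |x| on R: every
# ball B = [a - r, a + r] contains a set E with |E| >= |B|/2 = r on which f is
# bilipschitz with constant 1 (take the half of B on which the sign is constant).
def big_bilipschitz_piece(a, r):
    """Return an interval E inside B = [a - r, a + r] of length >= r on which
    x -> |x| is an isometry."""
    left, right = a - r, a + r
    if abs(left) >= abs(right):          # the nonpositive part of B is the longer half
        return (left, min(right, 0.0))
    return (max(left, 0.0), right)       # otherwise keep the nonnegative part

for a, r in [(0.0, 1.0), (0.3, 2.0), (-5.0, 1.0)]:
    e0, e1 = big_bilipschitz_piece(a, r)
    assert e1 - e0 >= r                  # measure at least half of |B| = 2r
    print((a, r), "->", (e0, e1))
```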
As with [9], the results in [21] can be applied to a Lipschitz mapping at a single location and scale, without as restrictive an assumption as regularity. In general, the "bilipschitz pieces" cover all of B except for a set whose image has small measure. This set may not have small measure itself, but this is the case when the mapping is regular, since regular mappings preserve Hausdorff measures to within bounded factors.

Let us mention a corollary to Theorem 4.1, using the following definition.

Definition 4.4. Let (N, ρ(u, v)) be a metric space. We say that N is uniformly rectifiable if it is Ahlfors-regular of dimension n, for some positive integer n, and if there are constants θ, k > 0 with the following property: for each x ∈ N and r > 0 such that r ≤ diam N, there is a subset A of the ball B(x, r) with H^n(A) ≥ θ r^n and a bilipschitz mapping from A into R^n, where the bilipschitz constant for the mapping is bounded by k.

As a special case, N is uniformly rectifiable if it is bilipschitz equivalent to R^n. Uniform rectifiability is more flexible than that, allowing partial parameterizations (at all scales and locations), as in Definition 4.4. See [11], [13] for some other characterizations of uniform rectifiability. See [16], [17], [25] concerning classical notions of rectifiability.

Corollary 4.5. Let m and n be positive integers with n ≥ m, and let f : R^m → R^n be a regular mapping. Then f(R^m) is Ahlfors-regular of dimension m and uniformly rectifiable, with constants that depend only on m, n, and the constants in the regularity condition for f.

This is an easy consequence of Theorem 4.1 (and Lemma 3.10). One can get a slightly stronger conclusion than uniform rectifiability, which is that f(R^m) has "big pieces of Lipschitz graphs". See Proposition 1 on p. 95 of [9].

An earlier analysis of subsets of R^n which are images of Euclidean spaces under regular mappings was given in [8]. This analysis went in a similar direction as Theorem 4.1, if not with quite the same final conclusions. In particular, it was shown in [8] that large classes of singular integral operators are bounded on L^p for Ahlfors-regular sets which are regular images of a Euclidean space. See also [10] for more on these topics.

More classical results about Lipschitz mappings

Let f : R^m → R^n be a Lipschitz mapping. A basic fact is that such a mapping is always differentiable almost everywhere. See [17], [25], [30], [32], for instance.

Now suppose that f : R^m → R^n is Lipschitz, and that A is a subset of R^m such that H^m(f(A)) > 0. In this case there are points in A at which f is differentiable, and such that the rank of the differential is m. This can be derived from the following two statements. If E denotes the set of points in R^m at which f is not differentiable, then E has measure 0, and this implies that

H^m(f(E)) = 0 (5.1)

as well, since f is Lipschitz. On the other hand, if L denotes the set of points at which f is differentiable, but the differential of f has rank less than m, then

H^m(f(L)) = 0. (5.2)

This is a well-known result, which can be established through covering arguments. See Theorem 7.6 on p. 103 of [25], or Theorem 3.2.3 on p. 243 of [17]. Thus, if H^m(f(A)) > 0, then A should not be wholly contained in the union of the sets E and L, since

H^m(f(E ∪ L)) = 0. (5.3)

This implies that A contains a set of positive measure on which f is differentiable, and where the differential always has rank m.

Here is another fact related to these.

Proposition 5.4. Let f : R^m → R^n be a Lipschitz mapping. Then (in the notation above), R^m \ (E ∪ L) can be covered by a countable family of sets, on each of which f is bilipschitz.

This is contained in Lemma 3.2.2 on p.
242 of [17]. The latter gives some additional properties that one can have for f on the sets which are used to cover R^m \ (E ∪ L). Proposition 5.4 provides a more classical version of some of the themes in Section 4. It deals with similar types of properties, but in a fashion that does not give the same type of quantitative estimates.

We can use this as a kind of testing ground for questions about mappings between dimensions. Specifically, let f : R^m → R^n be a Lipschitz mapping again, and imagine now that we are in a situation where the expected dimension of the image of f is less than m. For simplicity, let us just assume that n < m, and that we expect the image of f to be n-dimensional. We still have that f is differentiable almost everywhere. The largest that the rank of the differential of f could be is n, since f takes values in R^n.

When the rank of the differential of f is equal to n, one is in pretty good shape. One can use results like Proposition 5.4 to say something about the behavior of f. We shall return to this in Section 6.

There is an important difference between this situation and the previous one, where the expected dimension of the image is equal to the dimension of the domain. In the previous case, one can determine (as above) that there are many points in the domain at which the differential of f exists and has rank m, simply given the information that the image of f has positive m-dimensional measure (and that f is Lipschitz). This type of argument does not work in the present circumstances.

Indeed, there are striking results to the effect that one can have mappings f : R^m → R^n, m > n > 1, such that

f is C^1 (and better than that), (5.5)

the differential of f has rank < n at all points in R^m, and (5.6)

the image of f contains a nonempty open set in R^n. (5.7)

See [3], [5], [22], [33]. One can in fact ask that the differential of the mapping f have rank less than or equal to ℓ everywhere, with ℓ taken to be any integer in the range 1 ≤ ℓ < n, and one can get a degree of smoothness (including C^1) which depends on m, n, and ℓ. In other words, ℓ = 1 is the most impressive in terms of conditions on the differential of f, but then one does not get as much smoothness as when one allows larger ℓ's.

These results are related to Sard's theorem, and the examples that show that sufficiently-strong smoothness hypotheses are needed (to conclude that images of critical sets have measure 0 in a given dimension). See Section 3.4 of [17] and [4], for instance. In the context of the preceding paragraph, all points in R^m are critical points, which gives a more extreme situation.

(m, t)-Regular mappings between Euclidean spaces

Theorem 6.1. Let m, n, and t be positive integers, with t ≤ m. Suppose that f : R^m → R^n is an (m, t)-regular mapping, as in Definition 2.1. Then there are positive constants δ, C so that for each ball B in R^m, there is a linear mapping λ : R^m → R^{m−t} and a subset E of B with the following properties:

H^m(E) ≥ δ H^m(B), (6.2)

C^{-1} |x − y| ≤ |(f(x), λ(x)) − (f(y), λ(y))| ≤ C |x − y| for all x, y ∈ E, (6.3)

and the norm of λ is at most C. (6.4)

In other words, the combined mapping (f, λ) : R^m → R^n × R^{m−t} is bilipschitz on E, with a bounded constant. Here δ and C depend only on m, n, t, and the constants implicit in the (m, t)-regularity condition for f (and not on B or the particular mapping f).
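As a sanity check on the shape of this statement, here is a toy instance (added for illustration; it is not the construction used in the proof): for the (2, 1)-regular projection f(x, y) = x, the single added coordinate λ(x, y) = y already turns (f, λ) into the identity of R^2, so a condition like (6.3) holds on all of any ball.

```python
# Toy instance of "adding m - t components": for the (2,1)-regular projection
# f(x, y) = x, adding the linear coordinate lambda(x, y) = y makes (f, lambda)
# the identity of R^2, which is bilipschitz (in fact an isometry) everywhere.
import numpy as np

rng = np.random.default_rng(0)

def f(p):        # Lipschitz, and preimages of intervals are strips
    return p[..., 0]

def lam(p):      # the one added linear coordinate (m - t = 1 here)
    return p[..., 1]

pts = rng.uniform(-1.0, 1.0, size=(500, 2))          # sample points of the unit cube
F = np.stack([f(pts), lam(pts)], axis=-1)             # the combined map (f, lam)
d_dom = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
d_img = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
mask = d_dom > 0
ratios = d_img[mask] / d_dom[mask]
print(ratios.min(), ratios.max())                     # both 1.0: (f, lam) is an isometry
```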
Note that it makes sense to think about the "norm of λ", as in (6.4), since λ is a linear mapping (which can be represented by a matrix).Exactly what norm one uses does not really matter, since it will only effect the bound to within a constant factor.For the standard "operator norm", our choice of λ will have norm equal to 1, and λ will in fact be a composition of an orthogonal projection and an isometry onto R m−t .Theorem 6.1 is analogous to Theorem 4.1, but for (m, t)-regular mappings rather than ordinary regular mappings.One might also think of the two of them as being similar to the implicit and inverse function theorems.In particular, one way to deal with the implicit function theorem is to add extra components to the given mapping to get one whose differential is invertible, and to which the inverse function theorem is applicable. Unlike Theorem 4.1, Theorem 6.1 does not have such broad extensions to Lipschitz mappings in general, under mild conditions on the size of the image (as in [9], [21]).The results in [3], [22], [33], mentioned in the paragraph of (5.5), (5.6), (5.7), give strong limitations to this.Thus the assumption of (m, t)-regularity is more crucial here. Although one does not have such broad results in this context as before, the proof of Theorem 6.1 is fairly robust, and admits a number of variants.In particular, let us point out that the result is local, in that one could start out with a mapping defined on a ball, for instance (rather than all of R m ), and obtain analogous conclusions.A number of things like this will be clear from the proof.Another result along the lines of Theorem 6.1 will be discussed in Section 10. In analogy with Corollary 4.5, we have the following.Corollary 6.5.Let m, n, and t be positive integers, with t ≤ m.Assume that f : R m → R n is (m, t)-regular, and that the image f (R m ) is Ahlfors-regular of dimension t.Then f (R m ) is uniformly rectifiable (Definition 4.4), with bounds for the uniform rectifiability constants which depend only on the constants implicit in the hypotheses. As with Corollary 4.5, one also has the slightly stronger "big pieces of Lipschitz graphs" property.We shall say more about this in Section 7. Note that Ahlfors-regularity of f (R m ) is an assumption in Corollary 6.5, while it is part of the conclusion in Corollary 4.5.This came up already in Section 3, with Lemma 3.10 and the remarks following it.As indicated there, one can get lower bounds for f (R m ) as in t-dimensional Ahlfors-regularity when f is (m, t)-regular, but not upper bounds.(One could use this to weaken the hypotheses of Corollary 6.5.) We shall discuss proofs for Theorem 6.1 and Corollary 6.5 in Section 7.For the moment let us consider some more "classical" statements, in the spirit of Section 5. Let f : R m → R n be a Lipschitz mapping.As before, f is automatically differentiable almost everywhere. Assume that f is also (m, t)-regular.This implies that (6.6) the rank of df x is at least t, at any point x ∈ R m where the differential exists. It is not hard to verify this assertion, directly from the definitions. For the rest of this discussion, we shall use only (6.6), and not the condition of (m, t)-regularity.Let λ 1 , λ 2 , . . 
., λ_j be a finite collection of linear mappings from R^m to R^{m−t}, which is sufficiently rich so that the following is true:

for every linear mapping α : R^m → R^n whose rank is at least t, there is an i, 1 ≤ i ≤ j, such that the combined mapping (α, λ_i) : R^m → R^n × R^{m−t} is injective. (6.7)

This is equivalent to saying that if P is any plane in R^m of dimension less than or equal to m − t (such as the kernel of α), then there is an i so that the kernel of λ_i is transverse to P. This can easily be arranged.

Lemma 6.8. Under the conditions described above, there is a countable family of sets {E_ℓ} in R^m such that the union of the E_ℓ's covers all of R^m except for a set of measure 0, and for each ℓ there is an i, 1 ≤ i ≤ j, so that the restriction of (f, λ_i) to E_ℓ is bilipschitz.

This is analogous to Proposition 5.4, and it has much the same relationship to Theorem 6.1 as Proposition 5.4 has with Theorem 4.1. In particular, Theorems 4.1 and 6.1 give quantitative conclusions, while Proposition 5.4 and Lemma 6.8 do not, even if they involve similar kinds of basic structure.

It is not hard to derive Lemma 6.8 from Proposition 5.4. Specifically, one would apply Proposition 5.4 to the mappings (f, λ_i), 1 ≤ i ≤ j. Almost every element of R^m is a point of differentiability for f, and the differential always has rank at least t, as in (6.6). Because of (6.7), for each point of differentiability x ∈ R^m, there is at least one i so that the differential of (f, λ_i) will be injective at that point. This ensures that the sets provided by Proposition 5.4, on which the mappings (f, λ_i) are bilipschitz, cover almost all of R^m, when one takes the union also over i. This gives Lemma 6.8.

Proofs for Theorem 6.1 and Corollary 6.5

To prove Theorem 6.1, we shall use some quantitative results about approximating Lipschitz functions by affine functions. We first review some aspects of this, before proceeding to the main part of the argument.

Let h be a real-valued function on R^m, and let A denote the set of real-valued affine functions on R^m. Given x ∈ R^m and r > 0, define α(x, r) by

α(x, r) = inf_{A ∈ A} sup_{y ∈ B(x, r)} r^{-1} |h(y) − A(y)|. (7.1)

This quantity measures how well h can be approximated by an affine function on B(x, r). In particular, α(x, r) = 0 exactly when h is equal to an affine function on B(x, r). If h is differentiable at x, then

α(x, r) → 0 as r → 0. (7.2)

The converse to this last statement is not true in general, i.e., (7.2) may hold even if h is not differentiable at x. This is because the affine approximations A(y) to h may move around significantly while r tends to 0.

Notice that if h is Lipschitz with constant C_0, then

α(x, r) ≤ C_0 for all x ∈ R^m and r > 0. (7.3)

This follows by taking A(y) to be the constant function equal to h(x) in (7.1). The boundedness of the α(x, r)'s does not imply that h be Lipschitz, but corresponds instead to the "Zygmund class" of functions.

It is easy to check that α(x, r) is continuous in x and r when h is continuous, so that α(x, r) is measurable in particular.

Theorem 7.4. Let h be a real-valued Lipschitz function on R^m, and let ε > 0 be given. Put

B(ε) = {(x, r) ∈ R^m × (0, ∞) : α(x, r) ≥ ε}. (7.5)

Then there is a constant C so that for each z ∈ R^m and s > 0 we have that

∫_0^s ∫_{B(z, s)} 1_{B(ε)}(x, r) dx dr/r ≤ C s^m. (7.6)

(A subset of R^m × (0, ∞) whose indicator function satisfies a bound like this for all z and s is called a Carleson set, and (7.6) is a Carleson condition.) Here 1_{B(ε)}(x, r) denotes the indicator function of B(ε), so that 1_{B(ε)}(x, r) is equal to 0 when (x, r) does not lie in B(ε), and is equal to 1 when (x, r) ∈ B(ε). Also, dx and dr refer to ordinary Lebesgue measure on R^m and (0, ∞). The constant C in (7.6) can be chosen so that it depends only on m, ε, and the Lipschitz constant for h.

In other words, α(x, r) is small most of the time, in the sense that the exceptional set B(ε) is "sparse" for every ε > 0, as in the Carleson condition above. A key point here is that the integral in (7.6) would diverge if one did not have the indicator function in the integrand, to restrict the integration to pairs (x, r) in B(ε).
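The quantity α(x, r) can be computed, at least approximately, on simple examples. The sketch below is an illustration added here (the sup-norm minimization in (7.1) is replaced by a least-squares fit, which can only overestimate the infimum): for the Lipschitz function h(x) = |x| on R, the computed values are essentially zero on balls that avoid the corner at the origin, are of size 1/2 on balls centered at it, and never exceed the Lipschitz constant 1, consistent with (7.3) and with the Carleson-type behaviour described in Theorem 7.4.

```python
# Numerical estimate of alpha(x, r) from (7.1) for h(x) = |x| on R, using a
# least-squares affine fit on B(x, r); the reported value is an upper bound
# for the infimum over affine functions.
import numpy as np

def alpha(x, r, samples=2001):
    y = np.linspace(x - r, x + r, samples)
    h = np.abs(y)
    coeffs = np.polyfit(y, h, 1)                 # affine fit a*y + b
    return np.max(np.abs(h - np.polyval(coeffs, y))) / r

for x, r in [(5.0, 1.0), (0.0, 1.0), (0.1, 1.0), (0.0, 100.0)]:
    print((x, r), round(alpha(x, r), 3))         # ~0, 0.5, in between, 0.5
```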
Theorem 7.4 can be derived from the results of [15], and indeed it is definitely weaker than the information provided by [15]. See also [21] and [13]. In the terminology of [13], the property in the conclusions of Theorem 7.4 is sometimes called the WALA (weak approximation of Lipschitz functions by affine functions), as in Definition 2.47 on p. 45 of [13], and the remarks just after it. The word "weak" is used to distinguish this condition from stronger quadratic estimates on the α(x, r)'s and some variants of them (based on integral norms in (7.1) instead of the supremum), as in [15] and p. 18 of [13]. See also Remark 2.28 on p. 336 of [13], concerning a derivation of Theorem 7.4.

Corollary 7.7. Let h : R^m → R be a Lipschitz function, and let ε > 0 be given. Then there is a constant k so that for each x ∈ R^m and r > 0 there exist x_1 ∈ R^m and r_1 > 0 such that B(x_1, r_1) ⊆ B(x, r), r_1 ≥ k^{-1} r, and α(x_1, r_1) ≤ ε.

This constant k can be chosen so that it depends only on m, ε, and the Lipschitz constant for h. Thus, although α(x, r) might not be too small itself, there is always a pair (x_1, r_1) which is not too far from (x, r) in R^m × (0, ∞) (in terms of hyperbolic geometry) such that α(x_1, r_1) is as small as one likes.

The conclusion of Theorem 7.4 is significantly stronger than that of Corollary 7.7, by saying that α(x, r) should be small most of the time, on the whole, and not just at regular intervals. We shall explain how Corollary 7.7 can be derived from Theorem 7.4 in a moment.

The property described in Corollary 7.7 and variants of it are discussed in [6], in the context of mappings between Banach spaces more generally. In particular, more direct proofs of Corollary 7.7 are given in [6], which apply to broader situations. The results of [6] also show that k can be taken to be independent of m.

To derive Corollary 7.7 from Theorem 7.4, let h and ε be given as in the statement of the corollary, as well as x and r. Let us restrict our attention to x_1, r_1 such that |x_1 − x| ≤ r/2 and r_1 ≤ r/2. These conditions imply that B(x_1, r_1) ⊆ B(x, r).

Let k be a positive number, greater than or equal to 2, to be chosen soon. Consider the integral

∫_{k^{-1} r}^{r/2} ∫_{B(x, r/2)} 1_{B(ε)}(x_1, r_1) dx_1 dr_1/r_1. (7.8)

The Carleson condition (7.6) implies that this integral is bounded by a constant times r^m, where the constant depends only on m, ε, and the Lipschitz constant for h, and not on k. To obtain the conclusions of Corollary 7.7, it suffices to show that there is a pair (x_1, r_1) with |x_1 − x| ≤ r/2 and k^{-1} r ≤ r_1 ≤ r/2 such that (x_1, r_1) does not lie in B(ε) (so that α(x_1, r_1) ≤ ε). If this were not the case, then the integral in (7.8) would reduce to

∫_{k^{-1} r}^{r/2} ∫_{B(x, r/2)} dx_1 dr_1/r_1 = (volume of the unit ball in R^m) • (r/2)^m • log(k/2). (7.9)

This would be too large, compared to the upper bound that we have, if k is taken large enough. How large k has to be depends only on m, ε, and the Lipschitz constant for h. This gives Corollary 7.7.

Let us be a bit more explicit about the conclusions of Corollary 7.7. One gets a pair (x_1, r_1) so that α(x_1, r_1) ≤ ε, and this implies that there is an affine function A : R^m → R such that

|h(y) − A(y)| ≤ α(x_1, r_1) r_1 ≤ ε r_1 for all y ∈ B(x_1, r_1). (7.10)

The norm of the gradient for A can be bounded in terms of the Lipschitz constant for h. This is something that comes out of standard proofs of Theorem 7.4, and one can also derive it directly. Specifically, one can bound the maximal oscillation of A on B(x_1, r_1) in terms of the corresponding oscillation for h, and the norm of the gradient of A is bounded by its maximal oscillation on B(x_1, r_1) divided by r_1, because it is linear.
Let us emphasize that the bound for the gradient of A that one gets does not depend on , i.e., it does not blow up when is small.The price for taking small comes in the constant k in Corollary 7.7, which controls how "far" from (x, r) one might have to go to get a pair (x 1 , r 1 ) for which there is a good affine approximation.Now let us prove Theorem 6.1.Let f : R m → R n be a mapping which is (m, t)-regular, as in the statement of Theorem 6.1.Let a ball B in R m be given. Let be a small positive number, to be chosen later.We would like to apply Corollary 7.7 to get a ball B 1 ⊆ B such that radius(B 1 ) ≥ k −1 radius(B), (7.11) and so that there is an affine mapping A : R m → R n which satisfies The only problem with this is that f takes values in R n now, rather than R.However, it is easy to extend Theorem 7.4 and Corollary 7.7 to the case of R n -valued functions.For Theorem 7.4, one can obtain the R n -valued case by applying the version for real-valued functions to the components of an R n -valued mapping, and combining the information that one gets for them.The analogue of the set B( ) (defined in (7.5)) for the R n -valued mapping can be viewed as a subset of the union of corresponding sets for the individual components (with adjustments in the choice of for them to get in the end for f ).The Carleson condition (7.6) for the larger set then follows from analogous conditions for the individual pieces, as desired for the conclusions of Theorem 7.4. For that matter, common proofs of Theorem 7.4 extend easily enough to the vector-valued case, and would give better information about the constants.Once one has Theorem 7.4 for R n -valued functions, one can derive an R n -valued version of Corollary 7.7 from it in the same manner as before. It is also possible to show that Corollary 7.7 directly implies a version of itself for vector-valued functions.See Proposition 2.2 of [6], which allows general Banach spaces in the domain.On the other hand, there are results in [6] concerning versions of Corollary 7.7 in which the domain is finite-dimensional, but the range is infinite-dimensional.There are results in [6] as well about trouble that occurs when both the domain and the range are infinite-dimensional. At any rate, one can get an affine approximation on B 1 as in (7.11) and (7.12) for a suitable constant k.As before, in the remarks just after (7.10), the differential of A has bounded norm as well.This bound does not depend on , i.e., does not blow up as gets small. From now on, let us imagine that B 1 and A : R m → R n (7.13) have been chosen and fixed, as above.(Note that B 1 and A depend on , though.)Our next task is to show that A is nondegenerate in a suitable way, when is small enough, using the (m, t)-regularity of f . Let L : R m → R n denote the linear part of A. That is, A(x) = a + L(x) for all x ∈ R m , where a is a (single) element of R n (namely, A(0)).Thus L has bounded norm.(7.14)This is the same as the statement above that the differential of A has bounded norm.As before, this bound does not depend on .This will be important later, in that how small should be taken to be will depend on the constant in this bound. The following is a standard fact from linear algebra. Lemma 7.15. There is an orthonormal basis This is a kind of substitute for diagonalization of L. Note that some of the L(v i )'s may be 0 (which is necessarily the case when n < m). To prove Lemma 7.15, consider the mapping L t •L : R m → R m , where L t denotes the transpose of L. 
This mapping is symmetric, and hence admits a diagonalization by an orthonormal basis {v i } m i=1 of R m .From this it follows that the vectors L(v i ) are orthogonal in R n , because the inner product between L(v i ) and L(v j ) in R n is the same as the inner product between L t • L(v i ) and v j in R m , by definition of the transpose.This proves Lemma 7.15.Lemma 7.16.Let L be as above, and let {v i } m i=1 be a basis for R m , with the properties described in Lemma 7. 15.If is small enough, depending only on m and the (m, t)-regularity constants for f , then there are positive integers i 1 , i 2 , . . ., i t with Here C 0 is a positive constant that depends only on m and the (m, t)-regularity constants for f (and not on in particular). In other words, |L(v i )| is reasonably large for at least t choices of i. (It may be reasonably large for more than t choices of i.)In particular, the rank of L is at least t.This is analogous to (6.6). A version of this lemma comes up in [21], with t = m (so that one is getting lower bounds for all of L, rather than just in some directions).For this one can use simpler hypotheses, about the size of f (B 1 ).For instance, if t = n as well, then it is enough to have a bound from below for the Lebesgue measure of f (B 1 ), compared to the measure of B 1 . To prove the lemma, we start with the (m, t)-regularity of f , and try to convert it into a property for L. Let β be a ball in R n .If radius(β) ≤ radius(B 1 ), then we may apply the (m, t)-regularity of f (Definition 2.1) to obtain that From (7.12) we get that In other words, if y ∈ B 1 satisfies A(y) ∈ β, then f (y) ∈ β, because of (7.12) and the definition of β.Combining (7.20) with (7.18) gives (7.21)A −1 ( β) ∩ B 1 can be covered by ≤ C (radius(B 1 )/ radius(β)) m−t balls in R m with the same radius as β. The preceding observations apply to all balls β in R n such that • radius(B 1 ) < radius(β) ≤ radius(B 1 ).(7.22)In particular, there is no restriction on the center of β.As a result, the analogue of (7.21) with A replaced by L holds, since A and L differ only by a translation.Similarly, one can use translations in the domain and range to replace B 1 in (7.21) (and its analogue for L instead of A) with a ball which is centered at the origin, and has the same radius as before.This uses the fact that A is affine (and L is linear), and it would not work for general mappings like f .One can also make a rescaling in the domain and range, so that radius(B 1 ) is replaced by 1 in all occurrences, and for the same reasons of linearity.To summarize, we can convert (7.21) to At this point, we can take β to be centered at the origin.As in Section 3, one might prefer to formulate (7.23) in terms of volume.This leads to Volume(L −1 (B(0, ρ − )) ∩ B(0, 1)) ≤ C ρ t for all ρ ∈ ( , 1], (7.25)where C is a suitable constant (depending on the regularity constant C and the dimension m).(We are mistreating our notation somewhat here, in that B(0, ρ − ) is supposed to be a ball in R n , while B(0, 1) is a ball in R m .) We want to go from (7.25) to conclusions like the ones in Lemma 7.16.Using the orthonormal basis {v i } m i=1 for R m from Lemma 7.15, we can write L −1 (B(0, ρ − )) down explicitly, as an ellipsoid.Namely, We can write B(0, 1) as If we define an ellipsoid E(ρ) by Volume(E(ρ)) ≤ C ρ t for all ρ ∈ ( , 1], (7.30) by (7.25). 
On the other hand, standard considerations give Here c m is a constant which depends only on m (which is in fact the volume of the unit ball in R m ).If |L(v i )| = 0 for some i, then we interpret (ρ − )/|L(v i )| as being +∞, so that the minimum with 1 in (7.31) is 1. Combining (7.30) and (7.31), we have that Given ρ and , let us write ν(ρ− ) for the number of integers i, ρ t for all ρ ∈ ( , 1], (7.34)where C 1 is a bound for the norm of L as a linear mapping (as in (7.14)). Let us restrict ourselves now to ρ's such that ρ ≥ 2 .In particular, ρ − is then greater than or equal to ρ/2.Using this, and combining the constants in (7.34), we obtain that where C 2 can be chosen to depend on m and the regularity constants for f (which includes the Lipschitz constant for f ), but nothing else.In particular, we can choose C 2 so that it does not depend on ρ or . On the other hand, we are allowed to choose to be as small as we like.For the purposes of Lemma 7.16, we ask that 2 < C −1 2 .(7.37)This is the only condition that we need to impose on , i.e., in connection with the hypothesis in Lemma 7.16 that be small enough.(There is no problem with consistency here, since C 2 does not depend on .) If we choose ρ in the range [2 , C −1 2 ), then (7.36) does not hold, and so the hypotheses of (7.36) should not hold.This yields ν(ρ − ) ≥ t. (7.38) Going back to the definition of ν(ρ − ) (in the sentence containing (7.33)), we get that (7.39) ρ/2 ≤ ρ − ≤ |L(v i )| for at least t choices of i, If is small enough so that (7.37) holds, then the interval [2 , C −1 2 ) is nonempty.By choosing ρ to be less that C −1 2 , but close to it, one obtains the conclusions of Lemma 7.16 (when (7.37) holds).(In the end, one can take the constant C 0 in (7.17) to be 2C 2 , for instance.)This completes the proof of Lemma 7.16. Remark 7.40.It is only here, in the proof of Lemma 7.16, that the assumption of (m, t)-regularity of the mapping f : R m → R n is invoked, beyond the requirement that f be Lipschitz, for the proof of Theorem 6.1.Of course, the conclusions of Lemma 7.16 will be needed in the rest of the argument, and so the (m, t)-regularity hypothesis will be implicitly employed there as well, but it will not be called up separately again. The (m, t)-regularity assumption was used here to get (7.18)(and hence (7.21), and so on).In the end, for a particular choice of initial ball B (from the statement of Theorem 6.1, and a few lines before (7.11)), we really only need to apply this condition on a bounded range of locations and scales (compared to the radius of B).One can see this from the proof, and the way that ρ was chosen finally.Notice, however, that this range of possible scales and locations includes the range from which B 1 is chosen, as in (7.11) and (7.12), after B is fixed.In particular, it depends on the constant k from Corollary 7.7, and hence on the (m, t)-regularity constants for f .This should be compared with the "equidimensional" case, where m = t, as in Section 4, and with the discussion in Section 5.In other words, although the equidimensional case is generally more "stable", with substantially less than regularity needed for many assertions, one does have some limits on the way that the (m, t)-regularity condition is needed when m > t. From now on, let us assume that is small enough for Lemma 7.16 to be applied.Let us also assume that i 1 , i 2 , . . ., i t are as in the conclusions of Lemma 7.16. 
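Lemmas 7.15 and 7.16 can be seen very concretely in coordinates: the basis {v_i} is a right singular basis for L, and the numbers |L(v_i)| are its singular values. The following numpy sketch (an illustration added here, not part of the argument) computes such a basis for a sample matrix, checks that the vectors L(v_i) are pairwise orthogonal, and counts how many of the |L(v_i)| exceed a given threshold, which is the role played by ν in the proof above.

```python
# Lemma 7.15 in coordinates: diagonalizing L^T L gives an orthonormal basis
# {v_i} of R^m with the images L(v_i) pairwise orthogonal in R^n, and the
# quantities |L(v_i)| are the singular values of L.
import numpy as np

rng = np.random.default_rng(1)
n, m, t = 4, 3, 2
L = rng.normal(size=(n, m))            # a sample linear map R^m -> R^n

_, V = np.linalg.eigh(L.T @ L)          # columns of V: the orthonormal basis v_i
images = L @ V                          # columns are L(v_i)

gram = images.T @ images                # should be diagonal: the L(v_i) are orthogonal
print(np.round(gram, 6))

norms = np.linalg.norm(images, axis=0)  # the numbers |L(v_i)|
threshold = 0.1
nu = int(np.sum(norms >= threshold))    # how many directions L does not crush
print("nu =", nu, "   nu >= t:", nu >= t)
```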
Let us write Q_1 = span{v_{i_1}, v_{i_2}, . . ., v_{i_t}} and Q_2 for the orthogonal complement of Q_1 in R^m, which is the same as taking the span of the v_j's, where j ranges among the integers between 1 and m (inclusively) which are not one of the i_ℓ's, ℓ = 1, 2, . . ., t. In particular, Q_1 has dimension t, and Q_2 has dimension m − t.

Let P denote the plane in R^n which is spanned by L(v_{i_1}), L(v_{i_2}), . . ., L(v_{i_t}). These vectors are all orthogonal, as in Lemma 7.15, and they are all nonzero, by Lemma 7.16. Thus

the dimension of P is equal to t. (7.41)

In fact,

the restriction of L to Q_1 is an invertible mapping onto P, with the norm of the inverse bounded by C_0. (7.42)

Here C_0 is as in Lemma 7.16. The bound in (7.42) follows from the one in Lemma 7.16, together with the orthogonality of the v_i's and L(v_i)'s.

We are now ready to choose a linear mapping λ : R^m → R^{m−t}, for the purposes of Theorem 6.1. Specifically, λ should be a linear mapping which satisfies the following conditions: the kernel of λ is equal to Q_1, and the restriction of λ to Q_2 is an isometry onto R^{m−t}. In other words, λ is the composition of an orthogonal projection of R^m onto Q_2, and an isometry from Q_2 onto R^{m−t}.

Let π denote the orthogonal projection from R^n onto P.

Lemma 7.44. Let C_0 be as in Lemma 7.16, and let λ, π, etc., be as above. If ε ≤ min(1, C_0^{-1})/2, then the combined mapping (π • f, λ), which maps R^m into P × R^{m−t}, has the following property: the image of the ball B_1 in R^m under (π • f, λ) contains a ball with radius equal to (min(1, C_0^{-1})/2) • radius(B_1) in P × R^{m−t}. (Remember that B_1 is as in (7.11), (7.12).)

Recall that ε is also assumed to be small enough for the purposes of Lemma 7.16. Note that the constant C_0 from Lemma 7.16 does not depend on ε, so that there is no consistency problem with asking that ε be smaller than min(1, C_0^{-1})/2.

To prove Lemma 7.44, consider first the corresponding question where f is replaced with our linear mapping L : R^m → R^n. In this case, the combined mapping (π • L, λ) : R^m → P × R^{m−t} is also linear. In fact this mapping is invertible, and the norm of its inverse is ≤ max(1, C_0), because of (7.42) and the way that λ was chosen. In particular, the image of B_1 under the affine mapping (π • A, λ) contains the ball in P × R^{m−t} with radius min(1, C_0^{-1}) • radius(B_1) centered at the image of the center of B_1, and, since (π • A, λ) is a homeomorphism onto P × R^{m−t}, its degree is equal to 1 or to −1 at every point in the image of the interior of B_1.

To be more precise, the degree of (π • A, λ) as a mapping from B_1 into P × R^{m−t} is defined for all points y in P × R^{m−t} which do not lie in the image of ∂B_1 under (π • A, λ) (at least if orientations have been selected for the domain and range spaces). The degree is automatically 0 on the complement of the closure of the image of B_1, but this will not matter too much here.

Similarly, if we think of (π • f, λ) as a mapping from B_1 into P × R^{m−t}, then the degree of (π • f, λ) is defined for points y in P × R^{m−t} that do not lie in the image of ∂B_1 under (π • f, λ). Note that the image of ∂B_1 under (π • f, λ) is a compact, and hence closed, set. The degree is locally constant on the complement of the image of ∂B_1 under (π • f, λ), and is thus constant on the components of this set. The degree is zero at points which are not in the image of (π • f, λ) (on B_1). These are general properties for degrees of (continuous) mappings.

In addition to (π • f, λ) and (π • A, λ), one can look at convex combinations of these two mappings (as mappings from B_1 into P × R^{m−t}). Another general property of the degree is that it does not change under continuous deformations of mappings, as long as the point y in the range at which the degree is being evaluated does not lie in the image of the boundary of the domain (B_1 in this case) under any of the mappings under consideration.
In our situation, (π • f, λ) and convex combinations of (π • f, λ) and (π •A, λ) all differ from (π •A, λ) on B 1 by at most •radius(B 1 ), because of (7.12).Using this, one gets that the degree of (π • f, λ) is the same as the degree of (π •A, λ) at any point y ∈ P ×R m−t such that the distance from y to the image of ∂B 1 under (π • A, λ) is greater than • radius B 1 . We have already mentioned that the image of B 1 under (π • A, λ) contains a ball in P × R m−t of radius min(1, C −1 0 ) • radius(B 1 ).Let us call this ball β.In particular, the image of ∂B 1 under (π • A, λ) lies in the complement of β, since (π • A, λ) is a homeomorphism (of R m onto P × R m−t ).The degree of (π • A, λ) is equal to 1 or to −1 at all points of β, as before. Let β denote the set of points in β which lie at distance > •radius(B 1 ) from the complement of β.Combining the statements in the previous paragraphs, we get that (π • f, λ) and (π • A, λ) have the same degree at every point y in β .We know that the degree of (π • A, λ) at these points is 1 or −1, and so the same is true for (π • f, λ). In particular, these points lie in the image of B 1 under (π • f, λ), since otherwise the degree would be 0. In short, β is contained in the image of B 1 under (π • f, λ).The conclusions of Lemma 7.44 follow easily from this, when ≤ min(1, C −1 0 )/2, because of the definitions of β and β .This completes the proof of Lemma 7.44. At this stage, it is easy to finish the proof of Theorem 6.1.Let us now fix > 0, once and for all, and small enough for the purposes of Lemmas 7.16 and 7.44.Although should be small enough for these lemmas, one should not take it to be too small, because some of the other estimates deteriorate as gets small.Thus one might as well take it to be as large as possible, subject to the conditions of Lemmas 7.16 and 7.44. The main point now is that we can apply results from [9], [21] (as mentioned in Section 4) to the mapping (π•f, λ): B 1 → P ×R m−t .Note that P × R m−t is essentially the same as R m , since P has dimension t (as in (7.41)). Specifically, in applying the results from [9], [21], we are using two pieces of information.The first is that (π • f, λ): B 1 → P × R m−t is Lipschitz, with bounded constant.The constant for (π • f, λ) is less than or equal to the sum of 1 and the Lipschitz constant for f , because π and λ are both Lipschitz with constant 1, by their definitions (as an orthogonal projection and an orthogonal projection composed with an isometry).The second piece of information is that the (m-dimensional) Lebesgue measure of the image of B 1 under (π • f, λ) is greater than or equal to a constant times the Lebesgue measure of B 1 .This follows from Lemma 7.44. Under these conditions, we obtain that there is a subset E of B 1 such that the measure of E is greater than or equal to a constant times the measure of B 1 , and such that the restriction of (π•f, λ) to E is bilipschitz.The constants in these two properties are controlled in terms of the constants mentioned in the previous paragraph, and the dimension m. This is exactly what we want for Theorem 6.1, except for two points.The first is that we should compare E with our original ball B (mentioned a few lines before (7.11)), rather than with B 1 .As in (7.11) and the line preceding it, B 1 is contained in B, and the radius of B 1 is greater than or equal to a constant times the radius of B. Thus E is contained in the original ball B, and the measure of E is bounded from below by a constant times the measure of B. 
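The stability of the degree under small perturbations, which is what drives Lemma 7.44 above, can also be checked numerically. The sketch below is an added illustration with made-up data (the containment is only tested on a finite grid, so it is a sanity check rather than a proof): an invertible affine map of R^2 is perturbed by at most ε times the radius of the ball, and points of a shrunken ball are still essentially covered by the image.

```python
# Sanity check for the perturbation argument: if g agrees with an invertible
# linear map A to within eps * radius on the ball B(0, radius), then g(B)
# essentially contains a shrunken ball around A(0) = 0.
import numpy as np

A = np.array([[2.0, 0.3], [0.1, 1.0]])             # invertible linear part
eps, radius = 0.05, 1.0

def g(x):                                           # A plus a wiggle of size <= eps * radius
    return x @ A.T + eps * radius * np.sin(3.0 * x)

u = np.linspace(-radius, radius, 201)               # dense sample of B(0, radius)
X, Y = np.meshgrid(u, u)
pts = np.stack([X.ravel(), Y.ravel()], axis=-1)
pts = pts[np.linalg.norm(pts, axis=1) <= radius]
image = g(pts)

sigma = np.linalg.svd(A, compute_uv=False).min()    # A(B) contains B(0, sigma * radius)
theta = np.linspace(0.0, 2.0 * np.pi, 50)
targets = 0.5 * sigma * radius * np.stack([np.cos(theta), np.sin(theta)], axis=-1)
gaps = np.min(np.linalg.norm(image[:, None, :] - targets[None, :, :], axis=-1), axis=0)
print("worst gap:", gaps.max())                     # small: the targets are (nearly) attained
```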
Note that the constant from (7.11) depends on ε, and hence this constant for the lower bound for the measure of E in terms of the measure of B does too. This is the place where taking ε to be too small would have an effect.

The second point is that we get a bilipschitz condition for (π • f, λ) on E, rather than for (f, λ). Of course f and λ are already Lipschitz, and so the only issue is with the lower bounds in the bilipschitz condition. For this the bilipschitzness of (π • f, λ) is a stronger property than for (f, λ), since π is Lipschitz. (We shall make use of this stronger information in a moment.) Thus one gets bilipschitzness for (f, λ) on E as well.

This completes the proof of Theorem 6.1.

Now let us prove Corollary 6.5. Let f : R^m → R^n be given, where f is (m, t)-regular, and assume that the image f(R^m) is Ahlfors-regular of dimension t. We would like to show that f(R^m) is uniformly rectifiable (with dimension t), with suitable bounds for the constants involved.

Fix a point z ∈ f(R^m) and a radius r > 0. Since z ∈ f(R^m), there is an element x of R^m such that f(x) = z. Also, let C be the Lipschitz constant for f. We have that

f(B(x, C^{-1} r)) ⊆ B(z, r), (7.45)

from these choices.

Let us now apply Theorem 6.1, with the ball B in R^m taken to be B(x, C^{-1} r), and with this mapping f. Thus we get a linear mapping λ : R^m → R^{m−t} and a set E ⊆ B such that the measure of E is greater than or equal to a constant times the measure of B, and (f, λ) is bilipschitz on E, with a bounded constant. As in the proof of Theorem 6.1, we can take λ to be a composition of an orthogonal projection and an isometry onto R^{m−t}.

For each u ∈ R^{m−t}, let E_u denote the set of points w in E with λ(w) = u. On each E_u, f is bilipschitz (with bounded constant), because of the bilipschitz condition for (f, λ) on E itself. On the other hand, there exist u's in R^{m−t} such that the t-dimensional measure of E_u is bounded from below by a constant times (C^{-1} r)^t. This follows easily from a Fubini theorem argument, since the measure of E is bounded from below by a constant times the measure of B, and since the relevant u's (for which E_u ≠ ∅) lie in λ(B), which is a ball in R^{m−t} with the same radius as B.

The bilipschitz property for f on E_u then gives the kind of "big bilipschitz piece" for f(R^m) in B(z, r) which is required in the definition of uniform rectifiability, as in Definition 4.4. This completes the proof of Corollary 6.5.

Remark 7.47. Under the assumptions of Corollary 6.5, one can get a slightly stronger conclusion for f(R^m), which is that it has "big pieces of Lipschitz graphs" (BPLG). This property is defined in practically the same manner as uniform rectifiability was, except that instead of asking that f(R^m) ∩ B(z, r) have a subset of substantial size which is bilipschitz equivalent to a subset of R^t, with uniform bounds, we want to have a substantial subset of f(R^m) ∩ B(z, r) which lies in the graph of a Lipschitz mapping (over some t-dimensional plane in R^n), again with uniform bounds.
The BPLG condition for f (R m ), under the hypotheses of Corollary 6.5, can be established through nearly the same argument as above, for the uniform rectifiability of f (R m ).The main difference is the following.Instead of the bilipschitzness of (f, λ) on the set E, as provided by the statement of Theorem 6.1 and used in the argument above, one employs the stronger feature of bilipschitzness for (π•f, λ) on E, where π is an orthogonal projection of R n onto an t-dimensional subspace.This was given in the proof of Theorem 6.1, and was indicated at the very end of the proof in particular.This leads to bilipschitzness for π • f on the slices E u , rather than just for f itself.Once one has this, it is not hard to get a big piece of a Lipschitz graph for f (R m ), in the given ball in the image, as before.This is analogous to the situation in [9], for ordinary regular mappings. Remark 7.48.The conditions "f : R m → R n is (m, t)-regular" and "f (R m ) is Ahlfors-regular of dimension t" each make sense for positive real numbers t, whether or not t is an integer.However, if one assumes both conditions at the same time, as in the context of Corollary 6.5, then that implies that t is an integer.Indeed, if f is (m, t)-regular, then it is Lipschitz in particular, and hence differentiable almost everywhere on R m .Let x be a point of differentiability of f .One can show that the rank of the differential df x of f at x should be less than or equal to t, under the condition that f (R m ) be Ahlfors-regular of dimension t.Similarly, the assumption that f be (m, t)-regular implies that the rank of df x should be at least t, as in (6.6).Thus t should be equal to the rank of df x , and this is automatically an integer. Comparisons with the co-Lipschitz property Proposition 8.1.Let m and t be positive integers, with m ≥ t.Suppose that f : R m → R t is an (m, t)-regular mapping.Then there is a constant C > 0 so that for every ball B in R m there is a ball B in R t such that and radius(B ) ≥ C −1 radius(B).(8.3)This constant C can be chosen to depend only on m, t, and the constants involved in the (m, t)-regularity condition for f .The conclusion of this proposition is somewhat close to the co-Lipschitz property in [6].The difference is that the ball B is not required to be centered at the image of the center of B under f .Let us refer to the condition in the conclusion of the proposition as the "non-centered co-Lipschitz property".Proposition 8.1 would not work in general if one did ask that the center of B be the image of the center of B under f .For example, consider the mapping f : R → R given by f (x) = |x| for all x.It is easy to see that this mapping is regular (or (m, t)-regular, with m = t = 1), as in Examples 1.5.This mapping satisfies the conclusions of Proposition 8.1, but this would not be true with the additional condition on the center of B .Specifically, the additional condition does not work when B is centered at the origin in R. One can make similar examples in other dimensions, covering all pairs of positive integers (m, t) with m ≥ t.Thus Proposition 8.1 is reasonably sharp, as a statement which gives part of co-Lipschitzness.In Section 9 we shall go in the opposite direction, and look at some consequences of conditions like the co-Lipschitz property. 
The proof of Proposition 8.1 is approximately contained in the proof of Theorem 6.1 in Section 7 already. In particular, Lemma 7.44 provides a conclusion which is close to the one being sought here. The main difference is that what was R^n is now R^t. This implies that what was the t-dimensional plane P in R^n before (as in (7.41) and the lines just before it) is now simply R^n = R^t itself. Similarly, what was the projection π : R^n → P (as defined just before Lemma 7.44) is now the identity mapping on R^t. With these changes, Lemma 7.44 gives almost exactly the result desired for Proposition 8.1. (One has just to be a little careful about the choice of ε, and the relationship of B_1 in Lemma 7.44 to the original ball B. These points are essentially the same as in Section 7, for the last part of the proof of Theorem 6.1 (after Lemma 7.44) in particular.)

For the present purposes, one could also simplify Lemma 7.44 and its proof. Instead of working with the mapping (π • f, λ), as before, one can use the restriction of f to the t-dimensional plane Q_1 (defined a couple of paragraphs before the statement of Lemma 7.44). One would then want to show that f(B_1 ∩ Q_1) contains a ball in R^t with radius greater than or equal to a constant times the radius of B_1. One can do this through much the same argument as before, using degree theory, applied to f as a mapping from B_1 ∩ Q_1 to R^t (and the affine approximation of f).

Noncollapsing mappings

Let us begin with an auxiliary definition. Given a set E ⊆ R^n and a nonnegative number t, define the t-dimensional Hausdorff content of E, H^t_con(E), as follows. For any sequence of sets {A_j}_j in R^n, consider the sum

Σ_j (diam A_j)^t. (9.1)

To get H^t_con(E), one takes the infimum of this sum over all sequences {A_j}_j of sets which cover E.

If H^t(E) denotes the ordinary t-dimensional Hausdorff measure of E, then

H^t_con(E) ≤ H^t(E) (9.2)

automatically. This is because H^t(E) is defined in terms of the same kind of sums (9.1), but with more restrictions on the coverings {A_j}_j. Namely, the A_j's would be required to have diameter less than a positive number δ, where one takes the limit as δ → 0 at the end, after first taking the infimum of the sums (9.1) over these coverings. However, it is true that

H^t_con(E) = 0 if and only if H^t(E) = 0. (9.3)

This is not hard to verify. (If H^t_con(E) = 0, then the coverings of E that one gets consist of sets with small diameter anyway.)

If E is contained in a set which is Ahlfors-regular of dimension t, then one does have bounds of the form

H^t(E) ≤ C H^t_con(E). (9.4)

This is easy to check, just using the definitions. It works as well if one only has the upper bound in (3.2) (with s replaced with t).
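Here is a small numerical sketch of these definitions (an illustration added here, with grid cells of a fixed mesh size playing the role of the covering sets {A_j} in (9.1)). For a unit segment in the plane the resulting upper bound for the 1-dimensional content is comparable to the length at every mesh size, consistent with (9.4) for this Ahlfors-regular set; for t = 1/2 finer meshes only make the sum larger, so the infimum defining the content is approached by coarse coverings, which is exactly the freedom that Hausdorff measure removes.

```python
# Upper bounds for the t-dimensional Hausdorff content of a planar set,
# computed from coverings by grid cells of a chosen mesh size.
import numpy as np

def content_upper_bound(points, t, mesh):
    """Sum of diam(A_j)^t over the grid cells of side `mesh` meeting the set."""
    cells = {tuple(np.floor(p / mesh).astype(int)) for p in points}
    diam = mesh * np.sqrt(2.0)
    return len(cells) * diam ** t

segment = np.stack([np.linspace(0.0, 1.0, 10_000), np.zeros(10_000)], axis=-1)
for mesh in [0.1, 0.01, 0.001]:
    print(mesh,
          round(content_upper_bound(segment, 1.0, mesh), 3),   # stays ~ 1.4 (comparable to length 1)
          round(content_upper_bound(segment, 0.5, mesh), 3))   # grows as the mesh is refined
```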
Hausdorff content behaves in essentially the same manner under Lipschitz and bilipschitz mappings as Hausdorff measure, i.e., it does not increase by more than a constant factor for Lipschitz mappings (depending on the dimension t and the Lipschitz constant), and it does not decrease by more than a constant factor either for bilipschitz mappings. This follows easily from the definition.

Definition 9.5. Let m and n be positive integers, and let t be a positive real number with t ≤ m. Suppose that f : R^m → R^n is a mapping which is Lipschitz. We say that f is (m, t)-noncollapsing if there is a constant C so that

H^t_con(f(B)) ≥ C^{-1} (radius(B))^t (9.6)

for every ball B in R^m.

There are obviously some natural generalizations and variants of this. In particular, one might consider the condition in which t-dimensional Hausdorff measure is used in (9.6) instead of Hausdorff content. If the image set f(R^m) is contained in a set which is Ahlfors-regular of dimension t (or satisfies the corresponding upper bounds from (3.2)), then the two conditions are equivalent, i.e., using Hausdorff content or Hausdorff measure.

If f : R^m → R^n is (m, t)-regular, then it is (m, t)-noncollapsing. This is not too hard to verify, directly from the definitions. That is, if the t-dimensional Hausdorff content of the image of a ball B were ever too small, then it would mean that there is a covering of f(B) for which a sum like (9.1) would be too small. This covering could then be converted into one for the ball B itself, using the (m, t)-regularity property, in a way that would give a contradiction. Specifically, it would lead to the m-dimensional Lebesgue measure of B being too small compared to (radius(B))^m. (Similar observations are described in more detail in [31].)

If t is an integer, and f : R^m → R^t is Lipschitz and co-Lipschitz (in the sense of [6]), then f is (m, t)-noncollapsing as well. Indeed, in this case f(B) will contain a ball in R^t with radius ≥ C^{-1} radius(B) for some constant C (that does not depend on B), so that the t-dimensional Lebesgue measure will be bounded from below by a constant times (radius(B))^t. The same will be true of the t-dimensional Hausdorff content, as in (9.4).

This argument also works if f is Lipschitz and satisfies the noncentered co-Lipschitz property from Section 8. In other words, one does not need to know anything about the center of the ball in R^t which is contained in f(B), but only the lower bound for the radius.

Conversely, if f : R^m → R^n is Lipschitz and (m, t)-noncollapsing, then one has the same kinds of conclusions as for (m, t)-regular mappings in Sections 6, 7, and 8. We can state the main part of this as follows.

Proposition 9.7. Suppose that f : R^m → R^n is Lipschitz and (m, t)-noncollapsing. Then the conclusions of Theorem 6.1 hold for f, as do those of Corollary 6.5 (when f(R^m) is Ahlfors-regular of dimension t) and of Proposition 8.1 (when t is an integer and n = t), with constants that depend only on m, n, t, and the Lipschitz and noncollapsing constants for f.

Corollary 9.8. A Lipschitz mapping f : R^m → R^t, where t is a positive integer with t ≤ m, is (m, t)-noncollapsing if and only if it satisfies the non-centered co-Lipschitz property.

The "if" part of the corollary was mentioned before the statement of Proposition 9.7, and the "only if" part follows from the extension of Proposition 8.1 to (m, t)-noncollapsing mappings indicated in Proposition 9.7.

Let us look at the reasons why Theorem 6.1, Corollary 6.5, and Proposition 8.1 extend to the case of Lipschitz mappings that are (m, t)-noncollapsing, as well as the other observations from Sections 6, 7, and 8. The main point is that affine approximations of (m, t)-noncollapsing mappings satisfy the same kind of nondegeneracy properties as for (m, t)-regular mappings. For this we shall use the following lemma.

Lemma 9.9. Let E be a subset of R^n. Assume that

diam E ≤ k R (9.10)

for some positive real numbers k and R, and that there is a plane P in R^n of dimension d such that

every element of E lies at distance ≤ τ R from P. (9.11)

Here τ is another positive real number, which we assume to be less than 1.
(Normally τ will be small, k will be bounded, and R will reflect an arbitrary choice of scale.) If t is a positive real number with t ≥ d, then

H^t_con(E) ≤ C τ^{t−d} R^t. (9.12)

Here C is a constant which may depend on k, d, and t, but does not depend on R or τ.

To see this, one first observes that

E can be covered by ≤ C_1 τ^{−d} balls of radius 2 τ R, (9.13)

where C_1 is a constant which depends only on k and d. This covering can be obtained as follows. Let π denote the orthogonal projection of R^n onto P. Thus π(E) is a subset of P with diameter which is less than or equal to the diameter of E. Because diam E ≤ k R and P has dimension d, one can cover π(E) with ≤ C_1 τ^{−d} balls in P of radius τ R, where C_1 depends only on k and d. To get the covering indicated in (9.13), one takes the balls in R^n with the same centers as these balls in P, but with radii 2 τ R instead of τ R. This gives a covering of E itself, because of the assumption (9.11).

Once one has a covering as in (9.13), the estimate (9.12) follows immediately from the definition of the Hausdorff content. This proves Lemma 9.9.

Notice that the same argument would work when t < d. However, in this case one can simply use the diameter bound (9.10) to get that H^t_con(E) ≤ (k R)^t. One does not need the power of τ in (9.12), which would now be negative, and not helping the estimate.

Now let us return to the discussion of mappings between Euclidean spaces. Suppose that f : R^m → R^n is Lipschitz and (m, t)-noncollapsing. Let x ∈ R^m be a point at which f is differentiable (which includes almost all points in R^m, since f is Lipschitz). Then the rank of the differential of f at x is greater than or equal to t. To see this, suppose to the contrary that the rank of the differential at some point x is strictly less than t. Let B be a small ball which is centered at x. We would like to apply Lemma 9.9, with E = f(B) and R = radius(B). In this case (9.10) holds automatically, with k equal to twice the Lipschitz constant for f.

Let P be the plane in R^n which is the image of R^m under the affine mapping α defined by

α(y) = f(x) + df_x(y − x). (9.14)

In other words, α is the affine mapping which approximates f well near x, in the sense that

|f(y) − α(y)| = o(|y − x|) as y → x. (9.15)

If d is the dimension of P, then d is the rank of the differential of f at x, which we are assuming is less than t. The estimate (9.12) then applies, once the radius of B is small enough (so that (9.11) holds with this τ, because of (9.15)), to say that

H^t_con(f(B)) ≤ C τ^{t−d} (radius(B))^t, (9.16)

where C depends only on d, t, and the Lipschitz constant for f. This inequality contradicts the (m, t)-noncollapsing condition for f, when τ is small enough.

This shows that the rank of the differential of f is always at least t. More generally, the following is true. Let B be a ball in R^m, and suppose that f is well-approximated by an affine mapping A : R^m → R^n in a ball B. This means that

|f(y) − A(y)| ≤ ε radius(B) for all y ∈ B (9.17)

(as in (7.12)), where ε is a small positive number. Let L denote the linear part of A, as in Section 7, so that A(x) = a + L(x) for some a ∈ R^n and all x ∈ R^m. If ε is small enough, depending only on t and the Lipschitz and (m, t)-noncollapsing constants for f, then L satisfies the same kind of t-dimensional nondegeneracy conditions (with bounds) as in Lemma 7.16. This can be shown with the same kinds of arguments as above, using also the orthonormal basis {v_i}_{i=1}^m for R^m provided by Lemma 7.15.

To be more precise, suppose that one does not have a good lower bound for |L(v_i)| for at least t choices of i, as in (7.17) in Lemma 7.
16.This means that there is an integer d < t such that |L(v i )| is small for all but d choices of i.Let Q 0 denote the span of d choices of v i in R m which cover all of the v j 's for which |L(v j )| is not too small, and let P 0 denote the image of Q 0 under the affine mapping A. Then each point in f (B) lies within • radius(B) of a point in A(B), because of (9.17), and each point in A(B) lies close to P 0 (compared to radius(B)), because of our assumption about the small values of |L(v k )| (when v k is not among the d choices of v i of which Q 0 is composed).In other words, the small values of |L(v k )| lead to an estimate like (9.11), with P = P 0 and E = f (B) again.If and the values of |L(v k )| for v k not in Q 0 are small enough, then Lemma 9.9 gives an upper bound for the Hausdorff content for f (B) which would be too small for the (m, t)-noncollapsing condition, in much the same manner as before. In this way one can get the same kind of t-dimensional nondegeneracy conclusions as in Lemma 7.16.One can also use an argument more like the one employed for Lemma 7.16 before, in which the (m, t)-noncollapsing condition for f on B is converted into a similar condition for the affine approximation A, and then its linear part L, and then one looks at the ellipsoid which is the image of B under L. In the end, the two arguments are about the same anyway. Once one has the analogue of Lemma 7.16 in this setting, the rest is practically the same as before.That is, it was only for getting Lemma 7.16 that we really needed the (m, t)-regularity assumption in Section 7.This was mentioned in Remark 7.40. The same is true for the arguments in Section 8, which were closely based on the ones in Section 7. In Section 6 some remarks were made about more "classical" statements for the behavior of (m, t)-regular mappings, as in Lemma 6.8.For these all that was really needed was the fact that the differential has rank at least t at any point where it exists (as in (6.6)), and we have already looked at this in the present setting. In short, this is why the results and observations about (m, t)-regular mappings from Sections 6, 7, and 8 carry over to mappings that are Lipschitz and (m, t)-noncollapsing.This includes Theorem 6.1, Corollary 6.5, and Proposition 8.1, as in the statement of Proposition 9.7.In particular, these assertions apply to mappings from R m to R t which are Lipschitz and co-Lipschitz, in the sense of [6] (since co-Lipschitz implies (m, t)-noncollapsing in this case). Although mappings that are Lipschitz and (m, t)-noncollapsing have several properties in common with (m, t)-regular mappings in this way, it is not true that Lipschitz (m, t)-noncollapsing mappings are (m, t)-regular.A counterexample to this is given by an example in [12], namely, Example (j) on p. 
This occurs already for mappings on the real line. The basic construction uses "tent" mappings. Given a closed interval I = [a, b] in R, define a corresponding "tent" mapping t_I(x) by setting t_I(x) = min(x − a, b − x) for x ∈ I. In other words, this mapping vanishes at the endpoints a, b of I, it takes the value (b − a)/2 = |I|/2 at the midpoint of I, and it is linear on the two halves of I (between the endpoints and the midpoint). By combining a lot of "tent" mappings like this, on dyadic intervals [2^j, 2^{j+1}], [−2^{j+1}, −2^j], j ∈ Z, for instance, one can get mappings from the real line to itself which are Lipschitz and (1, 1)-noncollapsing, but not (1, 1)-regular. In particular, the mapping would take the value 0 at infinitely many points, while a (1, 1)-regular mapping should take any given value only finitely many times.

A variant of Theorem 6.1

In Theorem 6.1, one chooses a location and scale in R^m, as represented by a ball B, and then there is a linear mapping λ : R^m → R^{m−t} so that the combined mapping (f, λ): R^m → R^n × R^{m−t} is bilipschitz on a subset E of B which is of substantial proportion (in terms of Lebesgue measure). The choice of λ depends on B, even if the estimates do not.

Instead of this, one might try to find a single mapping g : R^m → R^{m−t} so that the combined mapping (f, g): R^m → R^n × R^{m−t} has some kind of good behavior uniformly over all scales and locations at once. For this one should not be able to take g to be linear, in general, but one would still look for something like a Lipschitz condition.

As a basic example, f : R^m → R^n might be obtained from a bilipschitz mapping by throwing away m − t coordinates. The idea would then be to try to recover those m − t coordinates, or some reasonable versions of them.

There are results of this nature that one can get, and which will be discussed in this section. For simplicity (and brevity), we shall only give an outline of some of the main points, rather than precise statements and arguments (which could take a while).

A reasonable framework for these issues is provided by the notions of "weakly Lipschitz" and "weakly bilipschitz" mappings from [12]. These notions can be described roughly as follows. First, let us reformulate the usual Lipschitz condition by saying that a mapping h : R^m → R^k is C-Lipschitz if

diam h(B) ≤ C diam B (10.1)

for all balls B in R^m. It is easy to see that this is equivalent to the usual Lipschitz condition. For a weakly Lipschitz mapping, one asks that (10.1) hold for "most" balls B in R^m, where "most" means that the exceptional balls B(x, r) are parameterized by a set of pairs (x, r) in R^m × (0, ∞) which is a Carleson set (as defined in the statement of Theorem 7.4). For the exceptional balls, no condition at all is placed on the mapping. The idea is that the fact that the set of exceptional balls is small, in the sense of the Carleson condition, makes up for this.

The lack of restriction on the exceptional balls has the effect of allowing arbitrary behavior of the mapping on reasonably "thin" subsets of R^m. For instance, if P is any plane in R^m (of dimension strictly less than m), then the set of all balls B(x, r) which intersect P corresponds to a Carleson set of pairs (x, r) in R^m × (0, ∞). This is not hard to check, and it implies that a weakly Lipschitz mapping can have arbitrary behavior along such a plane P.
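To make the combined construction concrete, here is one way of gluing the tent mappings on the dyadic intervals into a single map of the kind described above. The particular formula is an illustration consistent with the description, not necessarily the exact example of [12].

```latex
% One concrete glued mapping built from tent mappings on dyadic intervals.
\[
F(x) \;=\;
\begin{cases}
t_{[2^{j},\,2^{j+1}]}(x), & x \in [2^{j},\, 2^{j+1}],\ j \in \mathbb{Z},\\[2pt]
t_{[-2^{j+1},\,-2^{j}]}(x), & x \in [-2^{j+1},\, -2^{j}],\ j \in \mathbb{Z},\\[2pt]
0, & x = 0.
\end{cases}
\]
% F is 1-Lipschitz, and on any interval its image contains an interval of
% comparable length, which is the (1,1)-noncollapsing behavior mentioned above.
% However, F vanishes at every point +/- 2^j, so it takes the value 0 at
% infinitely many points and therefore cannot be (1,1)-regular.
```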
As a variant of this, suppose that P is a hyperplane in R^m, so that R^m \ P consists of two components. If a given mapping h is Lipschitz on each of these two complementary half-planes, then it is automatically weakly Lipschitz on all of R^m. This holds no matter how much the two Lipschitz mappings may behave differently along the common boundary. Note that h would be Lipschitz on all of R^m under these conditions if it were continuous along the common boundary P.

Now let us consider bilipschitz conditions. Let us say that a mapping h : R^m → R^k is "approximately bilipschitz" on a ball B if it satisfies a condition like (10.1), and also

|h(x) − h(y)| ≥ C^{−1} diam B whenever x, y ∈ B and |x − y| ≥ (diam B)/2. (10.2)

In other words, (10.2) provides the kind of lower bound for |h(x) − h(y)| that one has for bilipschitz mappings, but only for x, y in B which are not too close together compared to the size of B. (The precise choice of (diam B)/2 in (10.2) is not too important, though.) It is easy to check that h is bilipschitz in the usual sense exactly when it satisfies this kind of approximate bilipschitz condition for all balls B in R^m, and with a uniform choice for the constant C (in (10.1) and (10.2)). As before, we shall say that h is weakly bilipschitz if there is a constant C so that the same condition of approximate bilipschitzness holds on "most" balls B, where "most" means that the collection of exceptional balls B(z, r) should correspond to a set of pairs (z, r) in R^m × (0, ∞) which is a Carleson set. For these exceptional balls, one does not ask for any condition on h.

As for weakly Lipschitz mappings, weakly bilipschitz mappings can have arbitrary behavior along "thin" subsets of R^m, such as planes (of dimension strictly less than m). In particular, one can obtain weakly bilipschitz mappings by combining ordinary bilipschitz mappings on two half-spaces as before, without any conditions on how they might match up on the common boundary. Of course, one can have singular behavior for weakly Lipschitz or bilipschitz mappings which is more diffuse than that, but this construction illustrates some of the basic points.

This formulation of weak bilipschitzness is slightly different from the one in [12], but the difference is not significant. (E.g., one uses cubes in [12] instead of balls, and the definition is given in a way that accommodates more general spaces.)

Although weakly Lipschitz and bilipschitz mappings can have essentially arbitrary behavior on sufficiently thin sets in R^m, their average behavior on sets of positive measure is more like that of ordinary Lipschitz and bilipschitz mappings. Results of this nature are given in [12] (and have their genesis in arguments in [21]).

With weakly Lipschitz and bilipschitz mappings one has more flexibility for making constructions than for ordinary Lipschitz and bilipschitz mappings. In the present setting, one gets a more global version of Theorem 6.1, in which the conclusion is that there is a weakly Lipschitz mapping g : R^m → R^{m−t} so that the combined mapping (f, g): R^m → R^n × R^{m−t} is weakly bilipschitz. This applies to mappings f : R^m → R^n which are (m, t)-regular, as in Theorem 6.1, and more generally to mappings f which are Lipschitz and (m, t)-noncollapsing, as in Section 9. In the special case where m = t, one does not need an extra mapping g, and the conclusion is that f itself is weakly bilipschitz. This case is discussed in [12] (and is closely related to [9], [21]). (See Examples (g) and (i) on p. 869 of [12].)
In general, when t < m, how might one produce such a complementary mapping g? The basic idea is to use mappings λ as in Theorem 6.1 as the initial ingredients, and then to combine these mappings at different scales and locations to get g. One has to do these things with some care, and in particular one should not try to combine too many of these initial mappings. Otherwise, the estimates will not work properly, like the Carleson conditions in the weak Lipschitz and bilipschitz properties.

One could try to do this by iterating the kind of information that one gets in Theorem 6.1. Instead of this, let us indicate a more direct method, in which one uses arguments like those in the proof of Theorem 6.1, together with some extra information.

Part of Section 7 already fits nicely with the present discussion. Namely, Theorem 7.4 already provides for the existence of good affine approximations for our mapping f : R^m → R^n (which is Lipschitz by assumption) on all balls B in R^m, except for a collection of balls B(x, r) corresponding to a Carleson set in R^m × (0, ∞).

As in Lemma 7.16, we also have good bounds for the t-dimensional nondegeneracy of the linear parts of the affine approximations that come from Theorem 7.4, at least if the parameter ε in Theorem 7.4 is chosen small enough. How small ε needs to be depends only on the dimensions and (m, t)-regularity constants for f (or Lipschitz and (m, t)-noncollapsing constants, as in Section 9).

Suppose that B is a ball for which one has a good affine approximation for f like this. If ε is small enough, one can then get approximate bilipschitzness on B, in the sense of (10.1) and (10.2), by combining f with a linear mapping from R^m to R^{m−t} which complements the linear part of the affine approximation to f on B in a suitable way. We did something very similar to this in Section 7, with the choice of λ in (7.43), and in Lemma 7.44. In particular, in the first part of the proof of Lemma 7.44, we saw that the linear mapping (π ∘ L, λ): R^m → P × R^{m−t} is invertible, with a bound for the norm of its inverse. (Here L is the linear part of the affine approximation to f on B, π is a certain projection, and P is a t-dimensional plane in R^n.) If ε is small enough, depending only on suitable constants, then this leads to the desired approximate bilipschitz property of f on B, and with uniform bounds for the constants.

For the present purposes, the problem with this is that the choice of the linear mapping λ : R^m → R^{m−t} depends on B. However, there is some extra information that we can bring in, coming from Carleson's Corona construction. This has the effect of saying that the affine approximations to f can be chosen in such a way that their linear parts do not change too often, while still maintaining a good degree of approximation to f, as in Theorem 7.4. The precise statement for this is a little bit complicated, but it basically says that the set of pairs (x, r) in R^m × (0, ∞) around which the linear parts of the affine approximations have to change is a Carleson set.
Carleson's Corona construction is actually more directly concerned with the behavior of averages of bounded measurable functions. In the context of affine approximations for a Lipschitz mapping f, one would look at the Corona construction in connection with the differential of f, which gives a bounded measurable function. (The differential is a matrix-valued function, but that is okay.) An excellent reference concerning the Corona construction and related results is [18], especially Chapter VIII. Theorem 6.1 and Section 6 in general in Chapter VIII of [18] are particularly relevant and useful here. A version of this is also reviewed in Chapter 2 of Part IV of [13], especially Section 2.2 there. A translation to the setting of Lipschitz functions and affine approximations of them is also provided there.

At any rate, with this one is able to get some information about the dependence of the "complementary" linear mappings λ = λ_B : R^m → R^{m−t} on the ball B. In particular, one finds that these mappings do not have to be changed too often, as one varies the ball B (in terms of both its center and its radius). The occasions when the λ_B's have to be changed are controlled by a Carleson set of pairs (x, r) in R^m × (0, ∞). This is a crucial point for making more global constructions, with weakly Lipschitz and bilipschitz mappings.

We shall not go into details about the constructions involved, but let us mention a few of the main points. The first is that it is more convenient to work with dyadic cubes instead of balls. Imagine that Q_0 is some dyadic cube, and that λ_0 : R^m → R^{m−t} is a linear mapping associated to it, as above. In particular, let us imagine that the combined mapping (f, λ_0): R^m → R^n × R^{m−t} is approximately bilipschitz around Q_0, in the same sense as described before for balls, in (10.1) and (10.2). (Actually, one would normally ask for behavior like this on something like the double of Q_0, as in [12].)

The information that we have about the λ's not changing too often implies that this linear mapping λ_0 will normally work well not only for Q_0, but for many of its dyadic subcubes too. In general, λ_0 would not work for all of the dyadic subcubes of Q_0, however. As one goes "down" through the locations and scales, one would have to "stop" at various subcubes of Q_0. One would then want to start over again, for the purpose of constructing a global mapping g : R^m → R^{m−t} from the various λ's.

For some cubes there is trouble, simply because f does not have any sufficiently-good affine approximation (on the double of the cube, say) with which to work. As before, we know that this does not happen too often, with the exceptional locations and scales controlled by a Carleson condition. When one runs into cubes like these, one simply skips over them, without worrying about it too much, and goes on to cubes for which good affine approximations do exist.
Thus one gets to cubes for which there are sufficiently-good affine approximations (on the double of the cube), and hence for which there are corresponding complementary linear mappings λ : R^m → R^{m−t}. Imagine that we are working inside of the cube Q_0, from before, and we have now arrived at a dyadic cube Q_1 contained in Q_0, for which the linear mapping λ_0 : R^m → R^{m−t} that we have already does not work so well. That is, λ_0 does not provide a good complement to f in terms of having approximate bilipschitz properties around Q_1, even if it might do so at larger scales in Q_0. Assume however that a new linear mapping λ_1 : R^m → R^{m−t} does behave well in this way. We would like to combine λ_1 with λ_0 on Q_1, in such a manner as to keep the good properties of λ_0 at other scales and locations in Q_0 where λ_0 works fine, while bringing in the new mapping λ_1 for use inside of Q_1.

To be more precise, we shall focus (in a moment) on what happens for a single cube Q_1 inside Q_0 like this, but normally there will be many cubes in this situation. These "stopping places" inside Q_0 will also occur at many different scales in general. I.e., the earlier choice of λ_0 will stop working at variable scales and locations in Q_0. The cubes Q_1 that arise in this manner will have disjoint interiors, by construction. (Essentially one takes them to be as large as possible.)

It turns out that one does not have to worry too much about what happens across cubes like the Q_1's, inside Q_0. This is because the notions of weakly Lipschitz and weakly bilipschitz functions allow for sets of locations and scales on which one does not have information about the given mapping, at least if these sets satisfy Carleson conditions. We shall return to this later. At any rate, the basic point is that the problems that occur in going across the Q_1's can be included in exceptional sets of locations and scales like these. In particular, one does not have to try to smooth out these transitions, and this is a useful feature in working with weakly Lipschitz and bilipschitz mappings.

One does have to be careful about what happens inside the individual Q_1's, and what happens in the original cube Q_0 at locations and scales which lie above the ones described by the Q_1's. To see what happens for these issues, let us fix a single Q_1, as before, and just look at it. In the actual construction, one would deal with all of the Q_1's in the same way, in parallel.

Let c_{Q_1} denote the center of Q_1. On Q_1, let us imagine replacing our original mapping λ_0(x) with

λ̃_0(x) = λ_0(c_{Q_1}) + (λ_1(x) − λ_1(c_{Q_1})). (10.3)

From the point of view of scales larger than Q_1, this function still looks like λ_0(x). To make this precise, notice that

|λ̃_0(x) − λ_0(x)| ≤ C diam Q_1 for all x ∈ Q_1 (10.4)

for a suitable constant C (which in fact can normally be taken to be 1). This is because the norms of λ_0 and λ_1 as linear mappings are bounded (and can be taken to be bounded by 1), by the way that the λ's are chosen. Thus λ̃_0(x) and λ_0(x) are nearly the same on Q_1, up to errors which are comparable to diam Q_1 (which would be small at larger scales).
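Spelled out, the bound behind (10.4) is just the triangle inequality applied to the definition (10.3), under the stated assumption that λ_0 and λ_1 have norm at most 1 as linear mappings.

```latex
% For x in Q_1, using (10.3) and \|\lambda_0\|, \|\lambda_1\| \le 1:
\[
\big|\tilde{\lambda}_0(x) - \lambda_0(x)\big|
= \big|\,(\lambda_1(x) - \lambda_1(c_{Q_1})) - (\lambda_0(x) - \lambda_0(c_{Q_1}))\,\big|
\;\le\; 2\,|x - c_{Q_1}|
\;\le\; \operatorname{diam} Q_1 .
\]
% The modification made on Q_1 is therefore bounded by the diameter of Q_1,
% which is what allows the modifications from different generations of the
% construction to be summed as a geometric series.
```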
On the other hand, λ̃_0(x) is also close to λ_1(x) on Q_1, in the sense that

λ̃_0(x) − λ_1(x) is constant on Q_1. (10.7)

For the purposes of Lipschitz and bilipschitz conditions, λ̃_0(x) is essentially the same as λ_1(x), because of this. This is the basic method by which one puts mappings like the λ's on top of each other for this construction. Here one should modify λ_0 in this manner for all of the Q_1's in Q_0, in parallel, as mentioned earlier.

In each of the Q_1's, one repeats the process, with Q_1 in the role that Q_0 had before, and using λ̃_0(x), as above. For the internal properties of Q_1, this is practically the same as λ_1(x), because of (10.7).

One can organize this construction in such a way as to build a global mapping on R^m. As one goes through successive repetitions of the procedure, one makes many modifications along these lines. At each step, an individual modification is localized to some cube, and the size of the modification is bounded in terms of the diameter of this cube. This ensures that the modifications add up in reasonable ways as one goes through the generations of the construction, i.e., they are controlled by geometric series.

(Let us mention that one never runs up to infinity in scales, but instead one starts on some dyadic cubes in R^m and goes down the scales inside them. These starting cubes are chosen in such a way that every dyadic cube in R^m is contained in one of them, except for a collection of cubes that satisfies a Carleson condition. Basically, the exceptional cubes can be chosen to lie near a single point, like the origin. Because of the Carleson condition, one does not have to worry about the cubes in R^m that are skipped in this way.)

For Lipschitz and bilipschitz conditions, these modifications do cause trouble at some locations and scales. In particular, one can have trouble along the boundaries of cubes like the Q_1's above. More precisely, in replacing λ_0(x) with λ̃_0(x) as above, one has good properties away from the Q_1's, at scales larger than the Q_1's in Q_0, and inside the individual Q_1's, but one does not normally maintain good properties for measurements that cross the boundary of Q_1 (i.e., as in Lipschitz or bilipschitz conditions).

The main point is then to control the total collection of scales and locations for which this kind of trouble occurs, in terms of Carleson conditions. This uses the Carleson conditions that we have from the beginning, and discussed before, i.e., the Carleson conditions for how often the λ's need to be changed, and for the exceptional scales and locations at which sufficiently-good affine approximations of the original Lipschitz mapping f do not exist (so that one may not have suitable λ's to begin with). These conditions control how often the Q_0's and Q_1's come up, which is to say, the cubes at which one starts and stops, as well as some nearby cubes over which one might skip. The estimates for the total collection mentioned above also use some elementary observations about obtaining Carleson conditions for sets of locations and scales associated to other such sets which are already known to satisfy Carleson conditions.
For instance, if some set of locations and scales satisfies a Carleson condition, then so do collections of locations and scales which are not too far from the ones in the first set. Also, if {Q_i}_i is a family of dyadic cubes in R^m which satisfies a Carleson condition, then one also has Carleson conditions for sets of locations and scales that lie near the boundaries of the Q_i's. In other words, this includes locations and scales near the locations and scales of the Q_i's themselves, and also ones where the scales are much smaller than that, as long as the locations are close to the boundaries of the Q_i's. This observation uses the fact that the Q_i's have reasonably "small boundaries". In particular, the set of locations and scales near the boundary of a single dyadic cube satisfies a Carleson condition. This is easy to check.

This gives an outline of how the construction works. Note that this argument does not use topological degree theory, unlike the one before (in the proof of Lemma 7.44). One makes up for this by using affine approximations of f in stronger ways.
26,661.6
2000-07-01T00:00:00.000
[ "Mathematics" ]
Thin Reinforced Ion-Exchange Membranes Containing Fluorine Moiety for All-Vanadium Redox Flow Battery

In this work, we developed pore-filled ion-exchange membranes (PFIEMs) fabricated for the application to an all-vanadium redox flow battery (VRFB) by filling a hydrocarbon-based ionomer containing a fluorine moiety into the pores of a porous polyethylene (PE) substrate having excellent physical and chemical stabilities. The prepared PFIEMs were shown to possess superior tensile strength (i.e., 136.6 MPa for the anion-exchange membrane; 129.9 MPa for the cation-exchange membrane) and lower electrical resistance compared with commercial membranes by employing a thin porous PE substrate as a reinforcing material. In addition, by introducing a fluorine moiety into the filling ionomer along with the use of the porous PE substrate, the oxidation stability of the PFIEMs could be greatly improved, and the permeability of vanadium ions could also be significantly reduced. As a result of the evaluation of the charge–discharge performance in the VRFB, it was revealed that the higher the fluorine content in the PFIEMs was, the higher the current efficiency was. Moreover, the voltage efficiency of the PFIEMs was shown to be higher than that of the commercial membranes due to the lower electrical resistance. Consequently, both of the pore-filled anion- and cation-exchange membranes showed superior charge–discharge performances in the VRFB compared with those of hydrocarbon-based commercial membranes.

Introduction

As energy demand is rapidly increasing around the world and environmental pollution caused by the use of fossil fuels is emerging, renewable energy is attracting attention as the energy source of the future. However, renewable energy has a disadvantage in that the output fluctuates greatly depending on the climatic environment, and to compensate for this, an energy storage system (ESS) with high capacity is required [1,2]. ESS is a key component of a smart grid, and various types of secondary batteries that can be used for a long time and have high energy efficiency during operation are mainly used for large-capacity energy storage [1][2][3]. That is, lithium-ion batteries, lead-acid batteries, NaS batteries, and redox flow batteries (RFBs) are known to be efficient secondary batteries for ESS applications. Among them, RFBs possess higher availability and energy efficiency and lower capital cost requirements than other competing technologies. In addition, they are believed to have several advantages, such as low toxicity and long lifespan. In particular, the RFBs operate at room temperature, and the power and capacity of the battery can be designed independently of each other [3]. The RFB is a battery system in which an active electrode material dissolved in an electrolyte solution is oxidized and reduced to charge and discharge. In more detail, after dissolving the cathode and anode active materials in the electrolyte, they are respectively stored in an external tank and circulated through the stack using a pump when necessary, and electric energy is charged and discharged. Undesirable mixing of the electrolyte components can be prevented by independent storage of cathode and anode active materials, and a high level of stability can be obtained by using an aqueous electrolyte.
As an active material for the RFB, redox couples with various potentials, such as iron/chromium, iron/titanium, all vanadium, vanadium/bromine, polysulfide bromine, zinc/bromine, and zinc/cerium, can be selected [4][5][6]. In the case of having different redox ion species at the cathode and the anode, such as an iron/chromium system, significant cross contamination of electrolytes can occur due to the concentration gradient of each species in the two sides of the membrane [7]. Since the crossover of these active materials causes self-discharge and limits the RFB performance, a technique using the same species as redox couples for both the anode and cathode has been proposed [7,8]. A typical example is an all-vanadium flow battery (VRFB) using vanadium species as both the anode and the cathode redox materials. The VRFB has several advantages, such as excellent energy efficiency, long lifespan, and high cost-effectiveness. Figure 1 shows the structure and charge-discharge principle of a typical VRFB system.

Meanwhile, the membrane is one of the most important components that determine the charge-discharge performance and durability of all kinds of RFB systems [9]. Although it depends on the type of redox couples, RFBs mainly employ ion-exchange membranes (IEMs) that prevent the mixing of electrolytes between the anode and cathode compartments and act as an ion conductor [2,5]. The IEMs used in the RFB system require low electrical resistance, high selective permeability for specific ions, low diffusion coefficient for solvents, and excellent chemical and mechanical stabilities [10]. Considering the characteristics of the VRFB system operated under strongly acidic conditions, the IEMs should also have high acid and oxidation resistance and excellent selective permeability to hydrogen ions compared with vanadium cations [10,11].
From this point of view, Nafion, a perfluorinated cation-exchange membrane (CEM), has been widely utilized as a separation membrane in VRFB systems, but it is suffering from some drawbacks, such as high membrane cost and significant vanadium crossover. As an alternative, therefore, the use of anion-exchange membranes (AEMs) has recently attracted attention, and hydrocarbon-based IEMs are being actively developed to lower the expensive membrane cost [12,13]. However, in the case of hydrocarbon-based IEMs, despite their excellent electrochemical characteristics, they are difficult to apply to practical systems due to their poor chemical stability, so research on this is urgently needed [2,10,14,15]. In addition, a partially fluorinated IEM can be considered to improve the chemical stability and reduce the manufacturing cost of the IEMs [16][17][18][19][20].

The traditional manufacturing process (i.e., "a paste method") of commercial IEMs is known to be complicated and expensive. In this method, typically, the IEMs are fabricated by impregnating a paste mixed with monomers and rubber into a reinforcing fabric net, radical polymerization, and then introducing ion-exchange groups through a post-treatment, such as quaternization (for AEMs) or sulfonation (for CEMs) [21]. Meanwhile, a pore-filled ion-exchange membrane (PFIEM), in which an ionomer is filled into the pores of a thin porous polymer film, is shown to possess low mass transport resistance and strong mechanical strength, so it is being considered for application to various energy conversion technologies and water treatment processes [22][23][24]. The PFIEM, which is intermediate between a homogeneous membrane and a heterogeneous membrane, exhibits excellent electrochemical properties while lowering the manufacturing cost due to the use of inexpensive reinforcing material and a reduction in the amount of raw materials used. Figure 2 illustrates the fabrication principle of the PFIEM.

In this study, novel IEMs optimized for VRFB application were developed by combining an ionomer with a porous polyethylene (PE) substrate as a reinforcing material.
It was expected that the PFIEMs could possess low electrical resistance, excellent chemical and physical stabilities, and low production cost by employing a simple pore-filling method. Figure 3 shows the synthesis process of the anion- and cation-exchange polymers prepared. For the membrane fabrication, 4-vinylbenzyl chloride (VBC) or styrene (Sty), the main monomer; benzoyl peroxide (BPO), a thermal initiator; and divinylbenzene (DVB), a cross-linking agent, were filled in the pores of a porous PE substrate, and a base membrane was then prepared through in situ radical polymerization. The prepared base membrane was followed by quaternization or sulfonation post-treatment to produce a pore-filled anion-exchange membrane (PFAEM) and a pore-filled cation-exchange membrane (PFCEM), respectively. In addition, 1H,1H,5H-octafluoropentyl methacrylate (OFPMA) monomer was mixed with the monomer solution to prepare PFIEMs with a fluorine moiety. The OFPMA employed is a chemically robust fluorine monomer widely used in surface coatings to prevent oxidation, and is characterized by low T_g and low surface energy [25][26][27][28]. The introduction of fluorine groups was expected to effectively improve the chemical stability of the PFIEMs under the strongly acidic and oxidative conditions of the VRFB. In this work, in particular, the effect of the content of a fluorine moiety on membrane properties and VRFB performance was systematically investigated, and the characteristics of PFAEMs and PFCEMs were also compared.

Materials and Membrane Preparation

A porous PE film (Hipore, t = 25 μm) was supplied by Asahi Kasei (Japan) and used as a reinforcing material for preparing the PFIEMs. The specifications for the commercial porous PE substrate used in this work were found from the literature and are summarized in Table 1 [29].
As described previously, VBC and/or Sty were used as main monomers for introducing ion-exchange groups. BPO and DVB were employed as a thermal polymerization initiator and a cross-linking agent, respectively. For the quaternization and sulfonation of the base membranes, trimethylamine (TMA) and chlorosulfonic acid (CSA) were used, respectively, and 1,2-dichloroethane was employed as a solvent. As a monomer containing a fluorine moiety, OFPMA was purchased from TCI (Japan) and employed. All reagents except OFPMA were purchased from Sigma-Aldrich (USA) and were used as received without any purification. In addition, AMX and CMX (Astom Corp., Japan) were chosen as the commercial hydrocarbon-based AEM and CEM, respectively, for membrane property comparison.

Table 1. Specifications of the porous PE substrate used for this study [29].
Structure: single layer
Composition: polyethylene

For the membrane fabrication, a porous PE film was first impregnated with a monomer mixture. At this time, the monomer mixture was prepared with VBC and/or Sty and OFPMA at molar ratios of 1:0, 2:1, 3:1, and 4:1, respectively. In addition, 20 wt% of DVB as a cross-linking agent and 2 wt% of BPO as an initiator were added and fully mixed using a magnetic stirrer. The detailed composition of the monomer mixture is summarized in Table 2. After that, the monomer-filled substrate film was heated at 80 °C for 3 h in an oven for the thermal radical polymerization, producing a base membrane. For the fabrication of the PFAEM, the base membrane was then immersed in 1.0 M TMA aqueous solution, followed by a quaternization reaction at 60 °C for 5 h. Similarly, the PFCEM was prepared by reacting the base membrane in 10 wt% CSA (in 1,2-dichloroethane) solution at 50 °C for 5 h. The prepared PFIEMs were washed with distilled water and ethanol and stored in 0.5 M NaCl aqueous solution before use.

Membrane Characterizations

The membrane electrical resistance (MER) of IEMs is related to the internal resistance of the VRFB system and is a factor that dominantly determines the voltage efficiency (VE). To measure the MER, the membrane sample was first immersed in 0.5 M NaCl solution for at least 5 h to reach its equilibrium state. First, the blank resistance (R_2) of the 0.5 M NaCl solution was determined using a lab-made clip cell connected to a potentiostat/galvanostat with electrochemical impedance spectroscopy (SP-150, Bio-Logic Science Instruments, France). The membrane sample was then inserted into the clip cell and immersed in 0.5 M NaCl solution to measure the membrane + solution resistance (R_1). Finally, the membrane electrical resistance was calculated by substituting the measured R_1 and R_2 values into the following Equation (1) [30]:

MER = (R_1 − R_2) × A (1)

where A is the effective area of the Pt electrodes constituting the clip cell.
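As a small numerical illustration of Equation (1), the following sketch converts hypothetical clip-cell readings into an areal resistance; the readings and electrode area are invented, not values from this work.

```python
# Illustrative membrane electrical resistance (MER) calculation, Equation (1).
# All input values are hypothetical.

def membrane_resistance_ohm_cm2(r_with_membrane_ohm: float,
                                r_blank_ohm: float,
                                electrode_area_cm2: float) -> float:
    """MER = (R_1 - R_2) * A, in ohm*cm^2."""
    return (r_with_membrane_ohm - r_blank_ohm) * electrode_area_cm2

if __name__ == "__main__":
    R1 = 1.25   # ohm, clip cell with membrane in 0.5 M NaCl (hypothetical)
    R2 = 1.05   # ohm, blank clip cell in 0.5 M NaCl (hypothetical)
    A = 1.0     # cm^2, effective Pt electrode area (hypothetical)
    print(f"MER = {membrane_resistance_ohm_cm2(R1, R2, A):.2f} ohm*cm^2")
```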
Meanwhile, the ion conductivity (σ) of the IEMs was obtained from the following Equation (2) [31]:

σ = L / MER (2)

where L is the thickness of the membrane sample.

To measure the water uptake (WU), a membrane sample having a size of 2 × 2 cm² was immersed in distilled water to reach an equilibrium state. After removing the water present on the sample surface with filter paper, the wet weight (W_wet) was measured, and the sample was then dried in a dry oven at 80 °C for more than 6 h to measure the dry weight (W_dry). The WU values were determined using the following Equation (3) [31]:

WU (%) = [(W_wet − W_dry) / W_dry] × 100 (3)

The swelling ratio (SR) of the prepared membranes was determined with the following Equation (4) by measuring the volume of the dried membrane (V_dry) and the volume of the wet membrane (V_wet, swelled with the electrolyte solution used for the VRFB tests) [31]:

SR (%) = [(V_wet − V_dry) / V_dry] × 100 (4)

The ion-exchange capacity (IEC) of the prepared membranes was measured with a sample having a size of 2 × 2 cm². In the case of an AEM, the sample was immersed in 0.5 M NaCl solution for more than 6 h so that the ion-exchange groups were exchanged with Cl⁻, and then washed several times with distilled water, and the wet weight was measured. Then, it was immersed in 0.25 M Na₂SO₄ aqueous solution for 3 h or more so that the Cl⁻ ions of the ion-exchange groups were fully replaced with SO₄²⁻ ions. The amount of Cl⁻ present in the solution was determined by Mohr's method using a K₂CrO₄ indicator and 0.01 M AgNO₃ aqueous solution as titrant. In the case of a CEM, the sample was immersed in 0.5 M HCl solution for more than 6 h to reach an equilibrium state. After washing with distilled water, the sample was immersed in 0.5 M NaCl solution for 3 h or more so that Na⁺ was exchanged with the H⁺ present in the sample. The amount of H⁺ existing in the solution was then determined by a traditional acid-base titration using a phenolphthalein indicator and a 0.01 M NaOH titration solution. Finally, the IEC value of the sample was calculated by substituting the measured parameters into the following Equation (5) [31]:

IEC = (C × V_s) / W_dry (5)

where C is the normal concentration of ions measured through titration (meq./L), V_s is the solution volume (L), and W_dry is the dry membrane weight (g).

The transport numbers (t⁻ for the anion and t⁺ for the cation) of the IEMs were determined by measuring the membrane potential using a pair of Ag/AgCl electrodes in a two-compartment diffusion cell and calculated by Equations (6) and (7) [32], in which E_m is the measured membrane potential, R is the gas constant, T is the absolute temperature, F is the Faraday constant, and C_L and C_H are the NaCl concentrations of the compartments (1 and 5 mM, respectively).

To evaluate the oxidation stability of the prepared IEMs, a membrane sample (2 × 2 cm²) was immersed in an aqueous solution containing 3% H₂O₂ and 3 ppm Fe²⁺ (i.e., Fenton's reagent), and then the reaction proceeded at 80 °C for 8 h. During the oxidation test, the membranes could be decomposed by free radicals (·OH and ·OOH) formed by H₂O₂ in the presence of Fe²⁺ ions [33]. Therefore, each sample reacted for 0 (fresh), 4, and 8 h was washed several times with distilled water and dried in a dry oven at 80 °C for more than 6 h, and then the weight was measured to confirm the weight loss of the sample. Additionally, it was attempted to confirm the chemical stability of the IEMs in the VRFB system through a vanadium oxidation stability test.
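The conductivity, water uptake, and IEC definitions above amount to the following bookkeeping; the thickness, weights, and titration values are invented and only illustrate the unit handling.

```python
# Illustrative calculations for Equations (2), (3), and (5).
# All input values are hypothetical.

def ion_conductivity_S_cm(mer_ohm_cm2: float, thickness_cm: float) -> float:
    """sigma = L / MER."""
    return thickness_cm / mer_ohm_cm2

def water_uptake_percent(w_wet_g: float, w_dry_g: float) -> float:
    """WU (%) = (W_wet - W_dry) / W_dry * 100."""
    return (w_wet_g - w_dry_g) / w_dry_g * 100.0

def iec_meq_per_g(titrant_conc_mol_L: float, titrant_volume_mL: float,
                  w_dry_g: float) -> float:
    """IEC = C * V_s / W_dry for a monovalent titration (meq./g);
    mol/L * mL gives millimoles, i.e., milliequivalents here."""
    return titrant_conc_mol_L * titrant_volume_mL / w_dry_g

if __name__ == "__main__":
    print(ion_conductivity_S_cm(mer_ohm_cm2=0.20, thickness_cm=25e-4))  # ~0.0125 S/cm
    print(water_uptake_percent(w_wet_g=0.245, w_dry_g=0.210))           # ~16.7 %
    print(iec_meq_per_g(0.01, 35.0, 0.210))                             # ~1.67 meq./g
```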
This is based on the principle that VO₂⁺, which is a V(V) species, is reduced to VO²⁺, which is a V(IV) species, by an oxidation reaction of a film immersed in a VO₂⁺/H₂SO₄ solution. For the measurement of vanadium oxidation stability, a membrane sample (2 × 2 cm²) was immersed in 20.0 mL of 0.1 M V₂SO₅ (in 5 M H₂SO₄) solution, and the temperature was maintained at 40 °C. The concentration of VO²⁺ ions in the solution was determined by measuring the absorbance using UV-VIS spectroscopy (UV-2600, Shimadzu).

A permeability test was also carried out to confirm the vanadium crossover through the IEMs. A membrane sample having a size of 5 × 5 cm² was immersed in a 2 M H₂SO₄ solution for 2 h or more to reach an equilibrium state, and then inserted in a lab-made two-compartment diffusion cell. Amounts of 2 M VOSO₄/2 M H₂SO₄ (feed) and 2 M MgSO₄/2 M H₂SO₄ (permeate) solutions were filled in each compartment. The time-course change in the VO²⁺ ion concentration was then determined by measuring the absorbance using UV-VIS spectroscopy, and the permeability (overall dialysis coefficient, K_A) of the VO²⁺ ion through the IEM was calculated using Equation (8) [34], where C_A^I and C_A^II are the molar concentrations of component A (i.e., VO²⁺) in the feed (I) and permeate (II) compartments, respectively; C_A0^I is the initial molar concentration of component A in the feed compartment; A is the membrane effective area; V^I and V^II are the solution volumes in the feed (I) and permeate (II) compartments, respectively; k_v is the solution volume ratio of both compartments (= V^I/V^II); and t is time.

The mechanical strength of the commercial and prepared IEMs was evaluated according to international standards (ASTM method D-882-79) using a universal testing machine (5567 model, Instron).

VRFB Performance Tests

The evaluation of the charging-discharging performance of the VRFB was performed using a lab-made RFB unit cell. A 2.0 M V₂(SO₄)₃/2.0 M H₂SO₄ aqueous solution was used as the cathode electrolyte, and a 2.0 M VOSO₄/2.0 M H₂SO₄ aqueous solution was employed as the anode electrolyte. Carbon felt (GF20-3, Nippon Graphite) was used as the electrode, and the effective area of the electrode and membrane was 12.5 cm². Using an automatic battery cycler (WBCS 3000, Wonatech), the cell was charged to 1.9 V at a current density of 20 mA/cm² and then discharged to 0.9 V. Coulombic efficiency (CE), VE, and energy efficiency (EE) for the charging-discharging performance evaluation were calculated through the following Equations (9)-(11), respectively; a short numerical illustration of these definitions is given below.

CE (%) = (discharge capacity / charge capacity) × 100 (9)

VE (%) = (average discharge voltage / average charge voltage) × 100 (10)

EE (%) = CE × VE / 100 (11)
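The following sketch, with invented charge and discharge quantities, shows how the three efficiencies relate; it is only an illustration of Equations (9)-(11), not data from this work.

```python
# Coulombic, voltage, and energy efficiency from one charge-discharge cycle.
# Capacities and energies below are invented example values.

def coulombic_efficiency(q_discharge_mAh: float, q_charge_mAh: float) -> float:
    return q_discharge_mAh / q_charge_mAh * 100.0

def energy_efficiency(e_discharge_mWh: float, e_charge_mWh: float) -> float:
    return e_discharge_mWh / e_charge_mWh * 100.0

def voltage_efficiency(ce_percent: float, ee_percent: float) -> float:
    """VE follows from EE = CE * VE / 100."""
    return ee_percent / ce_percent * 100.0

if __name__ == "__main__":
    CE = coulombic_efficiency(q_discharge_mAh=480.0, q_charge_mAh=500.0)  # 96.0 %
    EE = energy_efficiency(e_discharge_mWh=620.0, e_charge_mWh=760.0)     # ~81.6 %
    VE = voltage_efficiency(CE, EE)                                       # ~85.0 %
    print(f"CE = {CE:.1f} %, VE = {VE:.1f} %, EE = {EE:.1f} %")
```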
Results and Discussion

Figure 4 shows photographs of the prepared PFIEMs. Unlike the opaque porous substrate film, the prepared PFIEMs are transparent, and from this, it can be indirectly confirmed that the pores of the substrate are completely filled with a polymer. In addition, in the case of the PFCEM, the color was changed to dark yellow after the post-treatment, and therefore, it can be expected that a sulfonation reaction occurred in the pore-filled polymer.

The content of ionomer filled in the porous substrate was measured and is summarized in Table 3. The content of the filled ionomer is similar to the porosity of the porous substrate shown in Table 1. In addition, field-emission scanning electron microscopy (FE-SEM, JSM-7500F, JEOL Ltd., Japan) analysis was performed to check the surface morphology of the prepared IEMs. As shown in Figure 5, it was confirmed that the pores of the porous substrate were completely filled with a polymer after the membrane preparation, and there were no open pores [35].

The FTIR spectra of the porous PE substrate and the prepared PFAEMs and PFCEMs are shown in Figure 6. In the FTIR spectra of the PFAEM, the successful introduction of quaternary ammonium groups was confirmed from the absorption peaks observed at 1372, 975, 890, and 812 cm⁻¹ [36][37][38][39]. Additionally, the presence of C=C bonds and aromatic rings was checked from the absorption peaks found at 1640 and 1390 cm⁻¹, respectively [40,41]. Meanwhile, CF₂ stretching vibration was observed at 1170 cm⁻¹ in the spectrum of PFAEM-1 [42]. In addition, C=O and C-O-C bonds were confirmed at 1740 and 1120 cm⁻¹, respectively, indicating the introduction of a fluorine moiety due to the copolymerization of the OFPMA monomer [43,44]. Meanwhile, in the spectra of the PFCEM, the absorption peaks assigned to sulfonic acid groups were found at 1127, 1037, 1008, and 678 cm⁻¹, elucidating the successful introduction of cation-exchange groups [38]. In the case of the PFCEM, the existence of the fluorine moiety could not be checked from the FTIR spectra due to the overlapping of the absorption bands. Overall, it can be demonstrated that the preparation of the PFAEM and PFCEM was successfully performed through the monomer pore filling, in situ radical polymerization, and post-treatment reaction.
The IEMs used in the RFB are required to have excellent mechanical strength to resist the pressure drop owing to high flow rates. Tensile strength and elongation at break are important parameters indicating the mechanical strength of IEMs [45]. The results of the tensile strength and stress measurements for the commercial membranes, the porous substrate film, and the prepared PFIEMs are summarized in Figure 7 and Table 4. It can be seen that the porous PE film used as the reinforcing material has an excellent tensile strength (125.1 MPa) and elongation at break (46.47%) despite a relatively thin film thickness compared with those of the commercial IEMs. In addition, the prepared PFIEMs revealed largely improved tensile strength and toughness compared with the porous substrate film. From the results, it was confirmed that the PFIEMs fabricated in this work had superior mechanical strength despite having a thickness of about 1/6 of the commercial membranes.

Various characteristics of the commercial IEMs and the prepared PFIEMs are summarized in Table 5. In this study, the PFIEMs were prepared according to the molar ratio of VBC, Sty, and OFPMA, and then the membrane properties were systematically evaluated. As the mole ratio of VBC or Sty is increased, the portion of OFPMA is relatively decreased, and therefore, the content of the fluorine part in the membrane is reduced. The amount of ion-exchange groups introduced into VBC or Sty increased while decreasing the fluorine part, resulting in an increase in the IEC. In addition, it can be seen that the WU and σ of the PFIEMs increased, and the electrical resistance decreased, due to the increase in the IEC. The high elongation of the prepared membranes was due to the intrinsic characteristics of the porous PE support used, which means that they could be stretched when a strong external force is continuously applied.
However, it was proven that excessive swelling of the membranes did not occur in actual use, based on the SR data shown in Table 5. Meanwhile, the σ of the PFIEMs showed a lower value compared with the commercial membranes because the non-ion-conducting area was greatly increased due to the use of an inert PE substrate, as in the case of a heterogeneous IEM. However, all the PFAEMs and PFCEMs fabricated in the considered composition range showed significantly lower electrical resistance compared with the commercial IEMs, which was mainly due to the relatively thin film thickness. As a result of measuring the surface contact angle, it can be observed that the hydrophobicity of the PFIEMs containing a fluorine moiety was somewhat higher than that of the commercial membranes, which is thought to be owing to the characteristic of the PE substrate with strong hydrophobicity (i.e., contact angle of the PE substrate = ca. 99.0 degrees) [46]. It can be seen that the prepared PFIEMs generally exhibited a low water content and high surface hydrophobicity compared with the commercial membranes, which was considered to be advantageous in reducing the crossover of vanadium ions and increasing the oxidation stability of the membrane.

Figure 8 exhibits the results of measuring the weight change of the IEMs during the Fenton oxidation test. It can be seen that PFAEM-1 and PFCEM-1, containing the most fluorine moieties, exhibited the best oxidation stability among the membranes tested. The data demonstrate that the oxidation stability was elevated as the content of the fluorine moiety increased. Therefore, it can be confirmed that the oxidation stability of the PFIEMs was improved due to the introduction of the fluorine moiety. However, PFCEM-2 and PFCEM-3 showed lower oxidation stability compared with PFCEM-0 without a fluorine moiety.
In the case of poly(styrenesulfonic acid), it is known that chain scission by the HO· radical is promoted at low pH conditions [47,48]. That is, it is believed that the high content of sulfonic acid groups (i.e., high acidity) promotes the oxidative degradation of a cation-exchange polymer.

The percentage reduction of the membrane IEC after the Fenton oxidation experiment was determined, and the results are shown in Figure 9. As a result, it was found that PFAEM-1 and PFCEM-1, which had the highest fluorine content among the membrane samples tested, showed the same tendency as the change in weight and had the lowest IEC reduction rate. This result demonstrates that the fluorine moiety introduced into the membranes can also improve the oxidative stability of the ion-exchange groups.

Meanwhile, IEMs applied to a VRFB operating under strongly acidic conditions may cause problems, such as a decrease in the degree of cross-linking and decomposition of functional groups, when used for a long period of time [49]. Therefore, the chemical stability of the prepared PFIEMs was also evaluated by measuring the rate of membrane decomposition in a vanadium electrolyte solution. When immersed in a VO₂⁺/H₂SO₄ solution, VO₂⁺ is reduced to VO²⁺ due to the oxidation reaction of the membrane. Thus, there is a proportional relationship between the generation rate of VO²⁺ ions and the decomposition rate of the IEMs [50]. Figure 10 shows the results of measuring the concentration of VO²⁺ generated by the oxidation reaction of each IEM. As a result, it was confirmed that the oxidation stability of the PFIEMs in the vanadium electrolyte was significantly higher than that of the commercial membranes. This is considered to be a result of the excellent stability of the porous PE film used as the reinforcing material.
In addition, as the ratio of the fluorine moieties increased, a lower VO^2+ ion production rate was exhibited. From this, it was also confirmed that the chemical stability of the IEMs for VRFB application could be improved by introducing fluorine moieties into the membrane. Vanadium ion permeability through a membrane is a parameter indicating the crossover characteristic of vanadium ions. The crossover of vanadium ions as the active electrode material can be regarded as a self-discharge process in a VRFB system. Vanadium ions pass through the IEM and chemically react with vanadium ions of different oxidation numbers, resulting in efficiency loss and capacity reduction [51]. Since the crossover behavior is mainly determined by the membrane characteristics, fabricating IEMs with a reduced crossover rate is one of the important issues in the RFB technology field. An ideal IEM for VRFB should have low vanadium ion permeability to reduce self-discharge and achieve high current efficiency [52]. The vanadium ion permeability values of the commercial membranes and the prepared PFIEMs are listed in Table 5. It was confirmed that the permeability of vanadium ions increased as the ratio of VBC and Sty in the prepared PFIEMs increased. That is, the permeability of the vanadium active material is elevated because the water uptake and the free volume increase with increasing IEC in the membranes. However, despite the thin film thickness compared with the commercial membranes, the vanadium ion permeability of the PFIEMs was shown to be relatively low due to the low water uptake (wettability) and low surface hydrophilicity.
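For context, vanadium ion permeability of this kind is commonly extracted from a two-compartment diffusion-cell experiment under a pseudo-steady-state assumption; the sketch below is a generic formulation added for illustration, not necessarily the exact protocol used in this study (symbols: P permeability, A effective membrane area, L membrane thickness, V_B receiving-compartment volume, C_A and C_B(t) vanadium concentrations in the donor and receiving compartments).

```latex
% Generic pseudo-steady-state diffusion-cell balance (illustrative, hedged):
\[
  V_B\,\frac{\mathrm{d}C_B(t)}{\mathrm{d}t} \;=\; \frac{P\,A}{L}\,\bigl(C_A - C_B(t)\bigr),
  \qquad
  P \;=\; \frac{V_B\,L}{A\,t}\,\ln\!\frac{C_A}{C_A - C_B(t)} .
\]
```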
As a result, the vanadium ion permeability was lowest when the ratio of fluorine moieties was highest in both the PFAEMs and PFCEMs. In addition, when comparing both types of IEMs, it can be seen that the vanadium ion permeability of the PFAEMs is relatively low compared with that of the PFCEMs. This result demonstrates that in the case of the PFAEMs, the crossover of cations (i.e., vanadium ions) can be effectively reduced by means of Donnan exclusion. The VRFB performance evaluation results with various IEMs are summarized in Figure 11 and Table 6. The results show a tendency for the VE to increase with increasing VBC or Sty molar ratio of the membrane, owing to the reduced electrical resistance [52]. As a result, PFAEM-1 and PFCEM-1, having the highest fluorine content, exhibited the lowest VE among the membranes tested, but did not show a significant difference from the other membranes. On the other hand, the CE was shown to increase with increasing fluorine content in the membrane, and therefore, PFAEM-1 and PFCEM-1 showed the highest values. The capacity loss of the VRFB showed a tendency to increase as the number of cycles increased, and it is known that this capacity loss originates from the crossover of the active materials through the IEMs and the irreversibility of the oxidation-reduction reactions [53]. In conclusion, PFAEM-1 and PFCEM-1, which have the highest fluorine content, showed the highest EE values, and it was confirmed that they had better charge-discharge performance than the commercial membranes. Although it may seem that the difference among the membranes is not large in the results of such a single cell, it is expected that a significant performance difference will occur in a practical system with a large membrane area. Although it is difficult to make an accurate comparison due to the different experimental conditions, the energy efficiency of the fluorine-containing PFIEMs developed in this study was shown to be superior to those of the investigated commercial membranes listed in Table 7. The characteristics of PFAEM-1 and PFCEM-1, which showed the best VRFB performance, are compared with each other as a spider chart in Figure 12. It can be seen that PFAEM-1 shows better performance than PFCEM-1 in all evaluation criteria. In summary, the PFAEM was not disadvantageous in terms of electrical resistance compared with traditional CEMs when applied to the VRFB, due to its thin film thickness. Moreover, it was expected that the vanadium ion crossover could be effectively reduced by Donnan exclusion, and long-term stability could also be greatly improved by employing PFAEMs with a fluorine moiety rather than CEMs.
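As a reading aid, the three efficiencies discussed above are linked by the standard flow-battery relation, stated here as general context rather than quoted from this paper: energy efficiency is the product of coulombic and voltage efficiency, so a membrane that trades a little VE for a larger gain in CE can still come out ahead in EE.

```latex
% Standard VRFB efficiency relation (context; the numbers are illustrative, not from Table 6):
\[
  \mathrm{EE} \;=\; \mathrm{CE}\times\mathrm{VE},
  \qquad\text{e.g.}\quad 0.97 \times 0.93 \;\approx\; 0.90 ,
\]
```

which is of the same order as the roughly 90% EE reported below for the best-performing membrane.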
Conclusions
In this study, novel thin reinforced IEMs were developed by combining a porous PE substrate and an ionomer containing a fluorine moiety for VRFB application. By adjusting the ratio of VBC or Sty and OFPMA, the electrochemical and physicochemical properties of the membranes were effectively controlled. The prepared PFIEMs exhibited superior mechanical properties compared with the commercial membranes despite the thin film thickness owing to the tough physical properties of the porous substrate used. The ion conductivity of the PFIEMs, which contain a large non-ion conducting region owing to the use of the inert porous substrate, was revealed to be lower than that of the commercial membranes. However, the electrical resistance of the PFIEMs could be greatly reduced due to the thin film thickness. Meanwhile, as a result of the evaluation of oxidation stability using Fenton's reagent, it was confirmed that the oxidation stability of the membranes could be greatly improved through the use of a PE support and the introduction of a fluorine moiety into the filling ionomer. It was also found that the PFIEMs having higher fluorine content exhibit better chemical stability in the vanadium electrolyte, similar to the result of the Fenton oxidation. In addition, the membranes with the highest content of fluorine (i.e., PFAEM-1 and PFCEM-1) showed the lowest vanadium ion permeability, which resulted in the highest current efficiency in the VRFB tests. The PFIEMs also exhibited higher VE compared with the commercial membranes due to the relatively low mass transfer resistance.
Overall, PFAEM-1 and PFCEM-1, which have the highest fluorine content, showed the highest energy efficiency (89.9% and 87.6%, respectively), and the VRFB performance improvement achieved by using the thin reinforced membranes is expected to become larger in a practical system with a large membrane area. Moreover, as a result of comparing the PFAEM and PFCEM, it was concluded that the PFAEM is better than the PFCEM in terms of both charge-discharge performance and durability in the VRFB.
10,913
2021-11-01T00:00:00.000
[ "Materials Science", "Engineering", "Chemistry" ]
Study on mechanism and influential factors of progressive collapse resistance of base-isolated structure
Abstract
The progressive collapse of a structure caused by partial failure will cause severe consequences and massive losses, and structural progressive collapse resistance has always been a hot topic of current research. In order to study the progressive collapse mechanism of base-isolated structures, a test study and numerical simulation of base-isolated structures were carried out based on the vertical Pushover method, and the variation rule of the capacity of the remaining structure and its influence mechanism were analyzed. The effects of the isolation bearing failure position, the size of the beam of the seismic isolation layer, the type of isolation bearing, and the horizontal stiffness of the seismic isolation layer on the capacity of the remaining structure were compared and analyzed. The results show that the non-uniformity of the beams and the concentrated loading at the nodes easily formed a linear catenary mechanism, resulting in more severe beam end damage than mid-span damage. In the case of side isolation bearing failure, due to the lack of sufficient lateral restraint, the capacity was significantly lower than in other conditions, making partial collapse more likely. Therefore, setting more transfer paths to improve the structure's resistance to progressive collapse was necessary. Increasing the size of the beam of the seismic isolation layer could improve the capacity of the remaining structure of the alternate load path in the base-isolated structure. Changes in the horizontal stiffness of the seismic isolation layer and the type of isolation bearing have little effect on the progressive collapse resistance of the remaining structure.
Progressive collapse means that under the action of accidental loads such as terrorist attacks, fires, and vehicle impacts, local damage to the structure causes a chain reaction and then causes damage to other parts of the structure, resulting in the collapse of the whole structure or a large part of it. Base-isolated structures are widely used in seismic buildings, and it was necessary to conduct relevant research on the progressive collapse of the base-isolated structure under accidental loads. Tavakoli et al. [25] conducted a nonlinear static analysis on a base-isolated structure to analyze its progressive collapse resistance. It was found that the isolation system did not play an influential role in improving the structure's collapse resistance; that is, it did not enhance the anti-collapse ability of the base-isolated structure. Huang et al. [26], based on the introduction of a seismic damage model of isolators, conducted a reliability analysis of base-isolated frame-wall structures using the global reliability method, from which the progressive collapse resistance and the structural seismic damage performance can be acquired. Yang et al. [27] studied the progressive collapse performance of base-isolated frames supported by stepped foundations in mountainous areas under two-directional coupled dynamic excitation and gave advice on the progressive collapse design of such frames. The mechanism and influencing factors of progressive collapse resistance have not been studied. This paper used the nonlinear static Pushdown analysis method for the base-isolated structure to conduct the test research and the finite element analysis verification of its capacity and collapse mechanism.
The influence of the isolation bearing failure position, the size of the beam of seismic isolation layer, the type of isolation bearing, and the isolation bearing stiffness on the collapse mechanism of the base-isolated structure were compared and analyzed. Pushdown analysis methods for base-isolated structure Pushdown analysis used the pushover analysis method to analyze the progressive vertical collapse of structures. Pushover analysis, namely the nonlinear static analysis method, was a method to evaluate existing structures and design new structures based on performance, mainly used to evaluate the seismic performance of structures. Pushover analysis focuses mainly on the lateral collapse of structures, while pushdown analysis focuses on the vertical collapse of structures. In pushdown analysis, there were two vertical loading modes: full-span and damaged loading, mainly divided into force loading and displacement loading. The full-span load refers to the vertical load of the structure, which increases uniformly in each span. In the failure span loading, only the increase of the failure span caused by the failure beam element was considered, while the vertical load of other unaffected spans remains unchanged. This paper adopted the Pushdown analysis method of damaging cross-loading proposed by Khandelwal et al. [28] for analysis. Increasing loads were applied to the initial buckling region, and gravity loads were applied to the entire region. Pushdown analysis based on displacement control was used, as depicted in Fig. 1. Testing model design In this study, a student dormitory building is used as a prototype for the experimental model design. The project is a reinforced concrete frame structure with a total of 6 floors. The height of each floor is 3.30m, and the total height is 19.30m. Category C building, seismic fortification intensity of 8°, basic seismic acceleration 0.3g, design earthquake group 2, and site characteristic period 0.4s. The testing model was composed of isolation bearings and a superstructure model. The superstructure adopted the RC frame structure. The primary reference parameters of the frame testing model of the base-isolated structure were as follows: the section size of the frame column was 150mm×150mm, the size of the upper frame beam was 100mm×150mm, and the size of the beam of the seismic isolation layer was 100mm×200mm. Moreover, the isolation bearing was selected with a diameter D 100mm lead rubber bearing (LRB). The specific parameters of isolation bearing were demonstrated in Table 1, and the size of the testing model is depicted in Fig. 2. By Chinese code [29], the compressive strength of concrete at the age of 28 days was determined by averaging the tested values of three 150 mm×150mm×300mm were 25.6 MPa. The yield strength of reinforcing f y and the ultimate strength of reinforcement f t was determined by averaging the tested values of three 400-mm long bars from the same batch of bars used in the test. The yield strength of reinforcement f y was 238MPa, and the ultimate strength f t was 319MPa. The layout of measuring points was one of the critical points of the model test. According to the model and test characteristics, as well as the actual demand for data and test equipment limitations, the response information of the whole and local components was mainly measured. 
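To make the loading scheme described above concrete, the following sketch walks through a displacement-controlled pushdown of the damaged span in pseudo-Python: gravity is applied to the whole frame, then the node above the removed bearing is pushed down in increments while the reaction (remaining capacity) is recorded. `FrameModel` and its methods are hypothetical placeholders for whatever solver is used, not the authors' actual ABAQUS workflow.

```python
# Illustrative displacement-controlled pushdown (damaged-span loading per Khandelwal et al.).
# `model` is a hypothetical nonlinear frame model object; the API below is assumed.
def pushdown_curve(model, failed_node, d_max=160.0, steps=200):
    """Return (displacement, resistance) pairs for a displacement-controlled pushdown."""
    model.apply_gravity()                      # gravity load on the entire structure
    curve = []
    for i in range(1, steps + 1):
        d = d_max * i / steps                  # imposed vertical displacement [mm]
        model.set_vertical_displacement(failed_node, -d)
        model.solve_nonlinear_static()         # material + geometric nonlinearity
        r = model.reaction_force(failed_node)  # resistance mobilised by the remaining structure
        curve.append((d, r))
        if model.has_collapsed():              # e.g., rebar fracture or loss of convergence
            break
    return curve
```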
According to the overall measurement and local measurement, the test device was designed and tested to study the progressive collapse behaviors and anti-progressive collapse performance of the remaining structure after the initial failure of the base-isolated structure. Force and vertical displacement sensors were set at the initial failure position of the bearing. Horizontal displacement sensors and steel bar strain sensors were arranged at both ends of the beam of the isolation layer in the initial failure isolation bearing section, and the deformation and crack development of the superstructure, and the deformation of the remaining bearing were observed and recorded in the whole process of the test. The specific layout and number of the sensors are shown in Fig. 3, in which a, b, and c represent the force, displacement, and strain sensors, respectively. The displacement sensor was used to monitor the displacement response during the test, and the force sensor was used to monitor the internal force redistribution process of the damaged structure. The finite element software ABAQUS established the finite element model of the base-isolated structure. The perspective and plan view of the structure is presented in Fig. 4. The beams, slabs, and columns used the hexahedral reduced solid element C3D8R, and the steel reinforcement used the three-dimensional two-node truss element T3D2, and the steel skeleton was embedded in the concrete through the EMBED command. The isolation bearing was modeled with solid elements. Due to the incompressible or almost incompressible properties of the material, the 8-node hexahedral hybrid reduction element C3D8RH was required for the material. The hybrid formula allowed the nodal displacement of the element to be used only to calculate the deviatoric strain and deviatoric stress, and the compressive stress of the element was determined by an additional degree of freedom, which could prevent the problem of volume self-locking. The reduced integration could reduce the integration points and increase the calculation efficiency. The lead core was a conventional material. In order to prevent the self-locking problem of shear force, it was recommended to use the reduced integration 8-node linear hexahedron element C3D8R for calculation. Due to the small deformation, the steel could use the 8-node hexahedral incompatible element C3D8I to prevent the shearing cut self-locking problem. The superstructure was connected to the isolation bearing by a Tie connection. The mesh size of the finite element model is determined according to the experimental model mesh division suggestions. The concrete mesh size is 50mm, the reinforcement mesh size is 60mm, and the steel plate and rubber mesh size in the isolation bearing are the same, which is 50mm. The lead core grid size is 30mm, and the grid sensitivity analysis is carried out. The impact of the grid on the results is acceptable. Because the concrete was cast-in-place, the merge command was used in the numerical model to perform Boolean operations on the concrete elements to form a whole. Reinforcement was embedded into the concrete using the EMBED command without considering the bond slip between reinforced concrete. The mesh sizes of reinforcement and concrete elements were adjusted according to the model by trial calculation. The constraints in the direction of U1 and U2 were set on the failure column, and the failure column only shifted up and down the direction of U3 in the loading process. 
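As a compact reference, the discretization choices stated above can be collected in a simple lookup. This is only an illustrative summary of the stated modelling decisions, not ABAQUS input or the authors' scripts; the part labels are informal.

```python
# Summary of the FE discretization described in the text (illustrative only).
FE_DISCRETIZATION = {
    "concrete beams/slabs/columns": {"element": "C3D8R",  "mesh_mm": 50},  # reduced integration
    "reinforcement":                {"element": "T3D2",   "mesh_mm": 60},  # truss, EMBED in concrete
    "bearing rubber":               {"element": "C3D8RH", "mesh_mm": 50},  # hybrid, near-incompressible
    "bearing lead core":            {"element": "C3D8R",  "mesh_mm": 30},
    "bearing steel plates":         {"element": "C3D8I",  "mesh_mm": 50},  # incompatible modes vs. shear locking
}
```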
The model adopts a fixed constraint at the bottom of the columns instead of the ground anchor action used in the test. In the numerical simulation, displacement control was used to load the structure, and coupling reference points were set on the failed columns to ensure the convergence of the model.
Overall collapse and failure mode
To verify the rationality and reliability of the selected structural elements and the finite element parameters in the finite element model, test analysis and ABAQUS finite element simulation analysis of a single frame were carried out. The testing model is shown in Fig. 5. From the vertical pushover diagrams of the test and the simulation in Fig. 6, it could be observed that during the whole collapse process, from the beginning of loading to the complete failure of the structure, both the testing model and the finite element model went through the ascending section to reach the same ultimate capacity; the concrete cracked, the capacity of the remaining structure decreased sharply to a specific value and then went through the transition stage, and finally the reinforcement fractured and the structure failed. As structural nonlinearity gradually developed, differences between the testing model and the finite element model in constitutive relations and boundary constraints became apparent, and the difference between the two curves rose until the reinforcement (catenary) stage. With the increasing vertical displacement of the failed column, the stiffness of the frame structure further decreased. By comparing the tested structure and the simulated tensile damage condition in Fig. 7, the deformation of the adjacent-span beam-column members was observed, and apparent cracks appeared in the members. However, an in-plane tilt phenomenon appeared in the adjacent span at the end of loading, indicating that the seismic isolation layer and the timely release of the reaction force from it could play a certain protective role for the internal members of the adjacent span, but there was a risk of causing the remaining structure to tilt or even overturn. From the failure mode of the whole testing model, the final failure of the structure was similar to the "brittle failure" mode of a lightly reinforced beam. The main reason for this failure mode was that the concentrated loading at the joints made it easy to form a linear catenary mechanism, resulting in more severe damage at the beam end than at the middle of the beam span. The beam end would be damaged when the load increased to a certain extent.
Remaining structural strain
From the strain curves of the concrete surface at the beam ends in Fig. 8, it can be seen that individual strain data are discrete, mainly because the location of the concrete cracks was uncertain. However, the strain curves of the concrete at the beam ends generally conformed to the law of concrete strain variation and can reflect the internal force at the end of the member. The change of surface strain of the concrete was also related to the response of the remaining structure. The strain curves in Fig. 8 show sudden changes of the strain values, to different degrees, in the range of vertical displacement from 29.50 to 45.50 mm. Combining this with the relationship curve in Fig. 6, it could be found that the structure completed the transformation from the beam mechanism to the catenary mechanism in this vertical displacement range.
During this process, the sudden release of external loads and the sudden decrease of internal forces caused a significant change of strain at the end of the beam.
Displacement and damage of the isolation bearing
The displacement of the isolation bearing in the test and in the simulation was extracted and compared. Figure 9 compares the displacements of the isolation bearing obtained from the test and from the finite element analysis, and Figures 10 and 11 show the deformation of the isolation bearing in the test and in the finite element analysis, respectively. It was found that the stiffness of the upper structure was much greater than that of the isolation bearings, so the relative displacement between the isolation bearings was minimal and could be ignored. The isolation bearings on both sides of the failed isolation bearing were extracted for comparison. The changing trend of the isolation bearing displacement was the same, the maximum displacements were similar, and the difference was only 3 mm. The simulated isolation bearings had the same shape and size as in the test, solid elements were used to simulate the isolation bearings, and the constitutive model adopted the relationship obtained from the experiment. Therefore, the use of solid elements to simulate isolation bearings had good applicability, as shown in Fig. 11. It can be seen from the results of the finite element model and the experimental model that the finite element model could well reflect the capacity and collapse mechanism of the base-isolated structure and could reveal the mechanism of progressive collapse. Therefore, the finite element model had adequate applicability and accuracy. The failure condition of the adjacent isolation bearings under failure of the center column isolation bearing was extracted in Fig. 12, and the internal failure condition of the lead-core isolation bearing was analyzed. It could be seen from the simulation process that, as the vertical displacement of the center column increased, the isolation bearing was affected by the horizontal thrust of the beam of the seismic isolation layer, the local stress of the isolation bearing became larger and larger, and the stress on the left and right halves of the isolation bearing became differentiated. The stress at the end near the failed isolation bearing increased gradually, while the stress at the end away from the failed isolation bearing changed little, leaving the lead-core isolation bearing under uneven compression. With the increase of horizontal displacement, the increasing stress on both sides of the isolation bearings adjacent to the failed bearing aggravated the non-uniformity of the compressive stress distribution.
Node damage
When the test was completed, most of the concrete near the beam-column joint area was crushed, and the structural damage mainly occurred at the beam ends and the beam-column joint areas, as demonstrated in Fig. 13. With the increase of displacement, the plastic hinge fails and the bending moment at the beam end decreases. At this time, the reinforcement is tensioned, and the axial force in the beam provides the anti-collapse bearing capacity. However, when the components were finally destroyed, the tensile reinforcement was generally necked and fractured, and the concrete in the compression area was crushed and broken. Equivalent plastic strain clouds mapped in the finite element model were extracted. When the maximum principal strain of an element reached ε_max = 0.01, the element failed.
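The element-erosion rules used in the discussion can be stated as two simple checks. The thresholds below are taken from the text (0.01 maximum principal strain for concrete elements here, 0.1 equivalent plastic strain for reinforcement elements as stated later); the function names are illustrative, not part of any FE package.

```python
# Minimal sketch of the element failure checks described in the text.
def concrete_element_failed(max_principal_strain: float, eps_max: float = 0.01) -> bool:
    """Concrete solid element is removed once the maximum principal strain reaches eps_max."""
    return max_principal_strain >= eps_max

def rebar_element_failed(equivalent_plastic_strain: float, peeq_max: float = 0.1) -> bool:
    """Reinforcement truss element is removed once the equivalent plastic strain reaches peeq_max."""
    return equivalent_plastic_strain >= peeq_max
```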
It could be seen from Fig. 13 that the tensile reinforcement at the beam end was broken, and the damage position of the concrete was similar to the test. In the numerical model, the concrete at the beam ends of the first-story frame was crushed, the damage at the beam ends on both sides of the seismic isolation layer was severe with a large number of elements failing, the node damage was similar to the test, and the edge beams showed apparent torsional damage under large deformation, causing elements to exceed the ultimate strain. The failure location of the side beam in the test was consistent with the simulated distribution. The comparison between the numerical simulation and the test failure shows that the numerical simulation could well reflect the experimental phenomena.
Analysis of influencing factors of progressive collapse mechanism of base-isolated structure
In order to further study the vertical progressive collapse mechanism and capacity of the base-isolated structure, four main factors that affect its progressive collapse mechanism were studied: the failure position, the beam height of the isolation layer, the stiffness of the isolation layer, and the type of isolation bearing. The working conditions, analyzed through the establishment of 24 finite element models, are listed in Table 2.
Failure position analyses
In order to analyze the influence of the isolation bearing failure position on the structure's resistance to progressive collapse, working models A1-A3 in Table 2 were selected for analysis. Figure 14 shows the load-displacement curves at the different failure positions. The remaining structural capacity and displacement at the different failure positions were extracted, as demonstrated in Table 3. The data in the table show that the peak capacity of the remaining structure under the A2 working model was only 97.74 kN, which was significantly reduced (only 60% of that in the failure case of the middle isolation bearing) and was more likely to cause partial collapse of the structure. The reason was that, due to the lack of sufficient lateral restraint at the failure position of the side isolation bearing, only one side was laterally restrained by an isolation bearing, resulting in a decrease in the capacity of the remaining structure. Compared with the A3 working model, the displacement at the beam mechanism peak was about 18 mm in both cases; there was a delay in reaching the peak value of the beam mechanism, and the peak values of the two were not much different. The peak beam-mechanism capacity of the remaining structure under the A1 working model was 163.6 kN, slightly higher than that of the A3 working model, which was 157.7 kN. After the peak value, the capacity of the remaining structure in the catenary phase did not increase significantly in either case. The displacement of the isolation bearings adjacent to the failure position in the A1 and A3 working models was minimal, while the displacement in the A2 working model was only 5 mm, which also shows that the axial restraint provided by the side span to the frame beam of the collapsed span was significant, and the capacity of the remaining structure was correspondingly high.
Analysis of component failure condition during the collapse
According to GSA [30] and DoD [31] advice, if the displacement of the failed column reached 1/5 of the single beam span, the member was considered invalid.
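The deformation criterion cited above is simple arithmetic, sketched below; the next passage explains that this study adopts a relaxed limit of 160 mm to account for beam-slab action. The function is illustrative only.

```python
# GSA/DoD-style deformation criterion: member deemed failed once the vertical drop
# of the removed column exceeds a set fraction of the single beam span.
def member_failed(column_drop_mm: float, span_mm: float, ratio: float = 1 / 5) -> bool:
    return column_drop_mm >= ratio * span_mm

# With the 650 mm span of this test frame, the 1/5 criterion gives a 130 mm limit,
# while the roughly 1/4-span limit adopted in this study corresponds to about 160 mm.
print(member_failed(160.0, 650.0))          # True  (exceeds 130 mm)
print(member_failed(160.0, 650.0, 1 / 4))   # False (just below 162.5 mm)
```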
In this study, the synergistic effect of the beam-slab substructure under large deformation was considered, and the vertical displacement limit was set at 160 mm, about 1/4 of the beam span of 650 mm. The specification [35] stipulates that the maximum horizontal displacement of the isolation bearing under a rare earthquake should not exceed the smaller of 0.55 times its effective diameter and three times the total thickness of the rubber layers. In the case of failure of the center column isolation bearing, the compression damage was mainly concentrated on both sides of the failed column, and the horizontal displacement of the side isolation bearing was the largest due to its lack of sufficient lateral restraint. The seismic isolation bearing was subjected to a large horizontal displacement under the horizontal action of the beam; if the horizontal displacement limit of the isolation bearing was exceeded, the structure was declared to have failed. The overall failure form of the structure was analyzed based on the failure of the middle isolation bearing. For convenience of description, the concrete columns were numbered from left to right as I, II, III, IV, and V. As presented in Fig. 15 (a), when the vertical displacement of the top of the failed column δ reached 24 mm, the II and IV span beams inclined slightly, and the horizontal displacement ∆ of the bearing began to increase. Compression failure of the beam-column joints of columns II and IV in the beam of the seismic isolation layer began (see Fig. 15 b). When δ reached 44 mm, the concrete was crushed at both ends of the failed isolation bearing, the tensile reinforcement began to yield, and the isolation bearing displacement further increased (see Fig. 15 c). When δ reached 98 mm, the deformation of the beams on both sides of the failed isolation bearing increased, torsional deformation occurred, and the isolation bearing displacement reached its maximum. When δ reached 160 mm, the vertical deformation of the II and IV spans was large, the test piece collapsed as a whole, the internal forces of the failed members were released, and the displacement of the isolation bearing was reduced. The final failure characteristics of the test are shown in Fig. 15 (d). When the middle column isolation bearing failed, the compression damage was mainly concentrated around the failed column and was transmitted along the beam span, and the damage to the diaphragm was almost negligible. When the side isolation bearing failed, the displacement of the other isolation bearings changed little, so it is not discussed further. The failure condition of the side isolation bearing is depicted in Fig. 16. When δ reached 30 mm, spans I and II inclined slightly, and the beam-column joints on the right side of column II of the seismic isolation layer began to be damaged by compression, as depicted in Fig. 16 (a). When δ reached 93 mm, the deformation of spans I and II was obvious, part of the concrete was crushed, and the tensile reinforcement began to yield (see Fig. 16 b). When δ reached 120 mm, the deformation of spans I and II increased, and torsional deformation of spans I and II occurred (see Fig. 16 c). When δ reached 160 mm, the vertical deformation of spans I and II was large, the specimen collapsed as a whole, and the final failure characteristics of the test are shown in Fig. 16 (d). The damage in the remaining failure case was similar to that of the middle isolation bearing failure, as observed in Fig. 17. As shown in Fig. 17 (a), when δ reached 24 mm, the I and III span beams were slightly inclined, and the displacement of the isolation bearing began to rise. The beam-column joints of column II in the isolation layer began to be damaged by compression, as shown in Fig. 17 (b). When δ reached 51 mm, the concrete at both ends of the isolation bearing was crushed, the tensile steel began to yield, and the displacement of the isolation bearing further increased, as shown in Fig. 17 (c). When δ reached 108 mm, the deformation of the beams on both sides of the failed isolation bearing increased and torsional deformation occurred; the isolation bearing displacement reached a maximum. When δ reached 160 mm, the vertical deformation of the I and III spans was large, and eventually the whole specimen collapsed.
Stress condition analysis of bars
When an isolation bearing failed, the internal force of the superstructure was redistributed due to the change in position of the bottom isolation bearing. In the post-processing, the concrete in the frame was hidden and only the complete reinforcement skeleton was displayed, and the stress condition of the steel bars in the remaining structure after the failure of the isolation bearing was analyzed. When the equivalent plastic strain of the reinforcement reached 0.1, the reinforcement element failed. Figure 18 shows the Mises stress cloud diagram of the structural steel bars under the three working conditions. The end near the failed isolation bearing is denoted the near beam end, and the side far from the failed isolation bearing the far beam end. In the A1 working model, due to the failure of the middle isolation bearing, the upper reinforcement at the far end of the beam adjacent to the failed isolation bearing was broken, and the lower reinforcement at the near beam end was broken; the stress distribution was symmetrical about the mid-span axis. At this time, the compression reinforcement in the beam was under tension; with the gradual necking and failure of this reinforcement, the vertical deformation of the remaining structure diverged and the collapse state was reached. In the A2 working model, the steel bar fracture mainly occurred at the far end of the beam, while the steel bar stress of the other side span beam was the smallest. The reason was that the lack of sufficient lateral restraint led to a reduction in the remaining structural capacity. In the A3 working model, the damage of the reinforcement at the beam end was similar to that under the middle isolation bearing failure, and the stress of the reinforcement at the side column increased.
Beam height analyses of the seismic isolation layer
The most significant difference between the base-isolated structure and an ordinary structure is the added seismic isolation layer, so different structural measures for the seismic isolation layer inevitably lead to differences in the progressive collapse resistance of the structure. Under normal circumstances, a beam-slab floor is set on top of the seismic isolation layer, and the parts related to the seismic isolation bearings should adopt a cast-in-place concrete beam-slab structure, whose stiffness and bearing capacity should be greater than those of general floor beams and slabs [32]. The requirements in the above code were mainly based on the response characteristics of the structure under earthquake action.
The influence of the structural measures of the seismic isolation layer on the progressive collapse resistance of the structure needed to be further studied. Therefore, the failure condition of the seismic isolation bearing was used as the background to compare and analyze the structure progressive collapse resistance performance of the beam of seismic isolation layer of different sizes and then determined its variation law. When changing the beam size of the seismic isolation layer, the contribution of the seismic isolation floor slab was not considered. That is, the thickness of the seismic isolation floor slab was assumed to be 0mm. The B1-B9 working model was selected for analysis. The capacity of the remaining structure performance of different beam sizes under different working conditions was presented in Fig. 19, and the remaining structural capacity and displacement were illustrated in Table 4. It could be seen from the data in the table that when the beam size of the isolation layer changed, the isolation bearing displacement of the isolation bearing hardly changed. Under the condition of different failure positions, the height of the beam of the isolation layer was 175mm and 150mm, respectively, and the peak capacity of the remaining structure was not much different. When the number of isolation layer beams increased, compared with different failure location conditions, the peak isolation bearing capacity of the remaining structures of working model B1 was increased by 18%, 13%, and 15%, respectively, compared with B4, B2, B5, B3, and B9. Therefore, increasing the size of the beam of the isolation layer could improve the remaining structure's isolation capacity, and the beam mechanism's peak value was also slightly increased. Therefore, for a seismic-isolated structure with progressive collapse resistance collapse requirements, the size of the seismic isolation layer's beam should be appropriately increased to improve the capacity of alternate load paths in the remaining structure on the premise of ensuring the essential seismic resistance and structural requirements of the beam of the seismic isolation layer. Influence of isolation layer isolation bearing type and restraint stiffness The base-isolated structure differed from the traditional seismic structure, and the different seismic isolation layers of the base-isolated structure had a particular influence on the capacity of the remaining structure against progressive collapse. In establishing a working model, as indicated in Table 2 for comparative analysis, the effects of different isolation bearing types and horizontal restraint stiffness of the seismic isolation layer on the progressive collapse mechanism were studied. Figure 20 was a comparison diagram of pushdown curves under various working conditions. It was found that the peak value of the beam mechanism of the capacity of the remaining structure of the non-baseisolated structure (constrained with 6 degrees of freedom) was slightly higher than the capacity of the remaining structure of the base-isolated structure under the action of the beam mechanism because of the isolation. A summary of the remaining structure's capacity considering the isolation layer's stiffness and the type of isolation bearings is illustrated in Table 5. The change of seismic isolation bearing would not improve the flexural capacity of the beam end of the frame. 
The horizontal restraint of the seismic isolation layer of the base-isolated structure was weaker than that of the non-isolated structure, so the peak capacity of the base-isolated structure was smaller. At the same time, the existence of the isolation bearing allowed a certain translation and rotation of the beam end of the isolation layer. Therefore, under the conditions of removing the short-side middle isolation bearing and the inner isolation bearing, the failure of the plastic hinges at the beam ends of the base-isolated structure was less severe than, and occurred later than, that of the non-base-isolated seismic structure. However, base-isolated structures with different isolation bearings, or with isolation bearings of reduced stiffness, showed little difference in the vertical collapse capacity of the beam mechanism stage under the different failed-bearing conditions. In the catenary mechanism stage, under failure of the middle isolation bearing, the capacity of the remaining structure of the base-isolated structure was higher than that of the non-isolated structure. The reason was that, compared with the non-isolated structure, the damage of the plastic hinges at the beam ends of the base-isolated structure was delayed, so the deformation of the catenary mechanism was also delayed and the decrease in the capacity of the remaining structure was postponed. Due to the lack of sufficient lateral restraint, the vertical capacities of the non-base-isolated structure and the base-isolated structure in the catenary stage were not much different in the case of side isolation bearing failure. Limited by the test conditions, the test model in this study is a two-dimensional plane model, which cannot fully reflect the progressive collapse mechanism of an actual three-dimensional model; the progressive collapse resistance of an actual base-isolated structure should therefore be much stronger. However, since the isolation layer is far less constrained than other floors, the isolation structure cannot form an effective beam mechanism and catenary mechanism like the non-isolation structure to give full play to the progressive collapse resistance of the remaining structure. Therefore, it is necessary to carry out a special anti-progressive-collapse design for seismic isolation structures with a high risk of extreme events.
Conclusions
From the results of the current study, the following conclusions can be drawn:
(1) Releasing the reaction force from the seismic isolation layer could protect the adjacent internal members to a certain extent. As the vertical displacement of the center column increases, the increasing stress on both sides of the isolation bearings adjacent to the failed bearing aggravates the non-uniformity of the compressive stress distribution. The non-uniformity of the beams and the concentrated loading at the nodes easily formed a linear catenary mechanism, resulting in more severe beam end damage than mid-span damage.
(2) In the case of side isolation bearing failure, due to the lack of sufficient lateral restraint, the capacity was significantly lower than in other conditions, making partial collapse more likely. Therefore, setting more transfer paths for the base-isolated structure was necessary to strengthen the progressive collapse resistance of the structure.
(3) Increasing the size of the beam of the seismic isolation layer could improve the capacity of the remaining structure of the alternate load path in the base-isolated structure.
For the base-isolated structure which needs to resist progressive collapse, the size of the beam of the seismic isolation layer could be appropriately increased to improve
7,641.2
2022-12-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
OpenPepXL: An Open-Source Tool for Sensitive Identification of Cross-Linked Peptides in XL-MS
In Brief
XL-MS has been recognized as an effective source of information about protein structures and interactions. OpenPepXL is a sensitive XL-MS identification software that reports from 7% to 40% more structurally validated cross-links than other tools on data sets with available high-resolution structures for cross-link validation. It is open source and has been built as part of the OpenMS suite of tools. OpenPepXL supports all common operating systems and open data formats.
Cross-linking MS (XL-MS) has been recognized as an effective source of information about protein structures and interactions. In contrast to regular peptide identification, XL-MS has to deal with a quadratic search space, where peptides from every protein could potentially be cross-linked to any other protein. To cope with this search space, most tools apply different heuristics for search space reduction. We introduce a new open-source XL-MS database search algorithm, OpenPepXL, which offers increased sensitivity compared with other tools. OpenPepXL searches the full search space of an XL-MS experiment without using heuristics to reduce it. Because of efficient data structures and built-in parallelization OpenPepXL achieves excellent runtimes and can also be deployed on large compute clusters and cloud services while maintaining a slim memory footprint. We compared OpenPepXL to several other commonly used tools for identification of noncleavable labeled and label-free cross-linkers on a diverse set of XL-MS experiments. In our first comparison, we used a data set from a fraction of a cell lysate with a protein database of 128 targets and 128 decoys. At 5% FDR, OpenPepXL finds from 7% to over 50% more unique residue pairs (URPs) than other tools.
On data sets with available high-resolution structures for cross-link validation OpenPepXL reports from 7% to over 40% more structurally validated URPs than other tools. Additionally, we used a synthetic peptide data set that allows objective validation of cross-links without relying on structural information and found that OpenPepXL reports at least 12% more validated URPs than other tools. It has been built as part of the OpenMS suite of tools and supports Windows, macOS, and Linux operating systems. OpenPepXL also supports the MzIdentML 1.2 format for XL-MS identification results. It is freely available under a three-clause BSD license at https://openms.org/openpepxl. Cross-Linking Mass Spectrometry (XL-MS) has proven to be a valuable tool in studying the structures and interactions of proteins (1)(2)(3)(4)(5). Although XL-MS is maturing as a very useful method, there is space for improvement at every step of the workflow. Especially the enrichment step of cross-linked peptides derived from cross-linked protein samples has profound effects on the XL-MS analysis as well as the following computational identification and the statistics of the FDR of annotated MS2 spectra. In many XL-MS experiments the samples still contain a vast number of noncross-linked, i.e. linear peptides; consequently cross-linked peptides usually occur with low intensities and are thus less likely to be selected for fragmentation in data-dependent acquisition as well. Therefore, precursor and fragment spectra of relatively few cross-links must be identified among a large set of spectra from unmodified peptides. This is one of the issues that make the statistics for post-processing and filtering XL-MS data more difficult when compared with the identification of linear peptides. Fragment spectra of cross-linked peptides are also more difficult to annotate as they contain fragments from two peptides. Scoring the whole cross-link fragment spectrum match might result in identifications where one peptide sequence is covered by many fragment ions whereas the second peptide is identified by its precursor mass and very few matching fragment ions only. Reliable identification of one of the peptide sequences does not depend on correct identification of the other sequence. It is possible to have an identification of a cross-linked peptide pair with a high score in a database search where the high score is based on a legitimate good match to one correct peptide, but with a bad match to the second peptide. Reliable identification of a cross-link, that is intended to be useful for modeling a protein structure or complex, requires correct identifications for both peptides and hence the whole identification can only be as good as the identification of the worst of the two peptides (6). The search for two peptides in each fragment spectrum also has implications for the performance of XL-MS identification software. For a given precursor mass in conventional MS-based protein identification, the length of a possibly matching linear peptide can be roughly estimated by applying an 'averagine' model (7). The number of candidates to be considered for matching peptides in database search primarily depends on the width of the precursor mass tolerance window and the size of the protein database. In XL-MS the mass distributes across two peptides and only the sum of their masses plus the mass of the cross-linker is known. 
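The contrast drawn above between linear-peptide and cross-link searches can be made concrete with a back-of-the-envelope calculation. The snippet below uses the standard averagine average residue mass (approximately 111.1254 Da) to estimate linear peptide length from a precursor mass; the constants and function names are illustrative and are not taken from any specific tool.

```python
# Rough length estimate for a LINEAR peptide from its mass via the averagine model;
# for a CROSS-LINK only the sum of two peptide masses (plus linker) is fixed,
# so many length splits remain possible.
AVERAGINE_RESIDUE_MASS = 111.1254  # average amino-acid residue mass [Da], averagine model
WATER = 18.011                     # approximate mass of the terminal H + OH [Da]

def approx_linear_length(peptide_mass_da: float) -> float:
    return (peptide_mass_da - WATER) / AVERAGINE_RESIDUE_MASS

# A 2,000 Da linear peptide is roughly 18 residues long, whereas a 2,000 Da
# cross-linked pair (after subtracting the linker) could be split 5+13, 8+10, 9+9, ...
print(round(approx_linear_length(2000.0)))  # -> 18
```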
The computational search space contains all possible combinations of cross-linked peptides whose sum of masses lies within the precursor mass window. Searching all combinations of peptides rather than just linearly scanning all peptides requires efficient algorithms to perform a search on acceptable time scales. The most obvious solution used by some XL-MS search tools is a brute-force enumeration of all peptide-peptide pairs and filtering them by precursor mass (8). Searches can be sped up by using stable-isotope labeled cross-linkers in the cross-linking experiment (9,10). Such labeling makes crosslinked spectra easily identifiable on the MS1 level and thus reduces the number of corresponding MS2 spectra to be searched by the database search tool. Several conventional linear peptide search tools (11) as well as xQuest (9,10,12) and pLink2 (13) use pre-calculated fragment ion indices to retrieve peptides from the protein database based on observed fragment ions. Just like StavroX (8), xQuest fragments and scores pairs of peptides at a time. Therefore, the use of labeled linkers combined with an ion index limits its computational memory consumption and makes it applicable to large protein databases. Another method for reducing the large search space is to use multi-pass scoring. A first scoring step based on a quick heuristic or a partial score can substantially reduce the number of candidates subjected to full scoring, thus reducing the overall runtime. For example, Kojak (14), XiSearch (15), and pLink2 (13) start with a linear peptide search using an open-modification search strategy. Kojak uses a few hundred of the top-scoring peptides and combines them into pairs fitting the precursor mass, whereas XiSearch and pLink2 only keep a certain number of these and search the entire database again for the second peptide. The existing algorithms constrain the search space for their full scoring. That means they do not apply their final, most discriminative score to every candidate cross-link within the precursor tolerance window. This might prematurely dismiss some candidate peptides that would have a high score as a peptide pair and reduce sensitivity in favor of efficiency. It was previously shown that one of the two peptides of a correctly identified cross-link might not be found within the first few hundred or even thousand peptides by pre-scoring linear peptides (16). Our own experiments have also shown that it is not rare to find thousands of peptide pairs with at least 3 matched fragments for each peptide for one fragment spectrum and a middle-sized database of fewer than 500 proteins (data not shown). Sensitivity is defined as the proportion of real cross-links in a data set identified by a search tool. Unfortunately, it is difficult to calculate the true number of real cross-links in a data set, because the crystal structures are often incomplete, especially for the larger complexes. The theoretical number of possible cross-links for most protein complexes is very high and only a small fraction of them is usually identified. Also, this number is the same for any fixed sample or searched database and does not affect the comparison of tools. Therefore, in this study we use the number of reported cross-links from the target protein database given a fixed FDR threshold as a substitute for the real sensitivity of a search. In this work we introduce OpenPepXL, an efficient opensource software for identification of cross-linked peptides in fragment mass spectra. 
It is based on a full exploration of all possible candidate cross-link peptide pairs for each precursor mass in order to achieve high sensitivity, but because of efficient index data structures and search algorithms, it can achieve much improved runtimes. OpenPepXL supports both labeled and label-free, mono-and heterobifunctional noncleavable cross-linkers. It is based on the OpenMS software framework (17) and makes use of multi-core architectures using the OpenMP API. OpenPepXL is part of The OpenMS Proteomics Pipeline (TOPP) that includes tools for labeled and label-free quantification, pre-and post-processing, and visualization of spectra and identification data. It can be installed on all major operating systems (Windows, macOS, and Linux) and is compatible with most computing clusters and cloud services for large-scale data analysis. It can be run as a command-line tool with a preconfigured file containing the settings, or as part of a workflow built using the graphical user interface of the free to use KNIME Analytics Platform (18). OpenPepXL supports several output formats for XL-MS identification data such as the MzIdentML 1.2 format (19), the xQuest XML output format and simple text-based tabular formats. The output can, therefore, be easily integrated into many existing XL-MS data analysis pipelines and is also compatible with the public repository PRIDE (20) which is part of ProteomeXchange (21). We compare OpenPepXL to other commonly used tools for identification of noncleavable cross-linkers (pLink2 (13), XiSearch (15), Kojak (14), StavroX (8), and xQuest (9)) on a diverse set of XL-MS experiments and show that it tends to be more sensitive while still achieving very good runtimes. Open-PepXL is available under a three-clause BSD license at https://www.openms.de/openpepxl/. MATERIALS AND METHODS Algorithm Overview-OpenPepXL belongs to the category of algorithms that score an entire candidate molecule of two peptides covalently linked with a cross-linker against an experimental spectrum without doing an open-modification search for linear peptides first. In this sense it has more in common with xQuest (9) and Stav-roX (8) than with pLink2 (13), Kojak (14), or XiSearch (15). Open-PepXL keeps a list of all linear peptides with modifications and their masses after in silico digestion of the protein database. The candidate peptide pairs are then enumerated for each MS2 spectrum precursor mass (Fig. 1). This way only the necessary pairs are created. By using the indices of the linear peptide table to reference the peptides in a pair, only a minimal amount of additional memory is required for this candidate peptide pair enumeration. Loop-links and mono-links are also considered in this step. Then theoretical spectra containing all linear and cross-linked fragments expected from the peptide pair are generated. By default, b-and y-ion series including neutral losses of NH 3 and H 2 O are considered, but a-, c-, x-and zions can also be generated to accommodate different fragmentation methods. A spectrum matching algorithm matches peaks between these theoretical and the experimental spectra. From the number of matched peaks the match-odds score for a candidate peptide pair is calculated (more on the score below). For experiments using labeled cross-linkers a few additional preprocessing steps are necessary. 
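The enumeration strategy just described (a mass-sorted peptide table queried once per precursor, storing only index pairs) can be sketched in a few lines. This is a simplified illustration of the idea, not OpenPepXL's actual code or API; the Da-based tolerance and names are assumptions.

```python
# Sketch of candidate-pair enumeration over a mass-sorted peptide table:
# yield index pairs whose summed mass plus the cross-linker mass lies within
# the precursor tolerance window.
from bisect import bisect_left, bisect_right

def enumerate_pairs(masses, precursor_mass, linker_mass, tol_da):
    """Yield index pairs (i, j), i <= j, with masses[i] + masses[j] + linker_mass
    within +/- tol_da of precursor_mass. `masses` must be sorted ascending."""
    target = precursor_mass - linker_mass
    for i, m_first in enumerate(masses):
        if 2 * m_first > target + tol_da:            # all remaining pairs are too heavy
            break
        lo = bisect_left(masses, target - tol_da - m_first, i)   # lightest admissible partner
        hi = bisect_right(masses, target + tol_da - m_first, i)  # heaviest admissible partner
        for j in range(lo, hi):
            yield i, j
```

Because only the narrow band of index pairs that can explain a given precursor is ever materialized, the extra memory per spectrum stays small, which is the property the text attributes to the index-based peptide table.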
To pair MS2 spectra of the same peptide pairs linked by light and heavy isotope labeled cross-linkers, the MS1 features across mass traces and retention time have to be detected and paired. We use the OpenMS tool for MS1 labeling (FeatureFinderMultiplex) to detect pairs of MS1 features from light and heavy cross-links based on the characteristic mass shift. OpenPepXL then maps MS2 spectrum precursors to their respective features. MS2 spectra mapped to feature pairs are then paired up and processed (Fig. 2) to get peak sets from linear and cross-linked fragments with reduced noise. When matching theoretical spectra against these peak sets, only linear theoretical fragment peaks are matched against the experimental linear peaks and vice versa. This preprocessing step is derived from the xQuest algorithm and focuses the matching and scoring on smaller sets of peaks to reduce the chance of false-positive peak matches. The scores of the linear and cross-linked ion matches are combined into one score before the ranking and filtering of candidates. FIG. 1. Overview of peptide pair candidate enumeration and identification in OpenPepXL. After in silico digestion a database of modified peptides sorted by mass is kept. For each MS2 spectrum the precursor mass (1) is used to determine the mass range for α peptides (heavier). Iterating through this list (2), for each α peptide the mass range for β peptides is determined (3) and a list of pairs is enumerated. For each candidate pair, theoretical spectra are generated and scored against one experimental MS2 spectrum (label-free experiments) or one linear-ion and one cross-linked ion spectrum (labeled cross-linker experiments). Match-Odds Score-The match-odds score used in OpenPepXL is based on the score of the same name from the xQuest algorithm (9). It is based on the probability of a random match between any peak from the experimental fragment ion spectrum and any peak in the theoretical fragment ion spectrum, given the mass tolerance window tol, the mass range r, the number of peaks in the theoretical fragment spectrum s, and the number of considered charges for all theoretical peaks c. The probability p of one random match to a fragment ion peak is calculated from these quantities. The cumulative distribution function of a binomial distribution with sample size s and probability p is used to determine the probability of getting more than k matched peaks between the experimental and theoretical fragment ion spectra by random chance, P(X > k) = Σ_{i=k+1}^{s} C(s, i) · p^i · (1 − p)^(s−i). This probability will decrease toward 0 for higher numbers of k, where a smaller probability denotes a better match, because it is less likely to have happened by chance. With the −log() function the probability is turned into a score with higher numbers denoting a better match, m = −log(P(X > k)). We call this the match-odds score m, and it is combined with the precursor error pe (difference between theoretical and experimental precursor mass in ppm) in a linear combination to obtain the final OpenPepXL score. This combination was determined by an agreement between a linear regression and a linear discriminant analysis done to find the best linear combination to separate target from decoy hits on several XL-MS data sets (refer to supplemental Methods for more details).
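As a rough illustration of this scoring scheme, the sketch below computes a binomial-tail match-odds score in Python. The form of the single-match probability p and the weights of the final combination with the precursor error are simplified assumptions for illustration only, not the exact OpenPepXL implementation.

```python
import math
from scipy.stats import binom

def random_match_probability(tol, mass_range, charges):
    """Assumed form of the single-peak random-match probability:
    a window of width 2*tol per considered charge state, relative to
    the total fragment mass range (illustrative assumption only)."""
    return min(1.0, 2.0 * tol * charges / mass_range)

def match_odds_score(k, s, p):
    """-log of the binomial tail probability of getting more than k
    matched peaks among s theoretical peaks by chance."""
    tail = binom.sf(k, s, p)              # P(X > k), X ~ Binomial(s, p)
    return -math.log(max(tail, 1e-300))   # avoid log(0) for near-perfect matches

def combined_score(match_odds, precursor_error_ppm, w_mo=1.0, w_pe=0.1):
    """Hypothetical linear combination of match-odds and precursor error;
    the real weights were fitted by regression/LDA in the original work."""
    return w_mo * match_odds - w_pe * abs(precursor_error_ppm)

# Example: 12 matched peaks out of 40 theoretical peaks
p = random_match_probability(tol=0.02, mass_range=2000.0, charges=2)
m = match_odds_score(k=12, s=40, p=p)
print(round(m, 2), round(combined_score(m, precursor_error_ppm=3.5), 2))
```

As in the description above, a higher score corresponds to a smaller probability that the observed number of peak matches arose by chance.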
Mass Spectrometry of CRM Complex-The trimeric complex of human CRM1, SNP1 and Ran carrying a Q69L mutation was cross-linked with bis(sulfosuccinimidyl)suberate (BS3) and injected into an EASY-nLC 1000 HPLC system coupled to a Q Exactive mass spectrometer (Thermo Fisher Scientific) in duplicate under three normalized collision energy (NCE) conditions using a 50-min method. MS1 and MS2 resolution were set to 70,000 and 17,500, respectively. The fifteen most abundant precursors with charges of 3-7 were selected for MS2 fragmentation at NCE 20, 24 or 28% (refer to the supplemental Methods for more details on the experimental procedure). For the protein database only the three UniProt sequences O14980, O95149 and P62826 were used. They were manually modified to reflect the modifications made during the protein expression and purification (22). The MS proteomics data including the modified protein sequences have been deposited to the ProteomeXchange Consortium (21) via the PRIDE (20) partner repository with the data set identifier PXD014359. Public Data Sets-In addition to the CRM complex data set described above, three data sets were downloaded from public repositories or kindly provided to us by other laboratories. We chose a more complex publicly available data set derived from a BS3-cross-linked crude ribosomal fraction obtained by size exclusion chromatography of HEK293 cell lysate (ProteomeXchange ID PXD006131) (23). The resulting sample was a complex mixture of more than 1700 proteins, which were quantified by label-free quantification of linear peptides. Several protein databases were provided with this data set, starting from one containing the 32 most abundant proteins and doubling in size up to the 512 most abundant proteins. We searched the HCD fragmented subset of this data set, consisting of about 170,000 MS2 spectra, against a database of the 128 most abundant proteins and 128 reversed sequence decoys. Additionally, we analyzed a data set with labeled DSS-d0/d12 and PDH-d0/d10 (pimelic acid dihydrazide) cross-linkers. Commercial Bovine Serum Albumin (BSA; Sigma-Aldrich) was cross-linked with labeled DSS or PDH cross-linker in separate experiments. Both samples were independently analyzed using HCD fragmentation and high-resolution MS/MS detection (Orbitrap Fusion Lumos) or ion trap CID fragmentation with low-resolution MS/MS detection (Orbitrap Elite). This data set was published previously as part of a larger study (24) and kindly provided to us by Alexander Leitner upon request. Furthermore, we used a cross-linked synthetic peptides data set published by Beveridge et al. (ProteomeXchange ID PXD014337) (25). Instead of using proteins digested by trypsin, tryptic peptides from the S. pyogenes Cas9 protein with one internal lysine each were synthesized in that study. The peptide termini were modified to make sure that DSS could not cross-link to the N termini or C-terminal lysines. The peptides were kept in 12 separate groups without overlapping peptide sequences. Each group was cross-linked with DSS and the cross-linked peptide solutions were mixed before the MS data acquisition of three technical replicates. This means that identified cross-links with the two cross-linked peptides coming from the same group are almost certainly valid identifications, whereas cross-links between peptides from different groups are certain to be false identifications. The protein database we used was the S. pyogenes Cas9 sequence with 10 additional proteins from the supplemental material of the original publication of this data set. FIG. 2. Preprocessing of experimental spectrum pairs for experiments with labeled linkers. DSS-d0/d12 is used as an example. Two experimental spectra from the same peptide pair but a different linker mass are matched without a mass shift and with a mass shift of the label mass difference, considering multiple charges. The result is a linear ion spectrum with unknown charges and a cross-linked ion spectrum with known ion charges. This allows for a more constrained and targeted matching to theoretical peaks. Data Processing-The .RAW files of all data sets were converted into mzML, mzXML, and MGF files using MSConvertGUI from the ProteoWizard toolkit version 3.0.10577. The binary encoding precision was set to 64-bit. Writing an index and TPP compatibility were turned on. No compression was used for mzML files. Reversed sequence decoy protein databases were generated from the target protein databases using the TOPP tool DecoyDatabase. Because it creates its own decoys, only the target database was provided to pLink2. OpenPepXLLF 1.1 (OpenPepXL Label-Free) with the TOPP tool XFDR for False Discovery Rate (FDR) estimation, XiSearch 1.6.731 with xiFDR 1.1.27 for FDR estimation, TPP 5.1.0 with Kojak 1.6.0 and PeptideProphet for FDR estimation, xQuest 2.1.3 with xProphet for FDR estimation, as well as pLink 2.3.5 and StavroX 3.6.6.5 with their built-in FDR estimation algorithms were used to identify cross-links in the label-free data sets. The parameters of the different tools were set to equal values where possible and to reasonable or similar values otherwise (supplemental Tables S1, S2 and S3). Additional filtering and post-processing were partly done with the TOPP tools IDFilter, IDMerger and TextExporter for OpenPepXL and xQuest output and otherwise with R scripts. An FDR cutoff of 5% on the cross-link spectrum match (CSM) level was applied to every tool and all data sets unless indicated otherwise. Additionally, after this cutoff only unique residue pairs (URPs) supported by at least two of the remaining CSMs were kept. Also, a filter for link distance was applied to intra-peptide links or loop-links. Linked residue pairs were only kept if they were at least 4 residues apart in the database sequence. This was done to further harmonize the tool results, because this cutoff was different among the tools and linked residue pairs with short sequence distances are not very informative. All tools were compared on the same Windows 10 PC with an Intel(R) Core(TM) i5-6500 CPU and 8 GB of RAM using one CPU core. The data sets with labeled cross-linkers were only processed with OpenPepXL and xQuest. The TOPP tool FeatureFinderMultiplex was used to detect pairs of MS1 features for OpenPepXL. Otherwise, the same processing steps and filtering rules as for the first two label-free data sets were applied. The synthetic peptides data set was processed with OpenPepXL with search settings and filter criteria matching those used in the original publication of this data set in Beveridge et al. (25). Search results for the other tools were taken from the publication. This includes the 5 and 1% CSM-FDR results from the search against the S. pyogenes Cas9 sequence with 10 additional proteins.
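A minimal sketch of the general post-processing filters described earlier in this section (5% CSM-level FDR cutoff, at least two CSMs per unique residue pair, and a minimum sequence distance of four residues for intra-peptide and loop-links) is given below; the record fields and their names are illustrative assumptions, not the actual output format of the TOPP tools.

```python
from collections import Counter

def filter_csms(csms, fdr_cutoff=0.05, min_csms_per_urp=2, min_link_distance=4):
    """csms: list of dicts with (hypothetical) keys:
    'fdr', 'urp' (residue-pair identifier), 'pos1', 'pos2', 'same_peptide'."""
    # 1) CSM-level FDR cutoff
    kept = [c for c in csms if c["fdr"] <= fdr_cutoff]
    # 2) keep only URPs supported by at least two of the remaining CSMs
    support = Counter(c["urp"] for c in kept)
    kept = [c for c in kept if support[c["urp"]] >= min_csms_per_urp]
    # 3) drop intra-peptide links / loop-links with short sequence distance
    kept = [
        c for c in kept
        if not c["same_peptide"] or abs(c["pos1"] - c["pos2"]) >= min_link_distance
    ]
    return kept
```

In practice these steps were carried out with the TOPP tools and R scripts mentioned above; the sketch only reflects the filtering logic, not the tools themselves.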
For this data set only the CSM-FDR cutoff was applied and the other filtering steps skipped to make the results directly comparable to those from the original publication. The MS proteomics data from the CRM data set, including search results from all tools compared in this study have been deposited to the ProteomeXchange Consortium (21) via the PRIDE (20) partner repository with the data set identifier PXD014359. The reanalyzed ribosomal fraction data set was deposited with identifier PXD014520 and the BSA data set with identifier PXD014523. The OpenPepXL results for the synthetic peptide data set were deposited with identifier PXD021417. Sensitivity and Specificity-In this study we use the number of reported cross-links under a fixed FDR threshold from the target protein database as a substitute for the real sensitivity of a search. Because of the tradeoff between sensitivity and specificity, we compare the sensitivity of all tools at the same FDR setting of 5% at the CSM level. Additionally, only URPs matched to at least two spectra are kept. For OpenPepXL we also recalculate the FDR at unique link level by keeping the decoy hits through the filtering steps and recalculating the FDR for the filtered list of URPs. Where it is possible, we validate the URPs against previously published structural data. For the synthetic peptide data set, it is possible to validate the identified cross-links more objectively than using protein structures. Structural Validation-TopoLink (26) was used for an analysis of solvent accessible surface distances (SASD) between cross-linked residues. A cutoff of 35 Å was chosen. SASD was measured between Cb atoms while ignoring all side chains beyond their Cb atoms. UCSF Chimera (27) with the Xlink Analyzer (28) plugin was used for a visualization of the identified cross-links on the PDB structures. A Euclidean distance cutoff of 35 Å was chosen for the link coloring. Cross-links consistent with the structures were colored blue, inconsistent cross-links red. The CRM data set was validated on the x-ray crystallography structure with PDB ID 3GJX (22) and the BSA data set was validated on chain A from the x-ray crystallography structure with PDB ID 4F5S. The ribosomal fraction data set was validated on a larger set of x-ray crystallography and cryo-EM structures. RESULTS Benchmark Results-In order to assess the performance of OpenPepXL, we compared it to five currently popular XL-MS search engines (StavroX (8), xQuest (9, 10, 12), pLink2 [13], Kojak (14) and XiSearch (15)) on a number of data sets. To the extent possible, the tools were used with settings as similar as possible (see supplemental Tables S1, S2 and S3 for all settings). The data sets used differ in size and complexity: Applying OpenPepXL to the more complex ribosomal fraction sample with thousands of proteins gives insights into sensitivity and performance, but we could only structurally verify about one third of the cross-links. Hence, a second comparison assesses both sensitivity and specificity on the highly purified sample of the CRM complex with a known threedimensional structure. Lastly OpenPepXL is applied to data generated with labeled cross-linkers and a different crosslinker chemistry to demonstrate its versatility. To assess the sensitivity of OpenPepXL compared with other tools, we ran a search on the ribosomal fraction data set. About 170,000 MS2 spectra were searched against a protein database of 128 target and 128 decoy proteins on a desktop PC with 8 GB of memory. 
The 128 target protein database was the largest database that OpenPepXL and XiSearch could handle in a reasonable runtime of less than 3 days. OpenPepXL identified 110 unique residue pairs (URPs), followed by pLink2 (13) with 102 (Fig. 3A). The calculated URP level FDR (URP-FDR) for OpenPepXL after applying the filters was estimated to be 8.8%. A Venn diagram showing the overlap of identifications between the tools is shown in supplemental Fig. S5. StavroX (8) could not finish the search because of computer memory requirements. xQuest (9) did not exceed the available memory, but the search was canceled after a week, because the projected remaining runtime under these conditions was unreasonable. Here it has to be noted that xQuest can be parallelized and can run on a cluster, so in general finishing this search with the limited amount of computer memory is probably within its capabilities. It can also analyze most data sets with labeled cross-linkers within feasible runtimes. The sensitivity of OpenPepXL comes at the cost of a full search of the squared search space and the increased runtime associated with that. To analyze the ribosomal fraction data set using one CPU core, pLink2 took 15 min, Kojak about 3 h, OpenPepXL 28 h and XiSearch 36 h (Fig. 3B). OpenPepXL can also be installed on Linux computing clusters and a speedup by a factor of 15 can be achieved by running the tool on 25 cores (supplemental Fig. S1). A structural validation of the ribosomal fraction data set proved to be difficult because most of the identified cross-links linked residue pairs that were not resolved in existing PDB structures. Results from the links that could be validated are shown in supplemental Figs. S3 and S4. Curiously, none of the links found by any of the tools were inconsistent with the structures. To show that the cross-links reported by OpenPepXL are useful for modeling protein structures and that the high sensitivity does not result from just reporting more false-positives, a highly purified sample of the trimeric CRM complex with known three-dimensional structure was measured and analyzed by all compared tools. OpenPepXL and pLink2 each reported 78 URPs in total, Kojak reported 61. The cross-links that could be mapped on the structure were 45 URPs for OpenPepXL and 41 URPs for Kojak, followed by pLink2 with 40 URPs (Fig. 4A). The calculated URP-FDR for OpenPepXL after applying the filters was estimated to be 12%. These URPs were validated by calculating the solvent accessible surface distance (SASD) between the linked residues and applying a cutoff of 35 Å. The SASD was calculated on the structure with PDB ID 3GJX. OpenPepXL reported one URP that is inconsistent with the structure, Kojak only reported consistent URPs, and pLink2 reported 2 URPs that are inconsistent with the structure (Fig. 4A, Fig. 5). The error rate of OpenPepXL on this data set is approximately equal to that of other tools with comparable sensitivity. OpenPepXL identified 14 URPs that were not identified by any of the other tools, and 6 of them were structurally validated (supplemental Fig. S6). The annotated spectra of the highest-scoring CSMs for each of those 14 URPs are shown in supplemental Figs. S9-S22. To assess the sensitivity of OpenPepXL on data with labeled cross-linkers and different linking reaction chemistries, the BSA data sets cross-linked with the labeled cross-linkers DSS-d0/d12 and PDH-d0/d10 were analyzed with OpenPepXL and xQuest.
xQuest has been developed especially for stable isotopically labeled cross-linkers. The score of OpenPepXL was calibrated using HCD fragmented MS2 spectra recorded by orbitrap instruments. Also, the spectrum alignment and deisotoping algorithms in OpenPepXL rely on high-resolution fragment spectra. Meanwhile, xQuest is mostly used for CID fragmented MS2 spectra recorded by ion trap instruments. xQuest also does not apply deisotoping but relies mainly on the stable isotope labels for denoising spectra and does not have a feature to correct for misassigned monoisotopic peaks. We obtained a data set in which equal samples were cross-linked with two different labeled cross-linkers and analyzed using both instrument types. For this data set with a very simple target system we chose to search not only for lysine and N-terminal DSS cross-links, but also included serine, threonine and tyrosine as potential linking sites. For the samples cross-linked with PDH we set aspartic acid, glutamic acid and the C terminus as potential cross-linking sites. OpenPepXL identified 65 DSS and 22 PDH URPs in the HCD fragmented orbitrap spectra, whereas xQuest identified 57 DSS and 9 PDH URPs in the CID fragmented ion trap spectra (Fig. 4B). The calculated URP-FDRs for OpenPepXL after applying the filters for the DSS orbitrap and PDH orbitrap data sets were estimated to be 1.5% and 7.1%, respectively. These URPs were validated using chain A of the structure with PDB ID 4F5S. OpenPepXL reported three DSS URPs and one PDH URP exceeding the SASD cutoff of 35 Å. xQuest identified one DSS URP exceeding the cutoff (Fig. 4B, Fig. 6). Structural validation of cross-links using rigid structures cannot account for protein dynamics and the formation of nonspecific cross-links. Additionally, cross-links have a high tendency to form in regions of proteins for which we have no structural data, e.g. in very flexible or unstructured regions. A good example of this can be seen in our structural validation of cross-links for the ribosomal fraction data set (supplemental Fig. S3). Therefore, we chose to assess OpenPepXL on a synthetic peptide data set that allows objective validation of reported cross-links independently of available structural information. For this data set all data except for OpenPepXL was taken from Beveridge et al. (25), and therefore xQuest was omitted from the comparison, because it was not considered in that publication. For Kojak, results were only available at the unique cross-link level from the 5% CSM-FDR search. Of the several FDR control methods available for Kojak, the results from Percolator using only unique cross-links were chosen for the comparison. By design of this synthetic data set, it only has two levels of comparison: CSMs and unique links. In this case, unique cross-linked peptide pairs, unique cross-links, and unique residue pairs (URPs) are equivalent. The results are shown in Fig. 7 (supplemental Table S7). This data set has a much stronger overlap in reported URPs between the tools compared with the other data sets in this study (supplemental Fig. S7). At 5% FDR OpenPepXL finds 22 URPs that are not found by pLink2, StavroX or XiSearch. 17 of those were also identified by the Kojak search with a very high average calculated FDR of 22.7%. Looking at the difference between the 5 and 1% FDR searches, the 5% FDR search results show a clear pattern in the validated links between the replicates (Fig. 7).
For each tool the second replicate has the most reported links and the third replicate the fewest. The 1% FDR search results look noisier (supplemental Fig. S8, supplemental Tables S6 and S7). The pattern in the differences between the replicates is almost unrecognizable. Although pLink2 reported the highest numbers of cross-links, it also had an unusually high calculated FDR that reached 4.1% at CSM level and 11.6% at URP level for the third replicate. We also looked at the numbers of spectra utilized by OpenPepXL and the other tools (supplemental Fig. S8). For the first of the three replicates, 5022 MS spectra were recorded. OpenPepXL assigned a result to a total of 4185 spectra, of which 2029 were targets and 2156 were decoys. Validated cross-links above the 5% FDR cutoff were assigned to 822 spectra. OpenPepXL reported 80 validated URPs below the cutoff. pLink2 assigned a result to a total of 1389 spectra, of which 1006 were targets and 384 were decoys. Validated cross-links above the cutoff were assigned to 639 spectra, and below the 5% FDR cutoff 4 additional URPs were reported. XiSearch assigned a result to 4363 spectra, of which 1686 were targets and 2677 were decoys. 491 spectra were assigned to validated cross-links above the cutoff and 11 additional URPs were reported below the 5% FDR cutoff. Although XiSearch assigned results to the most MS spectra, it assigned four times as many decoys as targets. This probably makes it very stringent compared with the other tools. Its calculated FDR values on this data set on CSM and URP level are lower than for Kojak and pLink2, but not very different from OpenPepXL (supplemental Tables S4 and S5). pLink2 seems to assign results to very few spectra, even without applying an FDR cutoff. That is partly because it uses several heuristics to filter out spectra and peptides before the actual search and many potential candidates are not kept long enough to reach the FDR estimation step. This approach makes it very fast, but it also means that some of the correct CSMs were probably already filtered out even before the FDR was estimated, and the small number of decoys might be the reason for its slightly less stringent FDR control compared with OpenPepXL, XiSearch and StavroX. OpenPepXL had an almost 1:1 distribution of targets and decoys. Among the compared tools it assigned the most targets and validated cross-links to spectra. Many of the correctly assigned CSMs are based mostly on the precursor mass without enough fragment matches for a confident identification. These are then filtered out after FDR estimation and represent the 80 validated URPs below the 5% FDR cutoff. At the same time this also leads to more correct CSMs and URPs being reported above the cutoff. With respect to this comparison, OpenPepXL assigned the most correct identifications to spectra, but there might still be room for improvement in separating correct from incorrect CSMs. FIG. 7. Results from the analysis of the synthetic peptides data set at a 5% FDR cutoff. All three replicates R1, R2 and R3 are shown. The blue bars show the number of valid CSMs/cross-links and the red bars on the negative y axis show the number of false-positive identifications. All data except for OpenPepXL was taken from Beveridge et al. (25). xQuest was omitted because it was not considered in that publication. A, Number of reported CSMs. The exact numbers are in supplemental Table S4. B, Number of identified URPs. The exact numbers are in supplemental Table S5.
OpenPepXL Features-OpenPepXL can be installed on most current computing environments based on current versions of Windows, macOS and Linux. It is applicable to all labeled and label-free noncleavable cross-linkers. It makes use of labeled linkers to constrain the search space to improve runtimes and denoise MS2 spectra, in a similar way to xQuest. OpenPepXL is to our knowledge the only tool that is able to effectively combine the match confidence of high-resolution orbitrap fragment spectra with the additional benefits from stable isotope labeled cross-link spectra preprocessing. To move the field of XL-MS toward maturity, it is necessary for as many analysis tools as possible to support standardized file formats that are agreed upon by members of the community. OpenMS supports most of the open file formats specified by the HUPO-PSI, like mzML for raw MS data and the MS identification data format MzIdentML. This support was extended to include the XL-MS data extension of the MzIdentML 1.2 specification (19). The OpenMS Proteomics Pipeline (TOPP) contains many additional tools for MS data processing and analysis, including correction of monoisotopic peak assignment and several quantification methods. OpenPepXL is fully integrated into this pipeline and can be easily combined with many of these tools to build complex processing pipelines. TOPP includes the graphical visualization tool TOPPView for spectra and peptide identifications. It was extended for XL-MS data and can visualize the MS1 features on an MS1 map, MS1 and MS2 peak spectra including the precursor isolation window of an MS2 spectrum, fragment annotations on matched MS2 peaks and the sequence coverage for both cross-linked peptides (Fig. 8). The spectrum visualization allows zooming, and the peak labels are fully editable and movable to aid in manual validation and preparation of images for publication. Manually added or edited annotations can be saved in the OpenMS internal proteomics identification file format idXML. FIG. 8. Visualization of spectra with annotated matched peaks and peptide sequence coverage in TOPPView. On the right side is the table of identifications containing a description of the identified species and several match quality metrics. On the left side is the annotated spectrum with a sequence coverage indicator. A one-sided arrow means the fragment starting at the marked residue and containing the rest of the peptide or peptide pair in the direction of the arrow was matched. A double arrow means fragments starting at the marked residue and containing the rest of the peptide or peptide pair in both directions were matched. DISCUSSION & CONCLUSION OpenPepXL is a new XL-MS identification algorithm with improved sensitivity at feasible runtime. It is available as open-source software for all major operating systems and compliant with HUPO-PSI standard formats. In our benchmark, OpenPepXL turned out to be a very sensitive XL-MS identification algorithm. It is just as effective on labeled cross-linker data as on label-free data. Its error rate is also similar to other tools with comparable sensitivity. The increased sensitivity is most likely because of the unconstrained search on the complete, quadratic search space. The specificity of OpenPepXL is a consequence of a thorough spectrum matching algorithm that considers relative mass tolerances and ion charge states determined from isotopic patterns or preprocessing of spectra pairs from labeled linkers.
The combination of the exploration of the entire search space, very strict criteria for matching peaks between theoretical and experimental spectra and efficient data structures and algorithms makes OpenPepXL a sensitive tool with feasible runtime and memory requirements. OpenPepXL is faster than XiSearch and xQuest, but falls behind Kojak and especially pLink2. Because of efficient data structures and built-in parallelization OpenPepXL achieves very good speedups even on large compute clusters and cloud services while maintaining its slim memory footprint. The increased computational effort for the complete exploration of the quadratic search space can thus be compensated in most cases. This is not the case for several of the other tools, as e.g. pLink2 is only available as a Windows executable and pLink2, StavroX and XiSearch depend on a GUI and are therefore not compatible with many remote computing environments. The implementation of OpenPepXL still has room for improvement and we are looking into ways to make it more efficient without sacrificing its unique sensitivity. Some concepts already common to proteomics data analysis like sequence tags and ion indices are already employed by several of the other XL-MS identification tools and we plan to implement these ideas into OpenPepXL in the future, as long as they are not detrimental to the final output. OpenPepXL is free to use, modify and redistribute for private, academic and commercial applications under the three clause BSD license. DATA AND SOFTWARE AVAILABILITY The MS proteomics data from the CRM data set, including search results from all tools compared in this study have been deposited to the ProteomeXchange Consortium (21) via the PRIDE (20) partner repository with the data set identifier PXD014359. The MS proteomics data from the ribosomal fraction data set (raw data originally from PXD006131 (23)), including search results from all tools compared in this study have been deposited to the ProteomeXchange Consortium (21) via the PRIDE (20) partner repository with the data set identifier PXD014520. The MS proteomics data from the BSA data set, including search results from OpenPepXL and xQuest have been deposited to the ProteomeXchange Consortium (21) via the PRIDE (20) partner repository with the data set identifier PXD014523. The MS proteomics data from the synthetic peptide data set (raw data originally from PXD014337 (25)), including search results from OpenPepXL have been deposited to the ProteomeXchange Consortium (21) via the PRIDE (20) partner repository with the data set identifier PXD021417. Software: OpenPepXL is free to use, modify and redistribute for private, academic and commercial applications under the three clause BSD license. Installers for Windows, macOS and Linux, as well as the source code are linked at https:// www.openms.org/openpepxl/.
9,724.4
2020-12-01T00:00:00.000
[ "Computer Science" ]
Degradation of Urban Green Spaces in Lagos, Nigeria: Evidence from Satellite and Demographic Data The study aimed to assess the potential of using Remote Sensing (RS) data to evaluate the changes of urban green spaces in Lagos, Nigeria. Landsat Thematic Mapper and Landsat 8 (Operational Land Imager) data acquired on May 4, 1986, December 12, 2002 and January 1, 2019 covering the Lagos Government Authority (LGA) were used for this study. A supervised image classification technique using the Maximum Likelihood Classifier (MLC) was used to create a base map, which was then used for ground truthing. A Random Forest (RF) classification technique using the RF classifier was utilized in this study to generate the final land use land cover map. RF is an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes. Lagos census population data was also used in this study to model population projection. Extrapolation of the model was used to predict data for the years 2020 and 2040. Results of the study revealed a reduction of urban green spaces due to agriculture and settlement. While the remote mapping revealed the gradual dispersion of ecosystem degradation indicators spread across the state, there exist clusters of areas vulnerable to environmental hazards across Lagos. To mitigate these risks, the paper offered recommendations ranging from the need for effective policy to green planning education for city managers, developers and risk assessment. These measures will go a long way in helping sustainability and management of land resources in Lagos. Introduction Urban green spaces such as parks and sports fields as well as woods and natural meadows, wetlands and other ecosystems provide several benefits. They serve as filters of pollutants and dust from the air, facilitate physical activity and relaxation, provide shade and lower temperatures in urban areas, and they reduce erosion of soil into the waterways [1]. Trees produce oxygen, and help filter out harmful air pollution, including airborne particulate matter [2]. Despite these benefits, urban green spaces in recent years have come under intense pressure due to increasing population growth. Increasing population could be attributed to the social and economic benefits associated with urban centers compared to rural areas [3]. Rapid urban growth, however, has major social and economic consequences including congestion and environmental degradation of green spaces. Published research by authors on environmental impacts of urban expansion shows pressure on undeveloped green spaces [3] [4] [5] [6].
Similarly, early work of Merem and Twumasi [7] on urban growth management in Central Mississippi region which is home to over half a million people revealed decline of the area's agricultural land resources due to urban development. Indeed, the impacts of land use change associated with the development and urbanization have well been documented [8] [9] [10]. Early work of Twumasi et al., [11] and [12] and Manu et al., [13] used satellite data to map urban green spaces in Accra, Ghana; Bamako, Mali and Niamey, Niger. Results of these studies showed that the decline in green spaces in these cities was associated with urban development and urbanization. In Lagos, Nigeria, anthropogenic activities that drive changes in land use and cover include urban development which is associated with urbanization and agricultural practices. Such activities have exerted much pressure through intense use of green spaces for residential and industrial purposes [14] [15] [16]. Several studies have employed remote sensing data to assess the integrity of green spaces and ecosystem in Lagos [17] [18] [19] [20] [21]. However, none of these studies integrate remote sensing data with demographic analysis. This calls for the need to find appropriate method to aid in identifying spatio-temporal changes in urban green spaces in Lagos. Perhaps, the most important element in these efforts is the need to integrate satellite data with demographic analysis to assess the status and trend of the urban green spaces. The primary objectives of this study were to couple remote sensing data with demographic data to evaluate the changes of urban green spaces in Lagos, Nigeria to enable planners and policymakers contribute to improved land administration and enhance their competence in decision-making ( Figure 1). The Study Area Lagos state is situated in the South Western Nigeria within latitudes 6 degrees 23'N and Longitudes 2 degrees and 3 degrees 42 E. As shown in Figure 2 and Figure 3, the state is bounded from the North and East by Ogun State, in the West by the Republic of Benin and the South by the Atlantic Ocean. The total land mass of the state stretches over 3345 kilometers. Like most African cities, Lagos, Nigeria, is experiencing a fast-sustained urban expansion ( Figure 3). According to 2019 World Population Review and Statistical Data, Lagos population is increasing at a startling rate. The current 2019 population is estimated at 13,903,620. In 1950, the population of Lagos was 325,218. This has grown by 1,664,414 since 2015, which represents a 3.24% annual change. While the state appears physically smaller, it is ranked as the most highly populated state in the country with an estimated population of about 14 million inhabitants representing 10% of the total population of Nigeria [22]. Data Acquisition This paper used satellite remote sensing and census-population data for the analysis. Image Data Acquisition and Processing Landsat Thematic Mapper and Landsat 8 (Operational Land Imager) images listed in Table 1 covering Lagos, Nigeria were acquired from the United States Geological Survey Earth Explorer free Online Data Services for land use land cover change classifications analysis [24]. The images were acquired with minimum cloud cover (<10%). The footprint of the Landsat data is shown in Figure 4. Image Processing To process the images, three tasks were performed. These include: Image preprocessing, rectification and image enhancement. 
Image Pre-Processing In image pre-processing, both visual and digital image processing were done. Prior to image processing, the images were imported into ERDAS Imagine image processing software for further processing. Since the images were in single bands, a layer stack technique was performed to group the bands together. The stacked images were further exported to ArcGIS 10.6 software. The Lagos Government Authority (LGA) shapefile was used to subset the area of interest, the Lagos Government Authority (LGA), from the full scenes. Image Rectifications Image rectifications were performed in order to correct the data for distortion which may have developed during the image acquisition process, using the Impact toolbox developed by the European Union Joint Research Centre. To ensure accurate identification of temporal changes and geometric compatibility with other sources of information, the images were geocoded to the coordinate and mapping system of the national topographic maps. All the images were projected to the Universal Transverse Mercator (UTM) coordinate system, zone 31 North. The spheroid and datum were referenced to WGS84. Image Enhancement Image enhancement was done in order to reinforce the visual interpretability of the images: a colour composite (Landsat TM bands 4, 5, and 3) was prepared and its contrast was stretched using a standard deviation stretch to further enhance the visual interpretability of linear features like rivers and land use features like agricultural land, forests, etc. Aside from using ERDAS Imagine image processing software to perform the layer stack of the images, all image processing was carried out using ArcGIS software and the Impact toolbox. Preliminary Image Classification and Ground Truthing Supervised image classification using the Maximum Likelihood Classifier (MLC) was used to create a base map which was then used for ground truthing. The maximum likelihood classifier was selected since, unlike other classifiers, it considers the spectral variation within each category and the overlap between the different classes. Accordingly, the land use and land cover was classified into eight classes, namely forest, bushland, agriculture with scattered settlements, grassland, bare soil, wetland, water and settlements (urban area) (Table 2). Table 2. Detailed description of land use land cover categories for the Lagos Government Authority (excerpt). Forest: an area of land of at least 0.5 ha, with a minimum tree crown cover of 10% or with existing tree species, planted or natural, having the potential of attaining more than 10% crown cover, and with trees which have reached, or have the potential to reach, a minimum height of 3 m at maturity in situ; it includes montane, lowland, mangrove and plantation forests, woodlands and thickets. Final Image Classification Random Forest (RF) classification using the RF classifier was utilized in this study to generate the final land use land cover map. RF is an ensemble learning method for classification that operates by construction of a multitude of decision trees at training time and outputting the class which is the mode of the classes. The advantages of RF are that it is not sensitive to over-fitting, it is good at dealing with outliers in training data, and it is able to calculate useful information about errors, variable importance, and data outliers [25]. This information can be used to evaluate the performance of the model and make changes to the training data if necessary.
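As an illustration of this classification step, a minimal scikit-learn sketch is given below; the band values, class labels and tree count are placeholders, not the actual settings or training samples used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder training data: rows are sampled pixels, columns are spectral bands,
# labels are land use/land cover classes (e.g. 0=forest, ..., 7=settlement).
X_train = np.random.rand(1000, 6)           # stand-in for extracted band values
y_train = np.random.randint(0, 8, 1000)     # stand-in for ground-truth classes

# Random Forest: an ensemble of decision trees; the predicted class for a pixel
# is the mode (majority vote) of the classes predicted by the individual trees.
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=42)
rf.fit(X_train, y_train)

print(rf.oob_score_)             # out-of-bag estimate of classification accuracy
print(rf.feature_importances_)   # per-band variable importance

# Classify a full image: reshape (rows, cols, bands) -> (pixels, bands) and back.
image = np.random.rand(100, 100, 6)
classified = rf.predict(image.reshape(-1, 6)).reshape(100, 100)
```

The out-of-bag score and feature importances correspond to the error and variable-importance information mentioned above as an advantage of RF.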
Preparation of Land Cover and Land Use Maps Classified images were recoded to the respective classes. Classified images were then filtered using a majority-neighbourhood filter in order to eliminate patches smaller than a specified value and replace them with the value that is most common among the neighboring pixels. Change Detection and Assessment of the Rate of Change In this study, post-classification comparison was used to quantify the extent of land use and land cover changes over a 30-year period (1986, 2002 and 2019). The advantage of post-classification comparison is that it bypasses the difficulties associated with the analysis of images that are acquired at different times of the year, or by different sensors, and it results in high change detection accuracy [26]. The rate of change for the different land use land cover classes was computed based on the following formula: rate of change = (Area_t2 − Area_t1) / t, where Area_t1 and Area_t2 are the total cover area of a class at the first and second dates and t is the period in years between the first and second scene acquisition dates. Census-Population Data Acquisition and Processing Population data for this study was obtained from the World Statistical Data website [22]. The Lagos population data was modeled using Microsoft Excel's statistical data analysis tool. Table 3 represents the data used for modeling. Also, linear time series slopes were analyzed to model population projection. Extrapolation of the model was used to predict data for the years 2020 and 2040. This represents an overall decrease of 6.12 percent. Urban area experienced significant expansion for the whole area from 1986 to 2019, while the size of the area covered by vegetation, which includes coastal mangrove (wetland), forest, bushland and grassland areas, experienced a significant decline from 1986 to 2019 (Figure 5 and Table 4). Overall, the shrinking of the water resources (water bodies and wetlands) in the LGA (Tables 4-6) signifies a worrying situation in the LGA and hence needs serious intervention to rescue the situation. Over the past two decades, the landscape of the LGA has witnessed changes in its land use/cover (Figure 5, Table 4). The changes have been exhibited throughout the landscape. Demographic Analysis Results of the census-population data analysis are shown in Table 7 and Table 8. Table 7 was generated by linear regression of the 1950, 1960, 1970, 1980, 1990, 2010 and 2019 data. The model explains about 90% of the data's variation and is statistically significant, p < 0.05 (Table 8). The trend of population using the linear model was compared to the actual population for the year range 1950-2019 and illustrated by Figure 6. As shown in Figure 6, the linear model's population projections from 2009 and beyond were lower than the corresponding actual populations. The model is presented in Figure 7 in an extrapolated form, illustrating clearly the increasing difference between the projected and actual populations. The red dots illustrate the actual data while the blue line represents the linear model projection of the population. Also presented in Figure 7 are the model equation and its coefficient of correlation. Although it explains about 91% of the variation in the population data, its projection of the 2019 population is approximately 12,000,000, which is about 2 million below the actual figure. When the data was fitted to a polynomial second order model by the Microsoft Excel data analysis toolkit, the predictions approximated the actual data (Figure 8). The coefficient of correlation for this model is approximately 100%.
Hence, the model explains about 100% of the variation in the population data. Based on its strength, this model was adopted to project the population of the city beyond the year 2019. Extrapolation of the second order polynomial (quadratic) model was used to predict data for the years beyond 2020. The population projections based on the second order polynomial model are presented in Table 9. To model the change in population per year for the years 1950, 1960, 1970, 1980, 1990, 2000, 2010 and 2019, the population changes between each consecutive pair of data were computed and the difference divided by the corresponding span of time for the change. The computation was carried out as follows. The population growth per year for a 10-year interval between Tn and Tn+10 was determined using the following formula: population growth/year = (P(Tn+10) − P(Tn)) / 10, where P(Tn) and P(Tn+10) are the populations in the years Tn and Tn+10. The following graph illustrates the rate of population change (number of people/year) per year versus time in years. As illustrated in Figure 9, the rate of population growth is positive and can be represented by the linear model y = 5991.6x − 1E+07, where y and x represent the population change/year and time in years, respectively. The computed data is presented in Table 10. The percentage change in population over a span of time is the ratio of the population change over a time interval in years, to the preceding population, multiplied by 100%. It was determined as follows: percentage change in population = ((P_t − P_{t−10}) / P_{t−10}) × 100%, where P_t and P_{t−10} represent the populations for the years t and t − 10 (10 years earlier than t), respectively. The percentage changes in population for consecutive 10-year intervals, starting with 1950-1960, were computed using Equation (5). The percentage change in population per consecutive 10 years for the Lagos population was computed from the 1950-2019 data and is illustrated in Table 11. Using curve fitting with Microsoft Excel, four possible models, exponential, polynomial (quadratic and cubic) and linear models, were built to predict the percentage change in population per 10-year interval. Each of the models is illustrated in the following graphs (Figure 10, exponential; Figure 11, quadratic; Figure 12, cubic; and Figure 13, linear). The polynomial second order (quadratic) model for predicting the % change in population/10 years versus years is given as y = 0.0107x^2 − 44.019x + 45360. The coefficient of correlation R^2 is 90%. Hence, the model explains 90% of the variability in the population data. The cubic model is shown in Figure 12. Solving the exponential model leads to x = ln(150)/−0.022, which has no solution. Both the quadratic and cubic models offer no feasible solutions for predicting the year for population stabilization since they either have complex or real solutions falling outside future time (years). Hence, they cannot be used to predict the year of population stabilization for the given data. The best model was the linear model (Figure 13). The year for stabilization of population is computed from the linear model for % change in population as follows. The linear model is equated to 0 and then solved: −1.5076x + 3073.7 = 0, which gives x ≈ 2039. Hence, the population is expected to stabilize during the time span 2029-2039. The population corresponding to the year 2039 was estimated by extrapolating the polynomial model illustrated in Figure 8 to the year 2039, using the format trendline option of Microsoft Excel. The resulting graph is illustrated in Figure 14.
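A minimal sketch of these projection and stabilization calculations is given below; the census values other than the published 1950 and 2019 figures are placeholders, so the fitted coefficients will differ from those reported in the paper.

```python
import numpy as np

# Census-style input (placeholder values, except 1950 and 2019 which are quoted in the text).
years = np.array([1950, 1960, 1970, 1980, 1990, 2000, 2010, 2019])
pop = np.array([325_218, 760_000, 1_440_000, 2_570_000,
                4_760_000, 7_230_000, 10_440_000, 13_903_620])

# Second-order polynomial (quadratic) model used for projection.
quad = np.polyfit(years, pop, 2)
print(np.polyval(quad, 2039))            # projected population in 2039

# Percentage change per consecutive interval and its linear trend over time.
pct_change = (pop[1:] - pop[:-1]) / pop[:-1] * 100.0
mid_years = (years[1:] + years[:-1]) / 2.0
slope, intercept = np.polyfit(mid_years, pct_change, 1)

# Year at which the linear %-change model reaches zero (population stabilizes).
print(-intercept / slope)
```

The last line mirrors the step in the text where the linear percentage-change model is equated to zero and solved for the stabilization year.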
According to Figure 14, in 2039 the population of Lagos city will be 23,000,000, the stabilization population. The rate of change of population per year based on 10-year intervals rose linearly with respect to time (years) as illustrated in Figure 10. The lowest population growth per year was shown to have occurred during the time span 1950-1960. Interestingly, the highest % change in population also occurred during this time span (see Table 11). This is the only time span found to have a % change in population above 100%. The percentage increase in the city's population was found to be decreasing linearly with respect to time, in years (Figure 10). However, the city's population still rose since the percentage increase was still greater than 0%. The population is expected to stabilize when the percentage change drops to 0%. A factor that may influence the stabilization of a city's population is its urban carrying capacity. This refers to the maximum population able to survive in an urban environment, considering all other factors impacting the city's services and resources for sustainable development [27] [28]. A city's population is influenced by several factors. Among them are space for the construction of residential structures, design of the city, shape of structures, capacity of structures, structural strength of structures, land cover/land use, land slope, hydrology of the city, climate, and weather. Others are politics, quality of life, etc. These factors determine the sustainability and carrying capacity of a city. Some of the listed factors are fixed while others are dynamic. Although the model for population growth suggests continual increase with respect to time (years), the population of Lagos, like other cities, has a limit. The limit is reached when the % change stops (equals zero). According to this study, stabilization is expected when the population reaches 23,000,000. However, the number could be higher if factors affecting its carrying capacity favor the city to hold more residents sustainably. Policy Recommendations and Conclusion The rapid urbanization rate in the area not only created unprecedented consequences by diminishing the quality of the environment but also raised serious implications for land management in the region. Provision of green planning education for city managers, developers, and the public in the state is required. This would go a long way in raising awareness about the dangers of initiating future developments in areas deemed adjacent to sensitive natural habitats known for their ecological services for communities, while familiarizing them with the risks of encroaching on ecologically fragile areas. There is also the need for the use of information technologies in land administration. Although Nigeria has its own space administration with the goal of providing needed data in land management, the land administration has several challenges in the use of information technologies, such as multiple problems arising from a lack of spatial information tools and infrastructure, inadequate training and a lack of coordination between agencies [15] [17]. Use of information technologies will go a long way in helping sustainability and management of land resources.
4,530.4
2020-03-25T00:00:00.000
[ "Environmental Science", "Mathematics" ]
“South African Generation Y students' behavioral intentions to use university websites” University websites are increasingly crucial in meeting the evolving digital demands of students. To effectively manage university websites, it is necessary to first determine students' behavioral usage intentions of university websites and the factors that influence their intentions, which forms the purpose of this study. Data were collected at a single point in time and described the characteristics of the sample. This study, involving 319 Generation Y students registered at two South African university campuses (one traditional and one university of technology campus), utilizes structural equation modeling to explore the predictive relationships among information quality, system quality, playfulness, ease of use, trust, attitude, satisfaction, and behavioral intentions related to university website use. The study underscores the pivotal role of the university's website in shaping student satisfaction, with information quality standing out as a significant positive influence. Additionally, playfulness significantly impacts both satisfaction and overall attitudes toward university websites. The system quality of the university website is also noteworthy, showing a statistically significant positive effect on ease of use and fostering trust among students. Furthermore, satisfaction is predicted by ease of use, creating a cascade effect where satisfaction predicts trust and trust predicts attitudes. Ultimately, students' attitudes emerge as a critical predictor for their behavioral intentions to use university websites. The model exhibits acceptable fit indices, demonstrating substantial explanatory power (SRMR = 0.1, RMSEA = 0.06, IFI = 0.94, TLI = 0.93, CFI = 0.94). These findings offer insights for university management and web designers to enhance online platforms, fostering student satisfaction, trust, and usage.
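As a sketch of how a structural model with these paths could be specified, the snippet below uses the semopy package with simplified construct names; the model description, the data file and its columns (assumed to be composite scores per respondent) are illustrative assumptions, not the study's actual item-level measures or estimation setup.

```python
import pandas as pd
import semopy

# Hypothesized structural paths among the constructs named in the abstract.
model_desc = """
satisfaction ~ information_quality + playfulness + ease_of_use
ease_of_use  ~ system_quality
trust        ~ system_quality + satisfaction
attitude     ~ playfulness + trust
intention    ~ attitude
"""

# 'survey.csv' is a hypothetical file of composite construct scores per respondent.
data = pd.read_csv("survey.csv")

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())            # path estimates and p-values
print(semopy.calc_stats(model))   # fit indices such as CFI, TLI and RMSEA
```

The fit indices reported by such a run are the kind of statistics (CFI, TLI, RMSEA) quoted in the abstract above.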
INTRODUCTION Websites, as a means of establishing an online presence, have become essential for the survival and competitiveness of businesses, including universities (Mentes & Turan, 2012).That is because a website is a costefficient and timely platform to increase a business's market presence and broaden its reach beyond national borders (Ganiyu et al., 2017).As such, a high-quality and efficient website remains one of the universities' strategic priorities (Al-Debei, 2014), especially since it is a primary source of university-related information (Buang et al., 2016).University websites are a valuable and effective tool to convey information to several stakeholders, such as faculty and administrative staff, students, prospective students, and external stakeholders.Because of their flexibility, university websites are used as an avenue to distribute course-related information, information on the services offered, links to the library and support services, and serve as a platform for student queries, applications, and registration, among others (Saichaie & Morphew, 2014).For most prospective students, the university website is the first encounter that they have with the university (Schneider & Bruton, 2004) and a means to evaluate and compare different universities (Anctil, 2008) prior to selecting a university.Since the Covid-19 pandemic, universities have become more reliant on their websites and other online platforms (Bekele, 2021), which stimulated various technological advancements in education and, as a result, shifted the needs and expectations of students (EiffelCorp, 2022).Therefore, it is vital that universities cautiously plan and manage the functionality of their sites and the content displayed on their sites to meet the needs of their target market, namely students (Schneider & Bruton, 2004). LITERATURE REVIEW The current student population forms part of the Generation Y cohort, commonly called Millennials (Severt et al., 2013), who comprise individuals born between 1986 and 2005 (Bibby et al., 2019).Being brought up in the digital era made these individuals the first generation to access various multimedia platforms, devices, and technologies (Schlitzkus et al., 2010).Therefore, they are comfortable with the advancement of technology and are known to be tech-savvy and acquainted with various digital devices (Soyez & Gurtner, 2016) and platforms, including the Internet (Bilgihan, 2016).Consequently, the internet and technology have become an integral part of Generation Y individuals' lives and a key source of information for their purchase decision-making. Given the rising number of Generation Y students enrolled at HEIs in South Africa and the role of online platforms such as websites in these students' daily activities and decision-making, coupled with the role of websites in universities' success and growth, it is paramount to understand South African Generation Y students' intentions to use university websites and the factors that influence their usage intentions. 
The term behavioral intention, as outlined by Fishbein and Ajzen (1975), pertains to an individual's inclination or plan to engage in a particular action.In their theory of reasoned action (TRA), individuals' behavioral intention predicts their actual behavior.Since individuals' actual behavior is susceptible to factors such as sales promotions and behavioral intention, it therefore portrays their true preferences.Day (1969) recommends that an individual's behavioral intention should be measured rather than their behavior.Several international research studies within the context of digital platforms examined the influence of behavioral intention as well as the antecedents thereof. Attitude pertains to an individual's favorable or unfavorable impression of a concept (Hoyer et al., 2013).Within the context of a website, Chen and Wells (1999, p. 28) describe an individual's attitude as a "predisposition to respond favorably or unfavorably to web content in natural exposure situations."An individual's attitude could transform based on life experiences (Himansu, 2009) and the influences of external factors (Hanna & Wozniak, 2001).Under the theory of reasoned action, an individual's attitude toward an object forms the basis of their intention to engage in a particular behavior toward that object (Fishbein & Ajzen, 1975).This means that if an individual has a positive attitude toward a website, the individual is more likely to engage with the website (Limbu et al., 2012).Various digital-related studies have verified the relationship between attitude and usage intention.A study on web-based educational tools reported that students' attitudes toward using those tools affected their intention to continue to use those tools (Yaakop et al., 2020).Another factor that could have a direct or indirect influence on university website usage intentions is trust, which can be described as a person's readiness to be susceptible to another entity's actions (Mayer et al., 1995).Therefore, trust requires the individual to rely on the entity's character or capability.Trust plays a vital role in digital media and technologies, particularly when risk or uncertainty is involved (Shin, 2019).With web-based organizations, an individual has no direct control over the service provider's actions (Muda et al., 2016) and has no or minimal direct contact with the service provider.Consequently, the individual relies on website information to determine the organization's trustworthiness (O'Cass & Carlson, 2012).Trust is formed when the organization's behavior matches the individual's expectations.Within the context of websites, trust will be formed if the information on a website is perceived by a user as reliable, accurate, and credible (Choudhury & Karahanna, 2008).Consequently, higher trust in a website, online service, or digital media requires less effort from users to validate the legitimacy of the information or details of the service (Shin, 2019).As such, trust in a website will result in positive perceptions about the organization and its actions, leading to a positive attitude toward the organization.The influence of trust on attitude has been verified in several studies.It was reported that customers' trust in websites (Limbu et al., 2012) and mobile retail apps (Kaushik et al., 2020) influence their attitude toward that website or app.In a study on blockchain, Shin (2019) found that trust plays a significant role in users' attitudes toward blockchain.Consistent with these findings, this study suggests that 
Generation Y students' trust in university websites influences their attitudes toward these websites. It is also believed that satisfaction with university websites could influence trust in the website.

Satisfaction represents an individual's judgment on whether a product, service, or organization met their expectations (Berbegal-Mirabent et al., 2016). This judgment is derived from discrepancies between an individual's expectations and the actual results. The more aligned the actual results and expectations, the higher the level of satisfaction (Alnaser & Almsafir, 2014). With each interaction an individual has with a product, service, or organization, new experiences and information are gained, which influence the individual's degree of satisfaction with the organization (Casaló et al., 2010). Satisfied individuals have higher intentions to purchase or use products or services and are more likely to make recommendations to other individuals (Ghane et al., 2011). In addition, the literature reveals that satisfied individuals have a higher level of trust in an organization (Fang et al., 2011), since satisfaction gives them confidence that the organization will meet its obligations in the future (Kim et al., 2009), thereby signaling that the organization is trustworthy. Therefore, satisfaction has a direct influence on consumers' trust in an organization (Ghane et al., 2011). As a result, higher satisfaction with a product, service, or organization will lead to greater trust in the organization (Flavián et al., 2006). In terms of mobile banking, Febrian et al. (2021) discovered a direct relationship between users' satisfaction and their trust in mobile banking. Similar findings were reported by Ramadania et al. (2021), who found that users' satisfaction with an online academic service influences their trust in that service. In keeping with these results, this study postulates that Generation Y students' perceived satisfaction with university websites affects their trust in these websites. In addition, the ease of use of university websites could influence satisfaction with university websites.
Ease of use describes the degree to which a user understands the structure of a website, how it functions, its content, and its interface (Casaló et al., 2008). In addition, it refers to the time and the physical and mental effort required from a user to find the relevant information (Davis, 1989). Perceived ease of use is a key determinant of users' acceptance of information technology or systems (Davis, 1989). If a system is perceived to be straightforward, simple, and effortless, it is more likely that a user will form a favorable attitude toward the system (Renny et al., 2013). In addition, a system that is easy to use will likely result in higher user satisfaction. When considering ease of use within a university website context, it includes effortless website interaction and quick downloading of webpages. Several previously published studies across different contexts have reported a correlation between ease of use and user satisfaction. Chong (2021) reported that the ease of use of mobile short video applications directly influences users' satisfaction with the application. Similarly, Alkhateeb and Abdalla (2021) focused on students' satisfaction with university learning management systems and found that the perceived ease of use of the system directly influences students' satisfaction with it. Accordingly, this study theorizes that Generation Y students' perception of the ease of use of university websites directly impacts their perceived satisfaction with these websites. System quality could also impact perceptions of the ease of use of university websites.

System quality refers to a website's overall performance (Gorla et al., 2010) concerning its ability to capture, process, store, and retrieve information (Al-Debei et al., 2013). From the user's point of view, the system quality of a website becomes apparent through its interface and the ease with which users can navigate it. It includes appearance features such as colors, text fonts, graphics, layout (Aladwani & Palvia, 2002), and website security (Ahn et al., 2007). Regarding university websites, system quality is assessed based on the website's functionalities that are controllable by the student. Universities must cater to Generation Y students regarding system functionalities and aesthetics, since these students grew up with technology and have certain expectations regarding websites. By ensuring that a website has adequate link structures and interfaces and can be navigated seamlessly, the website will be more user-friendly and, ultimately, enhance users' experience and trust (Kuan et al., 2008). Based on the literature, system quality positively influences the ease of use of information systems (Zhou, 2011) and users' trust in information systems (Van Deventer et al., 2017). Zhou and Zhang (2009) reported that system quality positively influenced the respondents' trust in e-commerce websites. Van Deventer et al. (2017) reported similar findings: perceived system quality directly and positively influences customers' trust in mobile banking. Consistent with these results, this study hypothesizes that university website system quality influences Generation Y students' perception of the ease of use of these websites and the trust Generation Y students have in these websites. Like system quality, information quality could also directly or indirectly impact university website usage intentions.
Information quality represents individuals' subjective evaluations and judgments regarding the quality of information displayed online (Yang et al., 2005). A website's information quality is vital to its success, since it is often a deciding factor when selecting from several service providers or products (Kuan et al., 2008). As such, good quality information on a website will likely attract new customers and assist in retaining existing customers. Therefore, businesses must take caution when deciding on the information to be displayed on their website (Rahimnia & Hassanzadeh, 2013). Detail, timeliness, accuracy, and reliability are critical factors in information quality (Ahn et al., 2007). These factors significantly impact the overall quality and effectiveness of the information provided. Detail refers to the level of comprehensiveness or completeness of the information provided. Accuracy pertains to the degree of correctness or precision of the information. Timeliness indicates how recent or up-to-date the information is, reflecting its relevance within a specific timeframe. Reliability relates to whether the information is consistent and dependable (Yang et al., 2005). Within the context of university websites, high information quality signifies that the information related to the academic calendar, faculty details, and publications is comprehensive, accurate, up-to-date, and reliable (Al-Debei, 2014), and that the modules, courses, and other teaching and learning content are relevant, up-to-date, and accurate. The literature indicates that information quality predicts customer satisfaction. A study conducted by Dirgantari et al. (2020) on e-commerce customer satisfaction reported that information quality significantly influences satisfaction. Similarly, investigating users' satisfaction with online learning systems, Nuryanti et al. (2021) found that information quality significantly contributed to users' satisfaction with the learning system. Based on the results of these studies, this study posits that the information quality of university websites directly impacts the perceived satisfaction of Generation Y students.
Another critical factor to consider in predicting university website usage intentions is playfulness. Playfulness can broadly be described as the level of enjoyment experienced by a user (Padilla-Meléndez et al., 2013). Moon and Kim (2001) explain that perceived playfulness is captured in three variables, namely concentration, curiosity, and enjoyment. Concentration refers to the degree to which a user is focused on an interaction. In contrast, curiosity refers to the inquisitiveness of the user, and enjoyment denotes the level of fun that the user experiences during the interaction. Playfulness results from a user's interaction with a situation and, therefore, is described by Serenko and Turel (2007) as a personal, irrational, and spontaneous activity. When an individual experiences a pleasant feeling during an interaction, this feeling will create a sense of playfulness and likely result in a positive attitude toward the interaction. Within a website context, users' perceived playfulness of a website will depend on their experience with the website, where a positive experience will likely lead to a favorable attitude toward the website. The correlations between playfulness and satisfaction, as well as attitude, have been confirmed in various studies. For example, it was reported that playfulness positively influences users' satisfaction with an e-book application (Liu et al., 2021). Moreover, it was found that playfulness influences students' attitudes toward participating in online education (Wang et al., 2021) and their attitudes toward brain-computer interface games (Wang et al., 2023). In keeping with these findings, university website playfulness may predict Generation Y students' perception of satisfaction with these websites and influence the attitudes Generation Y students display toward these websites.

While previous research has been conducted to determine individuals' behavioral intention to use various digital platforms, limited research is available on students' behavioral intention to use university websites. Moreover, it is unclear what factors drive their intention to use university websites. This study seeks to provide insight into the perceived value of university websites and guidance concerning the factors in which universities should invest their time, effort, and resources to increase the usage of their websites. Therefore, the objective of this paper is to assess how information quality, playfulness, system quality, ease of use, satisfaction, trust, and attitude toward university websites impact the intention of Generation Y students to utilize these websites. As per the literature and in a university website context, the following hypotheses are formulated:

H1: Information quality positively influences satisfaction.

H2: Playfulness positively influences satisfaction.

H3: Playfulness positively influences attitude.

H4: System quality positively influences ease of use.

H5: System quality positively influences trust.

H6: Ease of use positively influences satisfaction.

H7: Satisfaction positively influences trust.

H8: Trust positively influences attitude.

H9: Attitude positively influences behavioral intention.

METHOD The study targeted students from the Generation Y age group (between 18 and 24 years old) enrolled at two South African higher education institutions (HEIs). The study used judgment sampling to select two campuses (a traditional university campus and a university of technology campus) in Gauteng from the 26 HEIs available. To gather data, 200 questionnaires were distributed at each HEI using the mall-intercept survey method to a convenience sample of voluntarily participating students.
The study used a two-section self-administered questionnaire to collect data from the sample. The first section gathered the demographic information of the participants, while the second section employed adapted scales from prior studies to assess the factors of information quality, playfulness, system quality, ease of use, satisfaction, trust, attitude, and behavioral intention.

RESULTS Four hundred self-administered questionnaires were handed out, and 319 of them met the study's population specifications and were considered complete for data analysis. Therefore, the study achieved a response rate of around 80%. The sample mainly consisted of individuals aged 20, followed by those aged 18. Furthermore, more women than men participated, with the majority African. Based on the year of study, first-year students constituted the majority of participants. Table 1 presents a breakdown of the sample's demographics.

The statistics used to describe the data, measure internal consistency, and assess relationships between factors were computed for each latent factor. These included Cronbach's alpha values for internal consistency and Pearson's product-moment correlation coefficients for relationships. The results for the summary statistics and correlations can be found in Table 2. The mean value of each latent factor was 3.5 or above on the six-point Likert scale, indicating that the South African Generation Y students participating in the study view the quality of their university websites and the information provided as good. Additionally, the students find their university websites enjoyable and user-friendly and exhibit trust, positive attitudes, and satisfaction toward their university websites. They also intend to continue using their university websites. Each latent factor's internal consistency reliability is supported by Cronbach's alpha (α) values above the advised level of 0.70 (Malhotra, 2020).

Significant positive correlations (p ≤ 0.01) were observed between all latent factor pairs, indicating nomological validity (Malhotra, 2020). The strongest association was observed between the satisfaction and trust latent factors, with an r-value of 0.74. Additionally, none of the correlation coefficients exceeded the recommended threshold of 0.90, indicating no apparent issues of multicollinearity (Pallant, 2020). Consequently, a measurement model was proposed.

Confirmatory factor analysis utilized the eight-factor measurement model. To evaluate the construct validity and composite reliability of the measurement model, composite reliability (CR), average variance extracted (AVE), and heterotrait-monotrait (HTMT) values were computed. The estimates for the standardized loadings, error variances, CR, AVE, and HTMT values are reported in Table 3. Table 3 reveals no problematic estimates. All latent factors displayed a CR value greater than 0.70, which indicates composite reliability (Malhotra, 2020). Additionally, there is evidence of convergent validity, since all AVEs and standardized loadings surpassed the 0.50 cut-off level (Hair et al., 2014). Each factor pair's HTMT values were below the 0.90 threshold, indicating discriminant validity, as suggested by Henseler et al. (2015). According to Malhotra (2020), the construct validity of the measures was supported by the convergent and discriminant validity, as well as the nomological validity confirmed in Table 2.
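To make the reliability and validity checks above concrete, the following minimal sketch shows how composite reliability and average variance extracted are typically computed from standardized loadings. The loading values below are hypothetical illustrations, not the study's data.

```python
import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # with error variance taken as 1 - loading^2 for standardized loadings
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam**2
    return lam.sum()**2 / (lam.sum()**2 + errors.sum())

def average_variance_extracted(loadings):
    # AVE is the mean squared standardized loading of the factor's items
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam**2))

# Hypothetical standardized loadings for a single latent factor
loadings = [0.78, 0.81, 0.74, 0.69]
print(f"CR  = {composite_reliability(loadings):.3f}")       # should exceed 0.70
print(f"AVE = {average_variance_extracted(loadings):.3f}")  # should exceed 0.50
```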
Various statistical measures were employed to evaluate the adequacy of the model fit. An acceptable model fit is indicated by an incremental fit index (IFI), Tucker-Lewis index (TLI), and comparative fit index (CFI) value exceeding 0.9, a root mean square error of approximation (RMSEA) value of 0.08 or less (Malhotra, 2020), and a standardized root mean residual (SRMR) value below 0.1 (Hair et al., 2014). All the measurement model fit indices indicate a suitable fit.

After confirming the satisfactory reliability, validity, and fit of the measurement model, a structural model was tested. The objective of the proposed structural model is to examine the predictive relationships between the factors. Information quality is found to predict students' satisfaction with university websites. Playfulness is also found to influence both student satisfaction with university websites and their attitudes toward them. The quality of the university website's system has a statistically significant and positive impact on the university website's ease of use and students' trust in university websites. The students' satisfaction with university websites is significantly predicted by ease of use. Satisfaction, in turn, predicts trust in university websites, and trust subsequently predicts students' attitudes toward university websites. Ultimately, students' attitudes predict their behavioral intention to use university websites. Therefore, all hypotheses are supported. According to the coefficients for squared multiple correlation (SMC), the model explains a substantial proportion of variance in each factor, ranging from 45% for ease of use to 80% for attitude, with 63% for behavioral intention. Figure 1 depicts the structural model. The goodness of fit of the structural model was assessed based on various model fit indices. Although the model's chi-square statistic was significant [528.46 (df = 236, p < 0.001)], the model demonstrated acceptable fit, as evidenced by the following indices: SRMR = 0.1, RMSEA = 0.06, IFI = 0.94, TLI = 0.93, and CFI = 0.94.

DISCUSSION The study identified information and system quality, playfulness, ease of use, trust, attitude, and satisfaction with university websites as predictors of students' intention to use these websites. These findings are consistent with those of previous studies. For example, the influence of information quality on customer satisfaction was confirmed in an e-commerce study (Dirgantari et al., 2020), while playfulness was also found to have an influence on customer satisfaction (Liu et al., 2021) and attitude (Wang et al., 2021). In another study, system quality was found to be a predictor of ease of use (Zhou, 2011). Additionally, the study revealed that these factors can be reliably used to create a structural model that accurately predicts students' behavioral intention to use university websites.
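As a quick illustration of the fit-assessment logic described above, the sketch below checks the reported indices against the cited cut-offs. Note that the reported SRMR of 0.1 sits exactly at the boundary of the Hair et al. (2014) criterion, so it is treated as acceptable here.

```python
# Reported structural-model indices from the text, checked against the cited
# cut-offs (IFI/TLI/CFI > 0.9, RMSEA <= 0.08, SRMR at or below 0.1)
fit = {"IFI": 0.94, "TLI": 0.93, "CFI": 0.94, "RMSEA": 0.06, "SRMR": 0.10}

acceptable = (all(fit[k] > 0.9 for k in ("IFI", "TLI", "CFI"))
              and fit["RMSEA"] <= 0.08
              and fit["SRMR"] <= 0.10)
print("acceptable fit" if acceptable else "poor fit")
```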
Based on the study's findings, university management can implement various strategies to positively influence students' behavioral intention to use university websites. Universities are encouraged to improve the quality of information on university websites by ensuring that the information available is accurate, relevant, and up-to-date to increase students' trust and satisfaction with the website. Universities must enhance the playfulness of the website by adding interactive elements or gamification features. In doing so, universities can make the experience more enjoyable and engaging for students, which may increase their intention to use the website. Moreover, university management must keep system quality and ease of use in mind to ensure that university websites have a user-friendly interface and a technical infrastructure that will warrant easy navigation and quick access to information. In addition, universities should foster trust through transparency and security measures. University websites should be transparent about their privacy policies and security measures to reassure students about the safety of their personal information. Lastly, university management should attempt to promote positive attitudes toward the website. University administrators and faculty members can actively promote university website use and emphasize its benefits, such as easier access to course materials, online registration, and academic resources.

CONCLUSION This study aimed to identify the factors that predict the behavioral intention of Generation Y university students to use university websites. The results showed that several factors, including information and system quality, playfulness, ease of use, trust, attitude, and satisfaction with university websites, significantly influenced students' intention to use these websites. By identifying the predictors of students' behavioral intention to use university websites, universities can formulate effective strategies to enhance the website's quality, usability, and usefulness, increasing student satisfaction. By doing so, universities can recover the costs associated with website development and maintenance and increase the number of users and the website's value. This highlights the importance of understanding the factors that influence students' intention to use university websites to ensure a positive online learning experience for students and a good return on investment for universities.

Table 3. Measurement model statistics. Table 4. Path analysis.
5,626.6
2023-11-13T00:00:00.000
[ "Computer Science", "Education", "Sociology" ]
QGP universality in a magnetic field? We use top-down holographic models to study the thermal equation of state of strongly coupled quark-gluon plasma in an external magnetic field. We identify different conformal and non-conformal theories within consistent truncations of $\mathcal{N} = 8$ gauged supergravity in five dimensions (including STU models and the gauged $\mathcal{N} = 2^*$ theory) and show that the ratio of the transverse to the longitudinal pressure $P_T/P_L$ as a function of $T/\sqrt{B}$ can be collapsed to a 'universal' curve for a wide range of the adjoint hypermultiplet masses m. We stress that this does not imply any hidden universality in magnetoresponse, as other observables do not exhibit any universality. Instead, the observed collapse in $P_T/P_L$ is simply due to a strong dependence of the equation of state on the (freely adjustable) renormalization scale: in other words, it is simply a fitting artifact. Remarkably, we do uncover a different universality in $\mathcal{N} = 2^*$ gauge theory in the external magnetic field: we show that magnetized $\mathcal{N} = 2^*$ plasma has a critical point at $T_{crit}/\sqrt{B}$ whose value varies by 2% (or less) as $m/\sqrt{B} \in [0, \infty)$. At criticality, and for large values of $m/\sqrt{B}$, the effective central charge of the theory scales as $\propto \sqrt{B}/m$.
Introduction and summary In [1] the authors used the recent lattice QCD equation of state (EOS) data in the presence of a background magnetic field [2,3], and the holographic EOS results for the strongly coupled $\mathcal{N} = 4$ SU(N) maximally supersymmetric Yang-Mills (SYM) theory, to argue for a universal magnetoresponse. While $\mathcal{N} = 4$ SYM is conformal, the scale invariance is explicitly broken by the background magnetic field B, and its thermal equilibrium stress-energy tensor is logarithmically sensitive to the choice of the renormalization scale. It was shown in [1] that both the QCD and the $\mathcal{N} = 4$ data (with optimally adjusted renormalization scale) for the pressure anisotropy R, defined as the ratio of the transverse pressure $P_T$ to the longitudinal pressure $P_L$, $R = P_T/P_L$ (1.1), collapse onto a single universal curve as a function of $T/\sqrt{B}$, at least for $T/\sqrt{B} \gtrsim 0.2$, or correspondingly for $R \gtrsim 0.5$; see figure 6 of [1]. The authors do mention that the 'universality' is somewhat fragile: besides the obvious fact that large-N $\mathcal{N} = 4$ SYM is not QCD (leading to inherent ambiguities as to how precisely one would match the renormalization schemes in both theories, hence the authors opted for the freely adjustable renormalization scale in SYM), one observes the universality in R, but not in other thermodynamic quantities (e.g., $P_T/E$, the ratio of the transverse pressure to the energy density). So, is there a universal magnetoresponse?

In this paper we address this question in a controlled setting: specifically, we consider holographic models of the gauge theory/string theory correspondence [5,6] where all the four-dimensional strongly coupled gauge theories discussed have the same ultraviolet fixed point, $\mathcal{N} = 4$ SYM. We discuss two classes of theories:

• conformal gauge theories corresponding to different consistent truncations of $\mathcal{N} = 8$ gauged supergravity in five dimensions [7];

• the non-conformal $\mathcal{N} = 2^*$ gauge theory ($\mathcal{N} = 4$ SYM with a mass term for the $\mathcal{N} = 2$ hypermultiplet) [7-9] (PW).

In the former case, the anisotropic thermal equilibrium states are characterized by the temperature T, the background magnetic field B, and the renormalization scale μ; in the latter case, we have additionally a hypermultiplet mass scale m. Before we present results, we characterize more precisely the models studied.

CFT_diag: $\mathcal{N} = 4$ SYM has a global SU(4) R-symmetry. In this model the magnetic field is turned on for the diagonal U(1) of the R-symmetry. This is the model of [1], see also [4]. See section 2.1 for the technical details.

CFT_STU: Holographic duals of $\mathcal{N} = 4$ SYM with $U(1)^3 \subset SU(4)$ global symmetry are known as STU models [10,11]. In this conformal theory the background magnetic field is turned on for one of the U(1)'s. This model is a consistent truncation of $\mathcal{N} = 8$ five-dimensional gauged supergravity with two scalar fields dual to two dimension Δ = 2 operators. As we show in section 2.2, in the presence of the background magnetic field these operators will develop thermal expectation values.

nCFT_m: As we show in section 2.3, within the consistent truncation of $\mathcal{N} = 8$ five-dimensional gauged supergravity presented in [7], it is possible to identify a holographic dual to the $\mathcal{N} = 2^*$ gauge theory with a single U(1) global symmetry. In this model the background magnetic field is turned on in this U(1). The label m ∈ (0, +∞) denotes the hypermultiplet mass of the $\mathcal{N} = 2^*$ gauge theory.
CFT_PW,m=0: This conformal gauge theory is a limiting case of the non-conformal nCFT_m model: its bulk gravitational dual contains two scalar fields dual to dimension Δ = 2 and Δ = 3 operators of the $\mathcal{N} = 2^*$ gauge theory. As we show in section 2.3.1, in the presence of the background magnetic field these operators will develop thermal expectation values.

CFT_PW,m=∞: This conformal gauge theory is a limiting case of the non-conformal nCFT_m model: its holographic dual can be obtained from the $\mathcal{N} = 8$ five-dimensional gauged supergravity of [7] using the "near horizon limit" of [12] (see appendix D of [13] for details of the isotropic, i.e., no magnetic field, thermal states of $\mathcal{N} = 2^*$ plasma in the limit m/T → ∞; the first hint that $\mathcal{N} = 2^*$ plasma in the infinite mass limit is an effective five-dimensional CFT appeared in [14]), followed by the uplift to six dimensions. The resulting holographic dual is Romans F(4) gauged supergravity in six dimensions [15,16] (see [17] for a recent discussion). The six-dimensional gravitational bulk contains a single scalar, dual to a dimension Δ = 3 operator of the effective CFT_5. There is no conformal anomaly in odd dimensions. Furthermore, there is no invariant dimension-five operator that can be constructed only with the magnetic field strength; as a result, the anisotropic stress-energy tensor of CFT_PW,m=∞ plasma is traceless, and is free from renormalization scheme ambiguities. Details on the CFT_PW,m=∞ model are presented in section 2.3.2. The renormalization scheme-independence of CFT_PW,m=∞ is a welcome feature: we will use the pressure anisotropy (1.1) of the theory as a benchmark to compare with the other conformal and non-conformal models.

And now the results. There is no universal magnetoresponse. Qualitatively, among the conformal/non-conformal models we observe three different IR regimes (i.e., when $T/\sqrt{B}$ is small):

In CFT_diag it is possible to reach the deep IR, i.e., the $T/\sqrt{B} \to 0$ limit. For $T/\sqrt{B} \lesssim 0.1$ the thermodynamics is BTZ-like, with an entropy density that grows linearly in T at fixed B (1.2), see [4] (we independently reproduce this result).

Both in CFT_PW,m=0 and CFT_PW,m=∞ (and in fact in all nCFT_m models) there is a terminal critical temperature $T_{crit}$ which separates thermodynamically stable and unstable phases of the anisotropic plasma. Remarkably, this $T_{crit}$ is universally determined by the magnetic field B, (almost) independently of the mass parameter m of nCFT_m (a very weak dependence on the mass parameter has also been observed for the equilibration rates in $\mathcal{N} = 2^*$ isotropic plasma in [18]): the variation of $T_{crit}/\sqrt{B}$ with mass about its mean value is 2% or less, see figure 7 (left panel). We leave the extensive study of this critical point to future work, and only point out that the specific heat at constant B at criticality has a critical exponent α = 1/2 (the critical point with the same mean-field exponent α has been observed in the isotropic thermodynamics of $\mathcal{N} = 2^*$ plasma with different masses for the bosonic and fermionic components of the hypermultiplet [19]): $c_B = -T\,\partial^2 F/\partial T^2|_B \propto (T - T_{crit})^{-\alpha}$ (1.3), where F is the free energy density, see figure 6.

The CFT_STU model in the IR is different from the other ones. We obtained reliable numerical results in this model for $T/\sqrt{B} \gtrsim 0.06$: we neither observe the critical point as in the CFT_PW,m=0 and CFT_PW,m=∞ models, nor the BTZ-like behavior (1.2) as in the CFT_diag model, see figure 3 (left panel).

In figure 1 we present the pressure anisotropy parameter R (1.1) for the conformal theories: CFT_diag (black curves), CFT_STU (blue curves), CFT_PW,m=0 (green curves) and CFT_PW,m=∞ (red curves) as a function of $T/\sqrt{B}$ (we use the same normalization of the magnetic field in the holographic models as in [1]).
R is renormalization scheme independent in the CFT_PW,m=∞ model, while in the former three conformal models it is sensitive to $\delta = \ln(B/\mu^2)$, where μ is the renormalization scale. We performed a high-temperature perturbative analysis, i.e., for $T/\sqrt{B} \gg 1$, to ensure that the definition of δ is consistent across all the conformal models sensitive to it, see appendix B. In the {top left, top right, bottom left, bottom right} panel of figure 1 we set {δ = 4, δ = 2.5, δ = 3.5, δ = 7} (correspondingly) for $R_{CFT_{diag}}$, $R_{CFT_{STU}}$ and $R_{CFT_{PW,m=0}}$. Notice that while all the curves exhibit the same high-temperature asymptotics, the anisotropy parameter R is quite sensitive to δ; in fact, $R_{CFT_{diag}}$ diverges for δ = 2.5 (because $P_L$ crosses zero with $P_T$ remaining finite). Varying δ, it is easy to arrange $R_{CFT_{diag}}$, $R_{CFT_{STU}}$ and $R_{CFT_{PW,m=0}}$ in the IR to be "to the left" of the scheme-independent (red) curve $R_{CFT_{PW,m=\infty}}$ (top panels and the bottom left panel), or "to the right" of the scheme-independent (red) curve (the bottom right panel).

In figure 1 we kept δ the same for the conformal models CFT_diag, CFT_STU and CFT_PW,m=0. This is very reasonable given that one can match δ across all the models by comparing the UV, i.e., $T/\sqrt{B} \gg 1$, thermodynamics (see appendix B): there are no other scales besides T and B, and thus δ is fixed by dimensional analysis (the asymptotic AdS_5 radius L always scales out from the final formulas). If we give up on maintaining the same renormalization scale for all the conformal models, it is easy to 'collapse' all the curves for the pressure anisotropy, see figure 2. We will not perform sophisticated fits as in [1]; instead, adjusting δ independently for each model, we require that in all models the pressure anisotropy R = 0.5 is attained at the same value of $T/\sqrt{B}$ (represented by the dashed brown lines), eq. (1.6). We find that (1.6) holds provided δ takes the model-dependent values recorded in (1.7).

Figure 1. Anisotropy parameter $R = P_T/P_L$ for the conformal models CFT_diag (black curves), CFT_STU (blue curves), CFT_PW,m=0 (green curves) and CFT_PW,m=∞ (red curves) as a function of $T/\sqrt{B}$. $R_{CFT_{PW,m=\infty}}$ is renormalization scheme independent; for the other models there is a strong dependence on the renormalization scale $\delta = \ln(B/\mu^2)$: different panels represent different choices of δ; all the models in the same panel have the same value of δ, leading to identical high-temperature asymptotics, $T/\sqrt{B} \gg 1$.

In a nutshell, this is what was done in [1] to claim a universal magnetoresponse for $R \gtrsim 0.5$. Rather, we interpret the collapse in figure 2 as nothing but a fitting artifact, made possible by the strong dependence of the anisotropy parameter R on the renormalization scale. To further see that there is no universal physics, we can compare renormalization scheme-independent anisotropic thermodynamic quantities of the models: the entropy densities, see figure 3. The color coding is as before: CFT_diag (black curves), CFT_STU (blue curves), CFT_PW,m=0 (green curves) and CFT_PW,m=∞ (red curves). We plot the entropy densities relative to the entropy density of the UV fixed point at the corresponding temperature (see eq.
(D.13) for the CFT_PW,m=∞ model in [13]); the normalization $s_{UV}$ is defined in (1.8). The dashed vertical lines in the left panel indicate the terminal (critical) temperature $T_{crit}/\sqrt{B}$ for the CFT_PW,m=0 (green) and CFT_PW,m=∞ (red) models, which separates the thermodynamically stable (top) and unstable (bottom) branches. Notice that $s/s_{UV}$ diverges for the CFT_diag model as $T/\sqrt{B} \to 0$; this is a reflection of the IR BTZ-like thermodynamics (1.2); the dashed black line is the IR asymptote (1.9).

In nCFT_m models it is equally easy to 'collapse' the data for the pressure anisotropy. In these models we have an additional scale m, the mass of the $\mathcal{N} = 2$ hypermultiplet. In the absence of the magnetic field, i.e., for isotropic $\mathcal{N} = 2^*$ plasma, the thermodynamics is renormalization scheme-independent [20] (scheme-dependence arises once we split the masses of the fermionic and bosonic components of the $\mathcal{N} = 2^*$ hypermultiplet [20]). Once we turn on the magnetic field, there is a scheme-dependence. In figure 4 we show the pressure anisotropy for $\mathcal{N} = 2^*$ gauge theories with the masses listed in (1.10). The dashed red curve represents the anisotropy parameter of the conformal CFT_PW,m=∞ model, which is renormalization scheme-independent. In the left panel the renormalization scale is δ = 4 for all the nCFT_m models. In the right panel, we adjusted δ = δ_m for each nCFT_m model independently, so that the pressure anisotropy $R_{nCFT_m} = 0.5$ is attained at the same temperature as in the CFT_PW,m=∞ model, see (1.6). This matching point is denoted by the dashed brown lines.

As in the conformal models, the entropy densities (which are renormalization scheme independent thermodynamic quantities) are rather distinct, see the left panel of figure 5. The color coding is as in figure 4, except that we collected more data in addition to (1.10), in order to have a better characterization of the critical points: these are the dashed and dotted curves. The entropy density of the UV fixed point is defined as in (1.8). All the nCFT_m models studied, as well as the CFT_PW,m=0 and CFT_PW,m=∞ conformal models, have a terminal critical point $T_{crit}$ that separates the thermodynamically stable ("top", solid) and unstable ("bottom", dashed) branches; in the right panel we show this for the $m/\sqrt{2B} = 1$ nCFT_m model. The dashed brown lines identify the critical temperature $T_{crit}$ and the entropy density $s_{crit}$ at criticality (these quantities are presented in figure 7).

In figure 6 we present results for the specific heat $c_B$ in this model, defined as in (1.3). Indeed, the (lower) thermodynamically unstable branch has a negative specific heat (left panel); approaching the critical temperature from above, we observe the divergence of the specific heat, both for the stable and the unstable branches. To extract the critical exponent α, defined as in (1.3), i.e., $c_B \propto (T - T_{crit})^{-\alpha}$ (1.11), we plot (right panel) the dimensionless quantity $s^2/c_B^2$ as a function of $T/\sqrt{B}$. Both the stable (solid) and the unstable (dashed) curves approach zero at the critical temperature (vertical dashed brown line), signaling the divergence of the specific heat, and they do so with a finite slope; since $s^2/c_B^2 \propto (T - T_{crit})^{2\alpha}$, this implies that the critical exponent is α = 1/2 (1.12).

There is a remarkable universality of the critical points in the nCFT_m and the conformal CFT_PW,m=0 and CFT_PW,m=∞ models.
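The mean-field reading of the specific-heat data can be illustrated with a short numerical sketch: if $c_B$ diverges as $(T - T_{crit})^{-\alpha}$ with α = 1/2 while s stays finite, then $s^2/c_B^2$ vanishes linearly in $T - T_{crit}$. The numbers below are synthetic, chosen only to mimic this behavior; they are not the paper's numerics.

```python
import numpy as np

Tc = 0.27                       # hypothetical critical temperature (units of sqrt(B))
T = Tc + np.linspace(1e-4, 5e-3, 50)
s = 1.0 + 0.3 * (T - Tc)        # entropy density stays finite at Tc
cB = 0.8 * (T - Tc) ** (-0.5)   # diverging specific heat with alpha = 1/2

y = s**2 / cB**2                # should vanish linearly, with a finite slope
slope = np.polyfit(T - Tc, y, 1)[0]
# log-log fit: y ~ (T - Tc)^(2*alpha)  =>  alpha = (slope of log y vs log(T - Tc)) / 2
alpha = 0.5 * np.polyfit(np.log(T - Tc), np.log(y), 1)[0]
print(f"linear slope = {slope:.3f}, extracted alpha = {alpha:.3f}")  # alpha ~ 0.5
```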
In figure 7 (left panel) we present the results for the critical temperature as a function of $m/\sqrt{2B}$ in nCFT_m models (points). The horizontal dashed lines indicate the location of the critical points for the CFT_PW,m=0 (green) and CFT_PW,m=∞ (red) conformal models. In the right panel the dots represent the relative entropy at criticality, together with its large-mass asymptote (1.13)-(1.14). One can understand the origin of the asymptote (1.13) from the fact that nCFT_m models in the large-m limit should resemble the conformal model CFT_PW,m=∞; thus, we expect that $\gamma_\infty \approx \gamma_{CFT_{PW,m=\infty}}$. Indeed, this is borne out: we extracted numerically the value of $s_{crit}/s_{UV}$ for the CFT_PW,m=∞ conformal model, used (1.8) to analytically compute the second factor in the first line, and substituted the numerical value for $T_{crit}/\sqrt{B}$ of the CFT_PW,m=∞ model in the second line.

We now outline the rest of the paper, containing the technical details necessary to obtain the results reported above. In section 2 we introduce the holographic theory of [7] and explain how the various models discussed here arise as consistent truncations of the latter: CFT_diag in section 2.1, CFT_STU in section 2.2, and nCFT_m in section 2.3. The conformal models CFT_PW,m=0 and CFT_PW,m=∞ are special limits of the nCFT_m model and are discussed in sections 2.3.1 and 2.3.2 correspondingly. Holographic renormalization is by now a standard technique [21], and we only present the results for the boundary gauge theory observables.

Our work is heavily numerical. It is thus important to validate the numerical results in the limits where perturbative computations (analytical or numerical) are available. We have performed such validations in appendix B, i.e., for $T/\sqrt{B} \gg 1$. We did not want to overburden the reader with details, and so we did not present the checks of the agreement of the numerical parameters (e.g., as in (2.23)) with the corresponding perturbative counterparts, but we have performed such checks in all models. There are further important constraints on the numerically obtained energy density, pressure, entropy, etc., of the anisotropic plasma: the first law of thermodynamics, dE = T ds (at constant magnetic field and mass parameter, if available), and the thermodynamic relation between the free energy density and the longitudinal pressure, $F = -P_L$. The latter relation can be proved (see appendix A) at the level of the equations of motion, borrowing the holographic arguments of [22] used to establish the universality of the shear viscosity to entropy density ratio in holographic plasma models. Still, like the first law of thermodynamics, it provides an important consistency check on the numerical data; we verified these constraints in all the models, both perturbatively in the high-temperature limit, to order $\mathcal{O}(B^4/T^8)$ inclusive (see appendix B), and for finite values of $\sqrt{B}/T$ (see appendix C). Once again, we present only partial results of the full checks.

Our paper is a step in broadening the class of strongly coupled magnetized gauge theory plasmas (both conformal and massive) amenable to controlled holographic analysis. We focused on the equation of state, extending the work of [1]. The next step is to analyze the magneto-transport in these models, in particular the magneto-transport at criticality.

Technical details The starting point for the holographic analysis is the effective action (2.1) of [7], where the $F^{(J)}$ are the field strengths of the U(1) gauge fields $A^{(J)}$, and P is the scalar potential.
We introduced the scalar fields and field strengths entering (2.1) in (2.2); the scalar potential P is given in terms of a superpotential in (2.3). In what follows we set the gauged supergravity coupling g = 1; this corresponds to setting the asymptotic AdS_5 radius to L = 2. The five-dimensional gravitational constant $G_5$ is related to the rank N of the supersymmetric $\mathcal{N} = 4$ SU(N) UV fixed point in the standard fashion. The models discussed below, i.e., CFT_diag, CFT_STU and nCFT_m, have holographic duals which are consistent truncations of (2.1). It would be interesting to study the stability of these truncations following [23].

The holographic dual to the CFT_diag conformal model is a consistent truncation of (2.1), leading to the action (2.6), where we used the normalization of the bulk U(1) to be consistent with [1]. This model has been extensively studied in [1,4] and we do not review it here.

CFT_STU The holographic dual to CFT_STU is a special case of the STU model [10,11,24], a consistent truncation of the effective action (2.1), leading to the action (2.9) and the scalar potential (2.10). We would like to keep a single bulk gauge field, so we can set two of them to zero and work with the remaining one. The symmetries of the action allow us to choose whichever gauge field we want: notice that the action (2.9) is invariant under setting, e.g., $F^{(2)}_{\mu\nu} = 0$ for the gauge fields, combined with the scalar field redefinitions $\rho \to \nu^{1/2}\rho^{-1/2}$ and $\nu \to \nu^{1/2}\rho^{1/2}$. Thus, we arrive at the holographic dual of CFT_STU, eq. (2.11), where once again we used the normalization of the remaining gauge field as in [1].

Solutions to the gravitational theory (2.11) representing magnetic black branes, dual to anisotropic magnetized CFT_STU plasma, correspond to the background ansatz (2.12) (note that we fixed the radial coordinate r with the choice of the metric warp factor in front of dz²). As shown in appendix B.1, the renormalization scheme choice (2.28) fixes the asymptotics of the equation of state in the high-temperature limit, $T^2 \gg B$. We cannot solve the equations (2.16)-(2.20) analytically; adapting the numerical techniques developed in [25], we solve these equations (subject to the asymptotics (2.21) and (2.22)) numerically. The results of the numerical analysis are data files assembled of the parameters (2.23), labeled by b.

It is important to validate the numerical data (in addition to the standard error analysis). There are two important constraints that we verified for CFT_STU (and in fact for all the other models):

• The first law of thermodynamics (FL), dE = T ds (with B kept fixed), leads to the differential constraint (2.30) on the data sets (2.23) (here ' = d/db).

• The anisotropy introduced by the external magnetic field results in $P_T \ne P_L$. From elementary anisotropic thermodynamics (see [1] for a recent review), the free energy density of the system is F = E - Ts, and it satisfies the thermodynamic relation (TR) F = -P_L, eq. (2.31). We emphasize that holographic renormalization (even the anisotropic one) naturally enforces (2.27) (see [26] for one of the first demonstrations), but not (2.31). In appendix A we present a holographic proof of (2.31); the proof follows the same steps as in the first proof of the universality of the shear viscosity to entropy density ratio in holography [22]. Additionally, as in the nCFT_m model with $m/\sqrt{2B} = 1$ (see appendix C), we checked both relations for finite b.

The technical details presented here are enough to generate the CFT_STU model plots reported in section 1.

nCFT_m As in the CFT_STU model, $r_0$ is completely scaled out from all the equations of motion. Eqs.
(2.36)-(2.40) have to be solved subject to the following asymptotics: in the UV, i.e., as $x \to 0^+$, the expansion (2.41); in the IR, i.e., as $y \equiv 1 - x \to 0^+$, the expansion (2.42). The non-normalizable coefficients $\alpha_{1,1}$ (of the dimension Δ = 2 operator) and $\chi_0$ (of the dimension Δ = 3 operator) are related to the masses of the bosonic and the fermionic components of the hypermultiplet of the $\mathcal{N} = 2^*$ gauge theory; when both masses are the same, they are related as in (2.44) (see [20]). Furthermore, carefully matching to the extremal PW solution [8,9] (following the same procedure as in [20]), we relate the parameter b to the hypermultiplet mass m, and we find it convenient to use the mass label η, defined in (2.45), to distinguish the different mass parameters in nCFT_m models, see (1.10). In total, given η and b, the asymptotic expansions are specified by 8 parameters, {$a_{2,2,0}$, $a_{4,2,0}$, $\alpha_{1,0}$, $\chi_{1,0}$, $a_{1,h,0}$, $a_{2,h,0}$, $r_{h,0}$, $c_{h,0}$} (2.46), which is the correct number of parameters necessary to provide a solution to a system of three second-order and two first-order equations, 3 × 2 + 2 × 1 = 8. The parameters $\alpha_{1,0}$ and $\chi_{1,0}$ correspond to the expectation values of the dimension Δ = 2 ($\mathcal{O}_2$) and Δ = 3 ($\mathcal{O}_3$) operators (correspondingly) of the boundary nCFT_m; the other two parameters, $a_{2,2,0}$ and $a_{4,2,0}$, determine the expectation value of its stress-energy tensor. Using standard holographic renormalization [27] we find the components of the boundary stress-energy tensor, and the entropy density and the temperature, with the expected [27] structure. Adapting the numerical techniques of [25], we solve these equations (subject to the asymptotics (2.41) and (2.42)) numerically. The results of the numerical analysis are data files assembled of the parameters (2.46), labeled by b and η. As for the CFT_STU model, we validate the numerical data by verifying the differential constraint from the first law of thermodynamics, dE = T ds (FL), and the algebraic constraint from the thermodynamic relation F = -P_L (TR). In appendix C we have verified FL and TR in the nCFT_m model with $m/\sqrt{2B} = 1$ numerically. The technical details presented here are enough to generate the nCFT_m model plots reported in section 1.

CFT_PW,m=0 The CFT_PW,m=0 model is a special case of the nCFT_m model when the hypermultiplet mass m is set to zero. This necessitates setting the non-normalizable coefficients $\alpha_{1,1}$ and $\chi_0$ to zero, i.e., η = 0 in (2.45). From (2.40) it is clear that this m = 0 limit is consistent with a vanishing scalar χ, implying that the $Z_2$ symmetry of the holographic dual, i.e., the symmetry associated with χ ↔ -χ, is unbroken. In what follows, we study the $Z_2$-symmetric phase of the CFT_PW,m=0 anisotropic thermodynamics. In appendix B.2 we verified FL and TR in CFT_PW,m=0 to order $\mathcal{O}(b^4)$ inclusive; we also present $\mathcal{O}(B^4/T^8)$ results for $R_{CFT_{PW,m=0}}$ and confirm that the renormalization scheme choice of κ as in (2.28) leads to (2.55).

CFT_PW,m=∞ As before, $r_0$ is completely scaled out of all the equations of motion. Eqs. (2.68)-(2.71) have to be solved subject to the following asymptotics: in the UV, i.e., as $x \to 0^+$, the expansion (2.72); in the IR, i.e., as $y \equiv 1 - x \to 0^+$, the expansion (2.73). In total, given $\hat{b}$, the asymptotic expansions are specified by the 6 parameters (2.74), which is the correct number of parameters necessary to provide a solution to a system of two second-order and two first-order equations, 2 × 2 + 2 × 1 = 6. The parameter $p_3$ corresponds to the expectation value of a dimension Δ = 3 operator of the boundary theory; the other two parameters, $a_{1,5}$ and $a_{2,5}$, determine the expectation value of its stress-energy tensor.
Using standard holographic renormalization we find the expressions (2.75) for the components of the boundary stress-energy tensor, and (2.76) for the entropy density and the temperature. Note that there is no renormalization scheme dependence in (2.75), and the trace of the stress-energy tensor vanishes: there is no invariant dimension-five operator that can be constructed only with the magnetic field strength. The (holographic) free energy density is given by the standard relation (2.27). In (2.75)-(2.76) we used the subscript [5] to indicate that the thermodynamic quantities are measured from the perspective of the effective five-dimensional boundary conformal theory; to convert to the four-dimensional perspective, we need to account for (2.65), see also [13]: {E, P_T, P_L} = {E_[5], P_[5]T, P_[5]L} × 2k_3. As for the other models discussed in this paper, the first law of thermodynamics dE = T ds (at fixed magnetic field) and the thermodynamic relation F = -P_L lead to constraints on the numerically obtained parameter set (2.74) (here ' = d/db).

A Holographic proof of F = -P_L The proof follows the argument for the universality of the shear viscosity to entropy density ratio in holographic plasma [22]. Consider a holographic dual to a four-dimensional gauge theory in an external magnetic field (the generalization to other dimensions is straightforward). We are going to assume that the magnetic field is along the z-direction, as in (2.12). We take the (dimensionally reduced; again, this can be relaxed) holographic background geometry to be of the form (A.1). At extremality (whether or not the extremal solution is singular within the truncation is irrelevant), the Poincaré symmetry of the background geometry guarantees that the tt and zz components of the Ricci tensor coincide, eq. (A.2), where $R_{\mu\nu}$ is the Ricci tensor in the orthonormal frame. Clearly, an analogous condition (A.3) must be satisfied by the full gravitational stress tensor of the matter supporting the geometry. Because turning on the nonextremality will not modify (A.3), we see that (A.2) is valid away from extremality as well. Computing the Ricci tensor for (A.1) reduces (A.2) to (A.4). Explicitly evaluating the ratio of the constant in (A.4) in the UV (r → ∞) and in the IR (r → r_horizon), we recover F = -P_L for each of the models we study. We should emphasize that the condition (A.2) can be explicitly verified using the equations of motion in each model studied. The point of the argument above (as of the related one in [22]) is that this relation is true based on the symmetries of the problem alone.

B Conformal models in the limit $T/\sqrt{B} \gg 1$ In holographic models, supersymmetry at extremality typically guarantees that the equilibrium isotropic thermodynamics is renormalization scheme independent (compare the $\mathcal{N} = 2^*$ model with the same masses for the bosonic and the fermionic components, $m_b^2 = m_f^2$, versus the same model with $m_b^2 \ne m_f^2$ [20]). This is not the case for holographic magnetized gauge theory plasma in four space-time dimensions, e.g., see [1] for $\mathcal{N} = 4$ SYM. In this appendix we discuss the high-temperature anisotropic equilibrium thermodynamics of the conformal (supersymmetric in vacuum) models. For the (locally) four-dimensional models (CFT_diag, CFT_STU and CFT_PW,m=0), matching the high-temperature equations of state is a natural way to relate the renormalization schemes of the various theories. In the CFT_PW,m=∞ model, which is locally five dimensional, magnetized thermodynamics is scheme independent.

B.1 CFT_STU The high-temperature expansion corresponds to the perturbative expansion in b.
In what follows we study the anisotropic thermodynamics to order $\mathcal{O}(b^4)$ inclusive. Introducing the perturbative expansion of the background functions in powers of b, and using the results (B.12) and (B.13) (rather, we use more precise values of the parameters reported, obtained from numerics with 40-digit precision), we find the coefficients at order n = 1. It is important to keep in mind that the value $a_{2,2,(2)}$ is sensitive to the matter content of the gravitational dual, i.e., to the set of relevant operators in CFT_STU that develop expectation values in anisotropic thermal equilibrium.
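To illustrate the FL and TR consistency checks used throughout, here is a minimal sketch on a toy tabulated equation of state. The data are synthetic (a conformal-like relation E = (3/4) T s is built in, so both constraints hold by construction); in practice one would run the same diagnostics on the numerically generated parameter files.

```python
import numpy as np

# Synthetic tabulated data at fixed B, labeled by a parameter b (as in the
# text's data files); toy conformal equation of state with E = (3/4) T s
b = np.linspace(0.1, 1.0, 200)
T = 1.0 / b
s = (2.0 / 3.0) * T**3
E = 0.5 * T**4
F = E - T * s        # free energy density
PL = E / 3.0         # toy longitudinal pressure, independent of F

# First law at fixed B: dE = T ds  =>  E'(b) / (T s'(b)) - 1 should vanish
FL = np.gradient(E, b) / (T * np.gradient(s, b)) - 1.0
# Thermodynamic relation: F + P_L should vanish
TR = F + PL
# Both residuals are zero up to finite-difference error for this toy data
print(f"max |FL| = {np.abs(FL).max():.2e}, max |TR| = {np.abs(TR).max():.2e}")
```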
7,386.4
2020-06-01T00:00:00.000
[ "Materials Science" ]
A Comparison of Correlation Coefficients via a Three-Step Bootstrap Approach In this paper we compare ten correlation coefficients using a three-step bootstrap approach (TSB). A three-step bootstrap is applied to determine the optimal number of repetitions, B, to estimate the standard error of the statistic with a certain degree of accuracy. The coefficients in question are the Pearson product moment (r), Spearman's rho (ρ), Kendall's tau (τ), Spearman's Footrule (F_t), the Symmetric Footrule (C), the Greatest deviation (R_g), the Top-Down (r_T), the Weighted Kendall's tau (τ_w), Blest's (ν), and the Symmetric Blest's coefficient (ν*). We consider a standard error criterion for our comparisons. However, since the rank correlation coefficients suffer from the problem of ties that results from the bootstrap technique, we use existing modified formulae for some rank correlation coefficients; otherwise, the randomization tie-treatment is applied.

Introduction One may be interested in the relationship between two factors or two variables and may wish to represent this relationship by a number, or even to use a statistical technique to make an inference. This number is called a correlation coefficient. The most common and well-known correlation coefficient is the Pearson product moment coefficient. Some people may use this coefficient immediately, ignoring the bivariate normality assumption of the data. Others may use a nonparametric rank correlation coefficient, such as Spearman's rho or Kendall's tau, for the same purpose. However, the Pearson coefficient examines different aspects compared to Spearman and Kendall: the Pearson coefficient considers the linearity of the relationship, whereas Spearman and Kendall study the monotonicity of this relationship.

In some circumstances, we may have data with some outliers, in which case using the Greatest deviation correlation coefficient would be more suitable due to its robustness against outliers. However, in other situations, assigning more emphasis to the top of the observations is required. For this purpose, several nonparametric coefficients have been suggested, namely: the Top-Down, the Weighted Kendall's tau, Blest's, and the Symmetric Blest's coefficient.

Usually the variance of a correlation coefficient is derived under the assumption that the null hypothesis of no correlation is true; otherwise, it is intractable to calculate the variance. Surely one would prefer using the correlation coefficient which produces small variation and, consequently, a smaller standard error. Borkowf (2000) presented a new nonparametric method for estimating the variance of Spearman's rho by calculating ρ from a two-way contingency table with categories defined by the bivariate ranks. His is a computer method depending on the data at hand, like the bootstrap and jackknife methods. There are claims that his method is more accurate than the bootstrap and jackknife methods; however, it is more complicated, and the differences in accuracy are small in comparison with other methods.
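Two of the coefficients just mentioned can be made concrete with a short sketch: Spearman's footrule in the normalization $F_t = 1 - 3\sum|R_i - S_i|/(n^2 - 1)$, and the Top-Down coefficient via Savage scores, $r_T = (\sum S_{R_i} S_{Q_i} - n)/(n - S_1)$ with $S_i = \sum_{j=i}^{n} 1/j$. These are the forms commonly given in the literature (e.g., Iman and Conover for the Top-Down coefficient); the data below are toy values, and ties are assumed absent (tie treatment is discussed in what follows).

```python
import numpy as np
from scipy.stats import rankdata

def footrule(x, y):
    # Spearman's footrule: Ft = 1 - 3 * sum|R_i - S_i| / (n^2 - 1)
    r, s = rankdata(x), rankdata(y)
    n = len(r)
    return 1.0 - 3.0 * np.sum(np.abs(r - s)) / (n**2 - 1)

def savage_scores(n):
    # Savage scores S_i = sum_{j=i}^{n} 1/j; S_1 (the top rank) is the largest
    return np.cumsum((1.0 / np.arange(1, n + 1))[::-1])[::-1]

def top_down(x, y):
    # Top-Down coefficient r_T = (sum S_{R_i} S_{Q_i} - n) / (n - S_1),
    # with rank 1 assigned to the largest ("top") observation; assumes no ties
    n = len(x)
    S = savage_scores(n)
    rx = rankdata(-np.asarray(x)).astype(int)
    ry = rankdata(-np.asarray(y)).astype(int)
    return (np.sum(S[rx - 1] * S[ry - 1]) - n) / (n - S[0])

x = [2.0, 4.0, 3.0, 7.0, 6.0]   # toy data, no ties
y = [1.0, 3.0, 4.0, 8.0, 5.0]
print(footrule(x, y), top_down(x, y))
```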
In this paper, we will apply the bootstrap method. A common issue is how many replications, B, one should run to obtain the required accuracy. One such approach was introduced by Andrews and Buchinsky (2000; 2001; 2002) and is called the three-step approach. We will use this approach to determine the optimal number of replications for different degrees of accuracy. In addition, we will use it for comparison, precisely, to estimate the standard error of the estimators, in our case the correlation coefficients, without imposing the null hypothesis. However, by using the bootstrap technique, the rank correlation coefficients suffer from the problem of ties. Therefore, we will use the existing adjusted formulae for the Spearman and Kendall coefficients (Hollander & Wolfe, 1999); otherwise, the randomization tie-treatment is applied (Gibbons, 1971).

Overviews of the correlation coefficients that are used in the comparisons are given in Section 2. In Section 3, we introduce some bootstrap notation and motivation. Then, in Section 4, we summarize the three-step approach for estimating the standard error. The application of this approach to the correlation coefficients is illustrated via the example in Section 5. Finally, conclusions are reported in the last section.

In Table 1 we summarize the coefficients involved in our comparisons. Their formulae are stated in the second and third columns, while the fourth column contains the adjusted formulae for the Spearman and Kendall coefficients for when ties occur. For the other coefficients, where no adjusted formula is found in the current literature, we use the randomization tool (Gibbons, 1971) to deal with ties. The justification for using the randomization method is that it behaves randomly, as the bootstrap does. However, its most important property is that it does not affect the null distribution of a rank correlation coefficient, so we do not need to adapt the null distributions for these coefficients.

Bootstrap Motivation and Notation The bootstrap technique was first introduced by Efron in 1979 (Efron & Tibshirani, 1994). It is a computer-intensive method of statistical analysis that uses resampling to calculate standard errors, confidence intervals, and significance tests. There are various applications of bootstrap techniques in the life sciences, such as in medical, social, and business science. They are applicable as parametric, semi-parametric, or nonparametric techniques. In this paper we consider the nonparametric bootstrap, since the correlation coefficients in question (except Pearson) are nonparametric rank correlation coefficients.

Actually, the most common problem in the bootstrap literature is choosing the optimal number of repetitions, B. Choosing different small values can result in different answers; choosing an extremely large value gives a more accurate result, but is more costly. Andrews and Buchinsky (2000; 2001; 2002) introduced a three-step approach to determine the number of repetitions B with pre-fixed degrees of accuracy, which has been applied to many different bootstrap problems, such as estimating the standard error, confidence interval, and p-value for classical statistical techniques. The aim is to achieve the desired level of accuracy by choosing B.
Accuracy is measured by the percentage deviation of the bootstrap standard error estimate, confidence interval length, test critical value, test p-value, or bias-corrected estimate based on B bootstrap repetitions from the corresponding ideal bootstrap quantity, for which B = ∞. A bound on the relevant percentage deviation, pdb, is specified such that the actual percentage deviation is less than this bound with a specified probability 1 − δ close to one. That is, for given (pdb, δ), the optimal number of repetitions, B*, is the smallest B satisfying

P*( 100 · |λ_B − λ_∞| / λ_∞ ≤ pdb ) ≥ 1 − δ,

where λ is the quantity of interest (the standard error in our case), λ_∞ is the "ideal" bootstrap estimate, and λ_B is the bootstrap approximation of λ_∞ based on B bootstrap repetitions. Here P* denotes probability with respect to the randomness in the bootstrap samples.

In the rest of this section we present the notation used in the bootstrap framework. The observed data are a sample of size n, X = (X_1, ..., X_n), where X_i = (x_i, y_i), i = 1, 2, ..., n. Let X* = (X*_1, ..., X*_n) be a bootstrap sample of size n based on the original sample X. When the original sample X consists of independent and identically distributed (iid) or independent but non-identically distributed (inid) random variables, the bootstrap sample X* is typically an iid sample of size n drawn from some distribution F. In this paper F is the empirical distribution, because we use the nonparametric bootstrap.

Let θ̂ = θ̂(X) be an estimator of a parameter θ_0 based on the sample X. One application is the bootstrap standard error estimate for a scalar estimator θ̂; in our case θ̂ is a correlation coefficient. The quantities (se, ŝe_∞, ŝe_B) are, respectively, the standard error, the "ideal" bootstrap standard error estimator, and the bootstrap standard error estimator based on B bootstrap repetitions, where E* denotes expectation with respect to the randomness in the bootstrap samples. These quantities are given by

se = [Var(θ̂)]^(1/2), ŝe_∞ = [E*(θ̂* − E*θ̂*)²]^(1/2), ŝe_B = [(1/B) Σ_{b=1}^{B} (θ̂*_b − θ̄*)²]^(1/2), where θ̄* = (1/B) Σ_{b=1}^{B} θ̂*_b.

The three-step method, introduced by Andrews and Buchinsky (2000), depends on estimating the coefficient of excess kurtosis, γ_2, of the bootstrap distribution of the parameter estimator,

γ_2 = E*(θ̂* − E*θ̂*)^4 / [E*(θ̂* − E*θ̂*)²]² − 3.

Using the three-step method we aim to choose a value of B that achieves a desired accuracy of ŝe_B as an estimate of ŝe_∞ for pre-specified values of (pdb, δ). The method involves the following steps. Step (1): Suppose γ_2 = 0 and compute a preliminary value of B, denoted B_0, where B_0 = int(2,500 (2 + γ_2) z²_{1−δ/2} / pdb²) evaluated at γ_2 = 0, int(a) is the smallest integer greater than or equal to a, and z_α is the α quantile of the standard normal distribution. The remaining steps use the B_0 bootstrap repetitions to estimate γ_2 and adjust the number of repetitions accordingly, yielding the final value B_1.

Example

Let us consider data from a study designed to ascertain the relative importance of the various factors contributing to tuna quality and to find objective methods for determining quality parameters and consumer preference (Hollander & Wolfe, 1999). Table 2 gives values of the Hunter L measure of lightness (X), along with panel scores (Y), for nine lots of canned tuna (see also Figure 1). The preliminary and final values of the number of replications for estimating the standard errors of our correlation coefficients are given in Tables 3 and 4.
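A minimal sketch of Step (1) follows, assuming the standard-error version of the rule B_0 = int(2,500 (2 + γ_2) z²_{1−δ/2} / pdb²) with γ_2 = 0; the constant 2,500 is our reading of the three-step method rather than something stated explicitly above, but it reproduces the preliminary value B_0 = 1,327 quoted in the next paragraph.

```python
import math
from scipy.stats import norm

def b_preliminary(pdb, delta, gamma2=0.0):
    """Step 1 of the three-step method: preliminary number of bootstrap repetitions.

    Assumes the standard-error version of the Andrews-Buchinsky formula
    B0 = ceil(2500 * (2 + gamma2) * z_{1-delta/2}^2 / pdb^2)  (constant assumed here).
    """
    z = norm.ppf(1.0 - delta / 2.0)
    return math.ceil(2500.0 * (2.0 + gamma2) * z * z / (pdb * pdb))

# Reproduces the preliminary value used in the example (pdb = 5%, delta = 0.01).
print(b_preliminary(pdb=5, delta=0.01))   # -> 1327
```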
We categorize our comparison into three blocks: a parametric correlation coefficient (Pearson); nonparametric rank correlation coefficients (Spearman, Kendall, Spearman's footrule, the symmetric footrule, and the greatest deviation coefficient); and weighted nonparametric rank correlation coefficients (top-down, weighted Kendall, Blest's coefficient, and the symmetric Blest coefficient).

For example, let us focus on the situation where pdb = 5% and δ = 0.01, as in Table 5. Here the preliminary value B_0 (the same for all correlation coefficients) is 1,327, which is quite modest as a starting point for a simulation. The adjusted number of replications, B_1, given in the third column of Table 5, varies from one correlation coefficient to another. For the Pearson correlation coefficient, B_1 is more than double the preliminary value (an increase of 177%). For the footrule, B_1 increases by only 2.4% (from 1,327 to 1,527). For Kendall and weighted Kendall, B_1 increases by almost the same amount (between 38.9% and 39.41%). Among the nonparametric correlation coefficients, Spearman has the largest increase in the number of replications, 72.57%.

Turning to the differences between the observed value (calculated from the data) and the bootstrap estimate, we see that for the Kendall correlation coefficient this difference is small. The largest difference between the observed and estimated values is for weighted Kendall: since we chose m = 5, about half of the data are ignored, but this gap shrinks as m grows and equals that of Kendall at m = n = 9. The differences are quite similar for Pearson, Spearman, the footrule, and the greatest deviation coefficient, all lying between 0.0312 and 0.0520.

With respect to the standard error estimate, the Pearson correlation coefficient has the smallest standard error, whereas weighted Kendall has the largest, in fact twice the standard error of the Pearson coefficient. The reason is again that weighted Kendall ignores some of the data (in our case 4 pairs of observations). In block 2, the greatest deviation coefficient has the smallest standard error, which is expected since our data contain some outliers (see Figure 1); Spearman and the footrule have large standard errors within this block.

In block 3, where interest is focused on the initial (top) data, the symmetric Blest coefficient has a smaller standard error than its asymmetric version and than the other weighted correlation coefficients; as noted above, weighted Kendall has the largest standard error. Generally, the standard errors for the correlation coefficients in block 3 are larger than those in blocks 1 and 2.

Conclusion

To conclude, one should use the Pearson correlation coefficient if the data meet the normality assumption; otherwise the greatest deviation coefficient performs well, especially when the data contain outliers. When emphasis on the initial (top) data is wanted, the symmetric Blest coefficient has the lowest standard error among the weighted correlation coefficients.

Table 1 notes: the tie-adjusted formula for Spearman's rho follows Salama and Quade (2002), where g denotes the number of tied groups in one ranking, t_i the size of tied group i, h the number of tied groups in the other ranking, and u_j the size of tied group j (Hollander & Wolfe, 1999); the tie-adjusted formula for Kendall's tau uses the same notation, with sgn(a) = 1 if a > 0, sgn(a) = 0 if a = 0, and sgn(a) = −1 if a < 0 (Hollander & Wolfe, 1999).

Figure 1. A simple scatter plot of the tuna lightness and quality data.
2,997.4
2010-04-18T00:00:00.000
[ "Mathematics" ]
Examining the Impact of the Legal Arizona Workers Act on Native Female Labor Supply in the United States

Low-skilled immigration has been argued to lower the price of services that are close substitutes for household production, reducing barriers for women to enter the labor market. Therefore, policies that reduce the number of low-skilled immigrants, who work predominantly in low-skilled service occupations, may have the unintended consequence of lowering women's participation in the labor market. This article examines the labor supply impact of the Legal Arizona Workers Act (LAWA), which led to a large decline in the low-skilled immigrant workforce of the state. The analysis shows no evidence that LAWA statistically significantly affected US-born women's labor supply in Arizona. This finding is partly explained by an increase in native workers in household service occupations due to LAWA, which offset the decline in immigrants in these occupations and left the cost of household services relatively uninfluenced by the passage of LAWA.

Introduction

Between 1970 and 2000, the labor force participation rate of women in the United States increased from 43.4% to 61% (Acemoglu et al., 2004). Despite this large increase, women still spend much more time on household work than men (Cortes and Tessada, 2011). At the same time, recent work underscores the role of low-skilled immigrants, who work predominantly in low-skilled service occupations, in lowering the price of services that are close substitutes for household production (Cortes, 2008). As such, the influx of low-skilled immigrants has been argued to increase native women's participation in the labor market.

The analysis yields a few main results. First, the number of low-skilled immigrant workers in the labor force shrank substantially in Arizona due to the passage of LAWA. In the absence of LAWA, I estimate that the share of low-skilled immigrants in Arizona's workforce would be higher by approximately 1.1 percentage points, or 9% of its 2006 level. Despite this large decline, I find no evidence that LAWA significantly affected native women's labor force participation rate or average weekly work hours. This result holds even among high-skilled US-born women, who are the most likely to be affected by LAWA because their opportunity cost of time spent on household work is higher than that of low-skilled native women. Perhaps surprisingly, I also find no evidence that the implementation of LAWA led to a statistically significant increase in the average time spent on housework, gardening, and caring for children among US-born women in Arizona.

Further analysis shows that this result is driven partly by an increase in native workers in household service occupations due to the implementation of LAWA, which offset the decline in immigrants in these occupations and left the cost of household services relatively uninfluenced by the passage of LAWA. This increase in native workers in household service occupations is consistent with the relative task redistribution argument, in which low-skilled immigration nudges US workers toward occupations that require higher communication skills in order to reduce downward pressure on their wages (Peri and Sparber, 2009; Peri and Sparber, 2011). As LAWA shrinks the low-skilled immigrant workforce in Arizona, there is less incentive for natives to specialize in such occupations.

This paper is constructed as follows. Section 2 describes the background of LAWA and the conceptual framework.
Section 3 discusses the empirical methodology and data used in the analyses. Section 4 documents the results. Section 5 concludes.

Background and conceptual framework

Universal E-Verify programs such as LAWA can be traced back to the Immigration Reform and Control Act (IRCA) of 1986, which requires new hires to present documents verifying their eligibility to work legally in the US and imposes sanctions on employers who knowingly hire unauthorized immigrants. These measures to curb unauthorized employment in IRCA, however, have been argued to be ineffective, because there was no reliable, quick way to verify the authenticity of the documents used to prove identity and work authorization (Cooper and O'Neil, 2005). To address this shortcoming, the E-Verify system was rolled out to several states in 1997 under the name Basic Pilot. Participating employers enter new-hire information from the employment eligibility form (Form I-9), and the E-Verify system checks that information against Social Security Administration and Department of Homeland Security databases. If there is a discrepancy, the employer is notified of a tentative nonconfirmation, and the new worker has 8 federal working days to contest it. While the discrepancy is being contested, the employer is not allowed to fire the new hire because of the discrepancy; however, the employer has to terminate the employment of the new hire if the discrepancy is not resolved within that period. For authorized workers, the inaccuracy rate of E-Verify is approximately 1%, while for unauthorized workers the error rate is approximately 54% (Westat, 2009).

The Legal Arizona Workers Act was signed into law in July 2007 and implemented on January 1, 2008. It is the first law of its kind that requires all businesses in a state to verify the employment authorization of new hires through the federal E-Verify system.

The literature provides guidance on how LAWA may adversely affect the labor supply of US-born women. Recent work by Cortes (2008) and Cortes and Tessada (2011) argues that low-skilled immigration lowers the opportunity cost of working by reducing the price of services that are close substitutes for household production. If LAWA leads to a higher cost of household services, mainly because of the decline in low-skilled immigrants who predominantly work in this sector, the labor supply of US-born women would be adversely affected, as the opportunity cost of working rises. It is worth noting that LAWA need not increase the cost of household services. Recent work has documented that immigration nudges US workers toward occupations that require higher communication skills in order to reduce downward pressure on their wages (Peri and Sparber, 2009; Peri and Sparber, 2011). A more recent study in Europe found that native European workers are more likely to experience upward mobility in their occupational status in response to an influx of immigrants (Cattaneo et al., 2015). It follows that an immigration restriction policy such as LAWA might lead to downward mobility in US-born workers' occupational status, resulting in more US-born workers filling low-status occupations such as household services. If this is the case, the cost of purchasing household services would be relatively unaffected by the passage of LAWA. I examine whether this is indeed the case in the following analyses.

Empirical methodology and data

To examine the impact of LAWA, I used the synthetic control method (SCM) pioneered by Abadie and Gardeazabal (2003) and further developed by Abadie et al. Suppose there are J + 1
states indexed by j = 0, 1, ..., J. Let j = 0 correspond to Arizona, while the remaining states (j = 1, ..., J) are candidate contributors to the control group (i.e., the donor pool). Let G_0 be a (k × 1) vector whose elements are the values of the pretreatment characteristics of Arizona that we want to match as closely as possible. Similarly, let G_1 be a (k × J) matrix collecting the values of the same variables for the states in the donor pool. The SCM identifies the vector of weights W* = (w_1, ..., w_J) that minimizes the difference between G_0 and G_1·W, with the weights restricted to be non-negative and to sum to one (a minimal numerical sketch of this weight-selection step is given at the end of this article).

In terms of its share of the labor force, the share of low-skilled immigrants in Arizona's workforce declines by 1.1 percentage points relative to its synthetic control after the passage of LAWA (Figure 2a and Panel A of Table 2). To find out whether this decline occurred simply by chance rather than because of the implementation of LAWA, I ran permutation (placebo) tests across the donor states. Indeed, the implied P-values (i.e., the probability of observing a difference-in-differences estimate as large, negatively in this case, as Arizona's) of the impact of LAWA on the size of the low-skilled immigrant workforce and on its share in Arizona's workforce are 0.023 and 0.045, respectively (Panel A of Table 2).

A concern is that the adoption of LAWA closely coincided with the Great Recession, and these findings may therefore simply be driven by the economic downturn at the time. However, the SCM approach already accounts for any changes that affect the country as a whole, and unless the Great Recession affected the Arizona labor market differently from the rest of the country, it does not threaten the validity of my findings. A related concern is that one of the industries hit hardest by the Great Recession, construction, is a leading employer of low-skilled workers in Arizona. As noted above, however, the synthetic Arizona was constructed to minimize the difference in the employment share of the construction industry relative to actual Arizona, which should mitigate the possible bias arising from this concern.

There is also a concern that another controversial Arizona state bill, SB 1070, which gave local law enforcement agencies more power in enforcing immigration laws and was passed in 2010, may bias the estimated impact of LAWA. However, before the law was supposed to take effect, a federal judge issued a preliminary injunction that blocked its most controversial provisions, and by 2012 the Supreme Court had struck down many of these provisions. It is therefore unlikely that SB 1070 had much impact on reducing the low-skilled immigrant workforce in the state. Indeed, a recent study by Amuedo-Dorantes and Lozano (2015) found that SB 1070 had a "minimal to null" impact on the share of noncitizen Hispanics in Arizona.

Since LAWA was adopted in Arizona, six other states have implemented similar universal E-Verify programs. However, none of them led to a decline in the low-skilled immigrant workforce like the one observed in Arizona (Panels B-G of Table 2 and Figures A1-A12). Why a significant impact of a universal E-Verify program is observed only in Arizona is outside the scope of this study. Nevertheless, there are a few possible reasons.
For example, the scope of LAWA is broader: it requires all employers in Arizona to run new hires through the E-Verify system, whereas in some states, such as Georgia, the requirement applies only to businesses above a certain size.

Table 3 shows the impact of LAWA on the participation rate of high-skilled native women with at least some college education. Because high-skilled women have a higher opportunity cost for time spent on household work than low-skilled women, the response to LAWA should be stronger among this group. Contrary to this expectation, there is no evidence that LAWA led to a statistically significant reduction in women's participation or weekly work hours across the four quartiles (Panel B of Table 3 and Figures A17 and A18).

Although the analyses so far have focused mainly on women, LAWA might conceptually also affect the labor supply of US-born men, because they also consume household services in practice. Despite the negative estimates, which suggest that LAWA reduced the labor supply of US-born men, this decline is not statistically significant at conventional levels (Panels C and D of Table 3 and Figures A19-A22).

To summarize, there is no evidence that native women's participation rate or average weekly work hours were significantly affected by LAWA, and no evidence of an increase in the daily time spent on household work (Table 4). For low-skilled US-born women, the results instead suggest a negative relationship between LAWA and the daily time spent on household work. Qualitatively similar findings are observed among US-born men (Figures A23 and A24 and Table 4).

I next examined whether LAWA increased the cost of household services after its passage in 2007, using the average wage in household service occupations as a proxy. Figure 9 shows an SCM analysis of the average hourly wage of workers employed in household service occupations in Arizona before and after the adoption of LAWA. There is no evidence that LAWA statistically significantly increased the average hourly wage of workers employed in these occupations. The difference-in-differences estimate shows that the average hourly wage in household service occupations increases by approximately 1.2% (Panel A of Table 5), but this increase is not statistically significant, with a P-value of 0.400. The finding that the average hourly wage in household service occupations was not increased by LAWA is surprising, especially because the theory of equilibrium wages in a standard labor demand and supply framework implies that a reduction in workers in household service occupations should increase the wages of workers in these occupations. The next step in answering why the native female labor supply is relatively unaffected by LAWA is therefore to examine whether LAWA led to a significant reduction in the aggregate supply of workers in household service occupations.

LAWA and the household service occupations' workforce

The analysis in the previous section shows that the average hourly wage in household service occupations was not statistically significantly increased by the passage of LAWA. One explanation is that a labor market adjustment in Arizona kept the aggregate supply of workers in household service occupations at a similar level after the passage of LAWA in 2007. To examine this, I first analyzed whether LAWA indeed led to a decline in the size of the immigrant workforce in household service occupations. Figure 10 shows that this is the case. In the absence of LAWA, the number of immigrants is projected to keep increasing to a level above 60,000 workers, whereas this number declined to approximately 50,000 in actual Arizona.
Indeed, the permutation test shows that it is very unlikely that this decline happened simply by chance, because there is no other state for which such a deviation between the state and its synthetic control is observed. The difference-in-differences estimate shows that there would be approximately 9,724 additional immigrant workers in household service occupations in the absence of LAWA (Panel B of Table 5).

If the passage of LAWA led to a significant decline in the number of immigrant workers in household service occupations, then for the average wage in these occupations to remain unaffected, the theory predicts that LAWA must have increased the number of native workers in these occupations. Figure 11 shows that this is indeed the case. After LAWA was adopted in 2007, the increase in native workers in household service occupations is significantly larger than what is projected in its absence. The difference-in-differences estimate shows that the number of natives in these occupations would be lower by approximately 7,636 workers in the absence of LAWA (Panel B of Table 5). Comparing this estimate with that for immigrant workers, a large share of the impact of LAWA (~75%) is compensated by the increase in native workers. A rather interesting finding is that this increase is driven by US-born men. This result mainly reflects the finding that LAWA induced more male immigrants in household service occupations to leave (or not to come to) Arizona after its passage in 2007 (Panel B of Table 5 and Figures A25-A28).

To see whether this increase in native workers in household service occupations is indeed large enough to leave the size of the workforce in these occupations unaffected by the passage of LAWA, I repeated the analysis for the overall number of workers (foreign and US born) in these occupations. Figure 12 shows that the overall number of workers in household service occupations is relatively unaffected by the passage of LAWA. The difference-in-differences estimate supports the graphical evidence that the size of the household service occupations workforce is not statistically significantly affected by LAWA (Panel B of Table 5).

A question remains: why did LAWA increase the number of native workers in household service occupations? One possible answer lies in occupational downgrading among native workers, which I examined using occupational income scores (Table 6 and Figures A29 and A30). The analysis indeed shows evidence that the occupational income score of low-skilled US-born males decreased due to LAWA, suggesting that these workers would be in higher-paid occupations in the absence of the policy. The difference-in-differences estimate shows that LAWA reduced the occupational income score of low-skilled US-born males by 0.37 (1.34% relative to the pre-LAWA score). This finding is similar to that of Lee et al. (2019), who also found evidence of occupational downgrading by US natives following the Mexican repatriations of the 1930s.

Sensitivity checks

The key finding in this paper is that LAWA reduced the number of foreign-born workers in Arizona. A further concern for the validity of the results presented in this study is addressed with an alternative specification reported in Table 7, and the main findings hold under this alternative specification.

Conclusion

The influx of low-skilled immigrants has been argued to reduce the price of household services and to alter the optimal time allocation between household production and market work for women (e.g., Cortes, 2008; Cortes and Tessada, 2011; Barone and Mocetti, 2011).
As such, policies that lead to a decline in low-skilled immigrants may have the unintended consequence of reducing women's participation in the labor market. In this study, I examined the impact of the LAWA of 2007, which requires all employers in Arizona to verify, through the federal E-Verify system, that a worker is authorized to work in the United States.

Availability of data and material
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Competing interests
The author declares that he has no competing interests.

Funding
Not applicable.

Author's contributions
Not applicable.

Table A6. Weights used in the construction of synthetic Arizona for the household service sector analysis in Table 5 (columns: donor states, hourly wage, native supply, native male, native female).
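As mentioned in the methodology section, here is a minimal sketch of the synthetic control weight-selection step; the pretreatment characteristics are made up, the plain quadratic objective omits the predictor-weighting matrix typically used in practice, and all names and values are placeholders rather than quantities from this study.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical pretreatment characteristics: k predictors for the treated state (g0)
# and for J candidate donor states (columns of G1). Values are made up for illustration.
rng = np.random.default_rng(1)
k, J = 4, 6
G1 = rng.normal(size=(k, J))
g0 = G1 @ np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0]) + rng.normal(scale=0.05, size=k)

def objective(w):
    """Squared distance between the treated state and the weighted donor pool."""
    diff = g0 - G1 @ w
    return float(diff @ diff)

# Weights are constrained to be non-negative and to sum to one.
cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
bounds = [(0.0, 1.0)] * J
w0 = np.full(J, 1.0 / J)
res = minimize(objective, w0, bounds=bounds, constraints=cons, method="SLSQP")
print("synthetic-control weights:", np.round(res.x, 3))
```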
4,060
2020-03-01T00:00:00.000
[ "Law", "Political Science", "Economics" ]
Cultural Differences in Chinese and English Idioms of Diet and Their Translation

Idioms are a special form of culture shaped in the daily lives of local people; idioms of diet in particular are closely related to elements such as eating customs, history, fairy tales, and geographic conditions. Different ways of translating diet idioms in English and Chinese are also analyzed in this article. It is therefore very important to understand the rich cultural content in idioms of diet in order to do a better job in the study of cross-cultural communication and translation.

Introduction

Idioms take shape in a community over a long period of the local people's lives, and they reflect and express the culture of a certain people. Because of this, differences in geography, history, custom, and living habits are reflected in the choice of words in idioms, especially in idioms of diet. Using idioms in speech is a usual way for local people to express their ideas clearly and vividly, so in cross-cultural communication it is important to use them correctly in order to be better understood by people with different cultural backgrounds. Unfortunately, only people who speak a language very well can use idiomatic expressions adequately and to the point, and the reason is that, most of the time, people know little about the history and culture behind these idioms. Some scholars point out that idioms are often colloquial metaphors: terms that require users to have some foundational knowledge, information, or experience, usable only within a culture where the parties share a common frame of reference. With the development of new technology and the emergence of a globalized world, the need to strengthen conversation between different nations becomes more and more urgent. But research in this field is still limited, and only during the last twenty years have more scholars begun to pay attention to folk custom and to connect it with research on cross-cultural communication; recent studies, however, tend to treat all idioms together without any logical and systematic classification. The following discussion starts from diet idioms in both Chinese and English and sets up a basis for introducing more idioms to people with different cultural backgrounds. It includes a large number of idioms containing words that describe food, the reasons that lead to the differences in word choice between the two languages, and many examples used to support these observations.
The Difference in Word Choice in Diet Idioms Related to People's Eating Habits in English and Chinese

In most languages, idioms are created by working people in their daily lives, so naturally the words describing the necessities people use or the food they eat appear frequently in idioms. Because of their particular living conditions and geographical situation, Western people like food that provides plenty of energy, calories, and nutrition. Meat is a favorite food of the English; they prefer beef, mutton, chicken, and game, but eat less pork than Chinese people, partly because Islam, one of the major religions represented, forbids eating pork. They are used to having fruit with every meal, and they also like drinking beer, wine, and liquor. Toasted bread is their staple, and with pudding, soup, ham, and fresh vegetables it makes a fine dinner. Besides various wines, Western people also like drinking milk and tea; they have even made this habit a part of their lives. They like drinking black tea from China, but the way of drinking it differs from the Chinese way: at a fixed time every afternoon, English people choose a suitable place and enjoy the afternoon with friends over a cup of milk tea in the warm sunshine. All of these eating habits are closely related to particular geographical conditions. Great Britain is an island country surrounded by large areas of ocean, and its warm maritime climate is good for the growth of grass and for the development of stockbreeding. Because of England's long coastline, life there cannot be sustained without marine fisheries. The fast pace of modern life has made fast food popular in American life: hamburgers and hot dogs can be found in every restaurant and snack stall, along with steaks, fried chicken, seafood, and salad. After every meal a dessert is prepared, such as apple pie, cheesecake, chocolate, ice cream, or sundae.

In China, rice and wheat are the materials for the main course. The interest is in the taste of food, so the kinds of dishes and cooking techniques are various, such as pan-frying, stir-frying, quick-frying, deep-frying, stewing, and smoking. These differences are reflected in the words used in idioms. Such words not only describe food but also carry the characteristics of a certain nation; that is to say, they have not only their conceptual meaning but also abundant figurative and reflected meanings.

2.1. The Favorite Food of Western People and the Related Idioms

2.1.1. Idioms Which Contain "Bread"

Like rice in the life of the Chinese, bread plays a very important role in Western people's daily lives, so there are large numbers of idioms containing the word "bread", such as:

A. Bread and Butter. The meaning of this idiom is one's way of earning a living; a second meaning is "common things". For example, "It is a bread-and-butter diamond" means a common diamond. Although the two meanings differ, they are still very similar. But it is not always used that way: a bread-and-butter letter is not a common letter but a letter expressing the guests' thanks for the host's warm reception.

B.
Earn One's Bread. This idiom has the same meaning as "earn one's living", for example: He now earns his bread by doing odd jobs.

C. Hope is the poor man's bread. The meaning of this sentence is that if a poor man wants to survive, he should not give up his hope toward life.

2.1.2. Idioms Which Contain "Butter"

Butter is also a necessary part of Western meals. Besides the examples listed above, there are other expressions, like:

A. Butter Up. This idiom means to praise or flatter somebody excessively, trying to change someone's mind by doing things for him or her and being especially nice so that he or she will do what one wants. The saying comes from the simple act of buttering a piece of plain bread, which makes it look and taste better, just as flattering a person does. For example: He began to butter up the boss in the hope of being given a better job. But "to butter up somebody" can also mean to entrap or ensnare someone; the two meanings of the idiom are contrary to each other.

2.1.3. Idioms Which Contain "Potato"

The custom of eating potatoes in America has a long history dating back to the foundation of the country. Although the potato is said to have been first cultivated in Holland, it has become a part of English life, and in the colonial period the culture of the potato was also carried to America. As a result, Americans also express their interest in potatoes in idioms:

A. "A hot potato" refers to a trouble or a difficult problem that cannot be solved easily; the meaning can be understood directly from the image described in the idiom.
B. "A big potato" is used to describe an important person.
C. "A couch potato" describes a person who lives with minimum effort, or an inactive TV addict. Television was invented in the 1940s; this idiom dates from the 1970s and is closely related to television, reportedly because Americans like to sink into the sofa and eat potato chips while watching TV.

2.1.4. Idioms Which Contain "Cheese"

"He is a big cheese in the company" means he plays an important part in the company. "My brother and I are as different as chalk and cheese" means my brother and I are completely different from each other, because cheese and chalk are apparently two quite different things.

2.1.5. Idioms Which Contain "Egg"

The daily life of housewives is trivial and ordinary, but the idioms they created are no less good than those of their husbands, as they have convenient access to the colorful grassroots of life and rich experience of household life.

A. "Over-Egg the Pudding". This means to spoil something by trying too hard to improve it. To add too many eggs to a pudding, or to add more to an instant cake than is necessary, is to go too far, to be excessive; hence the current meaning of "to exaggerate". For example: As a director, I think he has a tendency to over-egg the pudding with a few too many gorgeous shots of the countryside.

B. An Egg-Head. This idiom comes from an ancient Greek story and refers to intellectual people, for in English "egg" is usually used with a positive meaning. In addition, people use "a good egg" to describe a good person, and "golden eggs" means great benefits.

2.1.6. Idioms Which Contain "Cake"

Western people like cooking and baking different desserts, which reflects not only their eating habits but also their high quality of life.

A. A Piece of Cake. It means the problem is easy to solve.

B. "Take the Cake" means to be the best or to be the first. For example: I have heard a lot of crazy stories, but this one takes the cake.

C. To have one's cake and eat it. This idiom means "to have the advantages of two things which contradict each other". The saying first appeared in the proverbs collected by John Heywood. The cake disappears after one eats it, and it is impossible to keep it and eat it at the same time. Similar sayings can be seen in many languages: in French, people say "you cannot have the cloth and keep the money", for fashionable cloth is a distinctive product of France, and in Italy people ask, "Do you want to eat your cake and still have it in your pocket?" These expressions also show that behind every idiom there are the different histories and cultures of different nations.

2.1.7. Idioms Which Contain "Cream" and "Cheese"

The development and advancement of animal husbandry in English-speaking countries have made milk products popular in these countries, and this too is reflected in idioms.

A. Cream of the Crop. It means the outstanding people or the best of everything. Cream rises to the top, and the top is associated with the best; cream is considered the best part of unhomogenized milk. For example: We are looking to hire only the cream of the crop.

B. A Big Cheese. It means an important person, but it is usually said sarcastically. Some cheese has a very noticeable smell, and a big cheese will be noticeable; the sarcasm comes from the fact that the smell is sometimes unpleasant. For example: If you want a raise in pay, you have to be nice to the big cheese.

2.1.8. Peanuts

In the United States you can buy lots of peanuts for a dollar, so each peanut is worth very little. That is why this idiom means a very small amount of money. For example: I am glad that you worked hard all summer selling lemonade and saved five hundred dollars, but to be honest, that is peanuts when it comes to paying for your college education.

2.1.9. Idioms Which Contain "Salt"

Salt was an expensive necessity in the past, and it is an indispensable seasoning in a meal. In the Middle Ages, in upper-class society, if the host invited guests to dinner at home, the salt cellar would be placed in the middle of the table and the honored guest asked to sit in front of it as a sign of respect; the others would sit along the sides of the long table.

2.1.10. Idioms Which Contain "Milk"

A. "Cry over spilled milk" means to be regretful about a fault you made whose result you cannot change. For example: I really feel regretful about it now, but there is no use crying over spilled milk.
B. "Milk and Water" means boring things and persons. It has the same meaning as "plain water" in the Chinese saying; the difference between the two languages is decided by the different living styles of the two peoples.

2.1.11. Idioms Which Contain "Fish"

From ancient times until now, people living in countries by the sea have depended on the fishing industry, and most of their living resources are fish, so there are many idioms containing "fish":

A. Every little fish would become a whale. This sentence is used to encourage people to become successful.
B. Cut no fish till you get them.
C. Never offer to teach fish to swim. This idiom has the same meaning as the Chinese idiom "ban men nong fu".
D. Who would catch fish must not mind getting wet.
E. A Big Fish in a Small Pond.
This saying also has a similar meaning in Chinese; the difference is that people are compared with different animals. For example, the English sentence "She was the kind that would rather be a big fish in a small pond" can be translated as "Ta shi na zhong ning wei ji tou bu zuo feng wei de ren".

2.1.12. Idioms Which Contain "Wine"

Western people can hardly have a dinner without wine; whisky, grape wine, and various beers are all favorite beverages, and this is reflected in idioms.

A. "There are lees to every wine." This sentence means nobody in the world is perfect.
B. Wine and wenches empty men's purses.
C. Wine is a turncoat, first a friend, then an enemy.
D. Wine is old men's milk.
E. Wine makes all kinds of creatures at table.
F. Wine and judgment mature with age.
G. Bacchus hath drowned more men than Neptune. In this sentence Bacchus is the Roman name for Dionysus, god of wine, son of Zeus and Semele, a daughter of Cadmus; Neptune is the name of the god of the sea in Roman mythology, so this sentence can be translated directly as "wine has drowned more men than the ocean".
H. In wine there is truth.

The Words of Diet Used in Chinese Idioms

Compared with the warm temperate maritime climate of Great Britain, the continental climate of China makes it an agricultural country. Because of the particular national conditions, the large population and the shortage of per-capita natural resources make the need for food more urgent. Grains are planted in abundance throughout the whole country; among them corn, wheat, and rice are the most popular. Although meat, especially pork, is Chinese people's favorite food, its high price could not be afforded by most people in the past. As a result, the kinds of dishes are abundant, and the shortage of food made the Chinese people use their ingenuity and invent many cooking methods, which have made Chinese dishes famous both at home and abroad.

Rice

The importance of rice in China can be compared to that of bread in England. It is the main course of almost every meal in all families, especially in the southern part of China. There are a lot of idioms containing the symbol of "rice", that is, "mi" in Chinese, such as:

A. Qiao fu nan wei wu mi zhi chui. It means that no matter how clever a housewife is, she cannot cook without materials.
B. Bu wei wu dou mi zhe yao. It is used to describe a person who never gives up his principles in the face of benefits.
C. Bai yang mi yang bai yang ren. It means everyone is different in mind, appearance, and the way they treat others.

Other vegetables

Besides rice, Chinese people also like eating vegetables, bean curd, lotus root, and sauces, so there are diet idioms such as:

A. Dao zi zui, dou fu xin. It means that although the person likes scolding others, he is actually a kind man. For example: Although his mouth is as sharp as a knife, his heart is as soft as bean curd.
B. Huang hua cai dou liang le. It means everything is too late. "Huang hua cai" is a vegetable common in Chinese dishes, but here it is a metaphor for somebody or something that is too late, or that has been delayed and can no longer be dealt with.
C. Luo bo bai cai, ge you suo ai. This sentence contains two common vegetables and is traditionally used of people who have different opinions and cannot agree with each other. It is usually translated as "every man has his hobbyhorse", and cannot be translated directly as "everyone loves his cabbage and radish".
D.
Ou duan si lian. In Chinese literary works, authors choose this idiom to describe a special relationship between two people, especially a man and a woman who still keep in contact with each other after they break up. "Ou" is a vegetable planted in the south of China; its characteristic is that when it is cut apart, fibers still connect the parts. In translation it cannot be rendered directly as "lotus root", because the image is not familiar to Western people.

Completely Similar Structures

A. Idioms Which Have the Same Structure but Cannot Be Translated Directly

a. "Kill the goose that lays the golden eggs". At first sight of this idiom, if you are familiar with Chinese idioms, it is easy to find the sentence with the same structure in Chinese, "sha ji qu luan". Beyond this, considering the word-use habits of Western people makes the phrase easier to understand, because the word "goose" in English often plays the same role as "chicken" in Chinese.
b. "A piece of cake" can easily be translated as "xiao cai yi die" if you know this Chinese idiom and the Chinese culture of dishes and diet.
c. "As a man sows, so shall he reap". This idiom can be translated as "zhong gua de gua, zhong dou de dou". Both English and Chinese have similar expressions, so it is easy to translate this kind of idiom.
d. The idiom "to have one's cake and eat it" can be translated as "yu yu xiong zhang liang zhe jian de", for in Chinese there is the saying "yu yu xiong zhang bu ke jian de": in such a situation you must choose one and give up the other. The two have the same meaning.

Most of the time, however, because of the great differences between English and Chinese culture, we cannot always find idioms with both similar structures and similar meanings. For example, from the structure of the idiom "one cannot make a silk purse out of a sow's ear", people will at once think of an idiom with the same structure in Chinese, "qiao fu nan wei wu mi zhi chui", but the following sentence reveals a problem: What is the use of a scholarship to that boy? He will never be a gentleman; you might as well try to make a silk purse out of a sow's ear. In this sentence, the meaning of the idiom is "you cannot make something good out of what is by nature bad or inferior in quality". Apparently "sow's ear" refers to bad material and "silk purse" to good things, so the Oxford Dictionary translates this idiom as "huai cai liao zao bu chu hao dong xi", meaning one cannot make products of good quality from bad materials. On this point it is closer in meaning to the Chinese saying "xiu mu bu ke diao".

B. Idioms Which Have the Same Structure and the Same Figurative Meaning

This kind of idiom makes up a small part of both languages. They have the same structures and the same figurative meaning, so translating them directly not only lets readers understand but also preserves the original style of the material. For example, "tang yi pao dan" can be translated directly as "sugar-coated bullets", and "sour grapes" can be translated directly as "suan pu tao". The particular reason is that, through communication between West and East, some new words have been introduced from one country to another and, as time passes, have become part of the local language.
Idioms Which Do Not Have Similar Structures

Besides the idioms above, there are also idioms in English that have no corresponding sayings among Chinese idioms, so we should translate them indirectly using other words we usually use to express such ideas. For example, "above the salt": from the literal meaning no one knows its figurative meaning, and we must know that the idiom comes from an old story related to the eating customs of the English in the Middle Ages and the habit of placing honored guests in front of the salt cellar, so the meaning is to be in a position of honor; when we translate the idiom we can say "bei zun wei shang bin".

Another example is "spill the beans", which means to reveal or make known a secret or a piece of information. Translating only the literal meaning would confuse people and cause trouble, so we can refer to the source of the idiom, which comes from an ancient Greek story about their voting system; as a result it can be translated as "to make known a secret".

Translation Methods from Chinese to English

3.2.1. Find the same structure in English, and change any improper word to the one English speakers are used to. For example, "ning wei ji tou, wu wei niu hou" is a popular Chinese idiom, but in English "chicken" is often used to describe a coward, whereas in Chinese "ji tou" has a positive meaning and in English "dog" can carry a positive meaning, so this idiom should be translated as "better be the head of a dog than the tail of a lion". The same applies to the idiom "gua yang tou mai gou rou", which is usually translated as "cry up wine and sell vinegar" or "offer chaff for grain".

3.2.2. When the literal meaning is not used as frequently as the figurative meaning, the figurative meaning should be added after the literal meaning in translation. For example, "jiang hai shi lao de la" can be translated as "the old ginger is spicier", followed by "older people are more experienced".

Conclusion

As an important part of language, idioms carry the heavy responsibility of spreading culture and advancing the civilization of a nation or a community. Each idiom contains a small part of the customs of the local people. This article takes idioms containing words of diet as an entry point for introducing a large number of idioms in both English and Chinese, to show the different cultures behind them and the different values they reflect.
These differences are determined by historical background, different living habits and customs, even the geographical environment and particular climates, and they are reflected in every aspect of people's lives; language is certainly no exception. Idioms are largely shaped by working people at work, and in order to understand them we should have enough knowledge of their daily lives, their history, and their customs. Another source is influence from other countries, including the introduction of costumes, festivals, foods, words, and language. Influence also comes from different religions: Christianity is the major religion in England, and many idioms come from the Bible. Beyond this, idioms are also influenced by ancient mythologies, such as Greek mythology and Aesop's Fables. Chinese idioms are the same: some expressions come from daily life, and some come from fairy tales and the long history. As the need for communication between the countries of the world becomes more and more urgent, the exchange of culture and knowledge is also an important mission.

Translation is a proper way of spreading culture to other countries, and the translation of idioms is no doubt a good way to put this idea into practice. For translation, enough knowledge of history, geography, and custom is necessary; what is more important is skill in translation. We should not only express the ideas exactly to both sides but also master methods of translation, making translation a kind of art. Besides carrying forward the fine traditions of earlier translators in this field, such as the established principles of translation (keeping the style of the original material, expressing the original idea of the author of the original text, and keeping to the facts), later translators should make more effort to summarize their own translation skills from their practice and experience. Only by sticking to the translation principles and by deep study and research into culture can idioms become a more effective medium in cross-cultural communication.
6,007.4
2010-02-03T00:00:00.000
[ "Linguistics" ]
A Testable Theory for the Emergence of the Classical World

The transition from the quantum to the classical world is not yet understood. Here, we take a new approach. Central to this is the understanding that measurement and actualization cannot occur except on some specific basis. However, we have no established theory for the emergence of a specific basis. Our framework entails the following: (i) Sets of N entangled quantum variables can mutually actualize one another. (ii) Such actualization must occur in only one of the 2N possible bases. (iii) Mutual actualization progressively breaks symmetry among the 2N bases. (iv) An emerging "amplitude" for any basis can be amplified by further measurements in that basis, and it can decay between measurements. (v) The emergence of any basis is driven by mutual measurements among the N variables and decoherence with the environment. Quantum Zeno interactions among the N variables mediate the mutual measurements. (vi) As the number of variables, N, increases, the number of Quantum Zeno mediated measurements among the N variables increases. We note that decoherence alone does not yield a specific basis. (vii) Quantum ordered, quantum critical, and quantum chaotic peptides that decohere at nanosecond versus femtosecond time scales can be used as test objects. (viii) By varying the number of amino acids, N, and the use of quantum ordered, critical, or chaotic peptides, the ratio of decoherence to Quantum Zeno effects can be tuned. This enables new means to probe the emergence of one among a set of initially entangled bases via weak measurements after preparing the system in a mixed basis condition. (ix) Use of the three stable isotopes of carbon, oxygen, and nitrogen and the five stable isotopes of sulfur allows any ten atoms in the test protein to be discriminably labeled, and the basis of emergence for those labeled atoms can be detected by weak measurements. We present an initial mathematical framework for this theory, and we propose experiments.

Introduction

"Is the moon there when we are not looking?" was Einstein's quip. We wish to approach the emergence of a classical world from the barest foundations of quantum theory: N entangled variables, with not even a basis chosen for measurement. Here, we take measurement to be real and to constitute an 'actualization' of the quantum state, yielding Boolean true-false variables. We have at our disposal, in the interactions among these N variables, decoherence, recoherence, the Quantum Zeno Effect, and actualization. The "Dud Bomb" work [1] supports our new proposal that actualization interactions among the N variables can occur. We propose further that this enables the variables to collectively "look at" and mutually actualize one another. We call such a set of coupled variables a Collectively Actualizing Set, CAS. In such a CAS, the frequency with which each variable is measured increases with the number of variables, most simply in a linear way. There is a parallel to a similar idea about the origin of life via 'Collectively Autocatalytic Sets', seen as molecular fossils in ancient prokaryotes [2]. Interactions between two or more emerging bases can also be studied using systems prepared in an initial superposition of those two or more bases, as further explained below.
Using the three stable isotopes of carbon, oxygen, and nitrogen and the five stable isotopes of sulfur, any ten detectably isotope-labeled atoms can be placed at any location in such quantum ordered, critical, or chaotic peptides, allowing a precise analysis of the emergence of bases and of the classical world. These ideas, their initial mathematical formulation, and the experimental approaches outlined here constitute our "testable theory for the emergence of the classical world".

The paper is organized as follows: Section 2 elaborates the conceptual framework, Section 3 provides the experimental setup for testing our central questions, and Section 4 concludes with further discussion.

Materials and Methods

We suggest here verifiable experiments and establish the salient features as follows. We start by classifying the environment as a heterogeneous ensemble of mutually actualizing sets composed of dissipative quantum systems. Hence, one such mutually actualizing set can be perceived as a system to be 'measured' by the rest of the ensemble. Here, by measurement we mean interaction, rather than any special metaphysical position granted to an 'observer'. We retain the essence of relational quantum mechanics by suggesting that any variable within a mutually actualizing set can act as an observer in relation to the others. Hence, all states generated in such interactions are relative states, and such interactions are 'relative facts'. Environmental decoherence can lead to 'stable facts'; such facts need not be labelled against any particular system.

Generally, as the extensive literature [15,16] suggests, the environment 'measures' a quantum dissipative system, and two scenarios may emerge. One is standard decoherence where, via einselection, only stable states (stable facts here) survive and emerge as 'classical' observables in the preferred bases: the pointer bases of position and momentum. Such a process, generated by entanglement of the system with environmental degrees of freedom, can be approximated as anti-QZE. In the other case, there can be continuous projective measurement of the system by the environment, and in the limit the survival probability of the system's initial state (say a pure state) tends to unity, which is the QZE. It is the intermediate case that is of interest to us here.

Our framework differs in a fundamental and new way: we suggest that the mutually actualizing sets themselves generate many-body QZE events, each of which is an actualization in some single basis, while the CAS simultaneously decoheres due to coupling with the environment, where such couplings can be non-uniform across sets/systems. The important point is that in the target collectively/mutually actualizing set of subsystems no preferred basis has yet been chosen while the many-body QZE happens, whereas, to a first approximation, the rest of the environment has a definite basis. Hence, decoherence driven by environmental coupling might generate a pointer basis for the target system.

The total Hamiltonian is expressed as H = H(S) + H(E) + H(Int), where H is the total Hamiltonian, H(S) is the system Hamiltonian, H(E) is the Hamiltonian of the environment, and H(Int) is the interaction Hamiltonian; S denotes the system, E the environment, and Int the coupling. For states generated via mutual actualization (many-body QZE) to emerge as stable facts/pointer states, such states in general need to be eigenstates of H.
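A minimal numerical sketch of this decomposition for a toy model of one system qubit coupled to one environment qubit is given below, together with the reduced system density matrix obtained by tracing out the environment, as described next; all couplings and the initial state are arbitrary illustrative choices, not parameters of the proposed peptide experiments.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and identity.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Toy decomposition H = H_S + H_E + H_Int (arbitrary illustrative couplings).
H_S = 1.0 * np.kron(sz, I2)            # system qubit
H_E = 0.7 * np.kron(I2, sz)            # environment qubit
H_Int = 0.3 * np.kron(sx, sx)          # system-environment coupling
H = H_S + H_E + H_Int

# Initial product state: system in (|0> + |1>)/sqrt(2), environment in |0>.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
zero = np.array([1, 0], dtype=complex)
psi0 = np.kron(plus, zero)

# Unitary evolution, then the reduced density matrix of the system (partial trace over E).
t = 2.0
psi_t = expm(-1j * H * t) @ psi0
rho = np.outer(psi_t, psi_t.conj()).reshape(2, 2, 2, 2)
rho_S = np.trace(rho, axis1=1, axis2=3)   # trace over the environment indices
print("reduced system density matrix:\n", np.round(rho_S, 3))
```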
Here, however, we have three regimes: first, where H(S) is the dominant part; second, where H(Int) is the dominant part; and third, the intermediate regime. We may simplify the scenario by assuming that the system's state and the environment's state are initially in a tensor product state, on which the evolution acts. Hence, we may calculate the reduced density matrix of the system by tracing the combined state over the environmental degrees of freedom. Such a reduced density matrix then follows the von Neumann equation [17].

Many Body QZE

Interest in many-body QZE is rather recent [18]; in that literature, many-body QZE generating entanglement phase shifts has been studied. Our many-body QZE, based on collectively/mutually actualizing sets of systems, differs in some points from the extant framework. We present the salient features of the mechanism below: • Continuous (in the limit) projective/POVM measurements acting simultaneously on the many-body system (for example, on all lattice points) in effect localize the evolution of the composite state. However, in our case the frequency with which a given subsystem is measured is proportional to the number of subsystems in the collectively actualizing set, so as that number goes up any subsystem is measured proportionally more frequently. • In the extant literature, there is continuous (in the limit) projective or POVM measurement on the entangled many-body dynamics. Such a measurement process would generate 'entanglement' phase shifts, which might be expressed through suitable Hamiltonians of the system, for example that of the standard Ising model or of a spin glass model [19]. • In our framework, the patterns of entanglement among the same set of variables within a collectively actualizing set change due to successive actualizations. • POVM measurements, rather than projective measurements, are weak; however, such measurements can also create stochastic 'back action' on the mutually actualizing set, also generating entanglement phase shifts. • Such phase shifts are observed in single quantum many-body states. • Hence, we preserve the overall competition between unitary many-body dynamics and the stochastic measurements of single members of an entangled subset of the CAS. • Authors have observed [20,21] that such processes involve degrees of freedom that do not commute with each other, for example different components (directions) of spin. Starting with a simple one-dimensional Ising model, such many-body QZE can produce a frozen/localized many-body state when the coupling is above a critical threshold strength. As that threshold is approached, behaviour resembling quantum criticality emerges. • Explicitly, a quantum many-body state |ϕ(t)> evolves under a Hamiltonian built from the Pauli matrices σ_i^α acting on the i-th lattice site, where α runs over x, y, z. • M is the measurement operator, representing the simultaneous measurement carried out over all sites, with a probability p per site. Such a measurement can be represented by a Kraus operator. • M is the tensor product over the M_i, where the M_i are composed of one-dimensional projectors on the sites. Using this Kraus operator does not amount to sequential measurements, since it represents a simultaneous measurement on all sites of the lattice. We might begin using such a picture as an approximation. Later, we need sequential measurements of variables in any one of the entangled subsets of variables in the CAS. The probability that M acts on site i is proportional to n (a minimal numerical sketch of this competition between unitary dynamics and per-site measurement is given below).
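To make the bulleted mechanism concrete, the following is a minimal sketch, not the paper's model: a 4-site transverse-field Ising chain whose unitary evolution competes with site-wise projective measurements applied with probability p per site, a crude stand-in for the "many-body QZE" channel described above. The Hamiltonian choice, the couplings J and h, the time step, the probability p, and the initial product state are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Toy competition between unitary Ising dynamics and per-site projective
# measurements (the "many-body QZE" channel).  All parameters are illustrative.
N, J, h, dt, p = 4, 1.0, 0.5, 0.05, 0.3
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, i):
    """Embed a single-site operator at site i of the N-site chain."""
    ops = [I2] * N
    ops[i] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# H = -J sum_i sz_i sz_{i+1} - h sum_i sx_i  (open boundary conditions)
H = sum(-J * site_op(sz, i) @ site_op(sz, i + 1) for i in range(N - 1))
H += sum(-h * site_op(sx, i) for i in range(N))
U = expm(-1j * H * dt)

# Start from the product state |00...0> and alternate evolution and measurement.
psi = np.zeros(2 ** N, dtype=complex)
psi[0] = 1.0
rng = np.random.default_rng(0)
proj = {0: np.array([[1, 0], [0, 0]], dtype=complex),
        1: np.array([[0, 0], [0, 1]], dtype=complex)}

for step in range(200):
    psi = U @ psi
    for i in range(N):                      # per-site measurement with probability p
        if rng.random() < p:
            P0 = site_op(proj[0], i)
            prob0 = np.real(psi.conj() @ P0 @ psi)
            outcome = 0 if rng.random() < prob0 else 1
            psi = site_op(proj[outcome], i) @ psi
            psi /= np.linalg.norm(psi)

# Survival probability of the initial state after the monitored evolution;
# raising p pushes it toward 1 (Zeno freezing), lowering p lets it decay.
print("survival probability:", abs(psi[0]) ** 2)
```

Raising p (or, in the CAS picture, raising the number of mutually actualizing variables) pushes the survival probability toward unity, which is the freezing behaviour referred to in the bullets.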
This again captures our assumption that the frequency with which a subsystem becomes actualized is proportional to the number of surrounding subsystems.

The trade-off between QZE and decoherence: the interaction Hamiltonian plays a central role in both the decoherence framework and the stochastic quantum hydrodynamics framework, which we describe later. However, since we are interested in how mutual actualization (approximated by many-body QZE) helps in the emergence of a specific basis while decoherence is present in the background environment, an intermediate regime in which neither QZE nor decoherence dominates the other is of central importance to the framework. By definition, QZE (which, as we describe later, can generally be thought of as a sequence of weak measurements) would freeze unitary evolution, whereas decoherence is an approximate process of entanglement of the system (here, for example, the collectively actualizing set of bodies) with environmental degrees of freedom that generates 'improper mixed states'.

A Prospective Maximum Amplitude for Each Basis as a Function of the Rate of QZE

This is new physics. We do not know whether, and if so how, amplitudes for the different possible bases may emerge, how repeated actualizations in any single basis may affect the emergence of that basis, how these different emerging bases may interact, or whether, between actualization events, an emerging basis decays, at what rate, and how decoherence by coupling to an environment alters all this. Our fundamental proposal envisions a Collectively Actualizing Set, CAS, in which the N variables episodically interact with, and actualize, one another. Each actualization is in some single basis. We propose that when one member of an entangled subset of the N actualizes in some basis, the 'commitment to' that basis, or 'amplitude for' that basis, increases in a stepwise fashion by a fixed increment for all the entangled variables in that subset. Importantly, during temporal intervals in which no actualization events occur among the subset of the N entangled variables, we propose that the commitment to, or amplitude for, that basis decays at some fixed exponential rate for the entire subset of the N entangled variables. These assumptions imply a relation between the frequency of measurement and the maximum amplitude that can be attained for any basis. As we have assumed an exponential rate of decay of the amplitude for any basis, and a fixed increase in that amplitude for each actualization event in that basis, an equilibrium maximum amplitude must exist for each rate of measurement, F, at which the exponential rate of loss of amplitude for that basis equals the mean rate of increase of amplitude for that basis by actualization events in that basis (a toy simulation of this balance is sketched at the end of this subsection). As we assume that the frequency of episodic interaction-actualization events increases linearly as the number of variables, N, increases, the maximum amplitude attainable for any basis should increase linearly with the number of atoms in the system, hence with the number of amino acids in the test peptides. If we posit that temperature (classical noise) erases any accrued amplitude for any basis, then increasing temperature, with the other variables fixed, should inhibit the emergence of any single basis and of the classical world. Given the collective dynamics of our system, the effect of increasing temperature may appear as a phase transition. Our experiments below should be able to prove or disprove these qualitative predictions.
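The following toy simulation, a sketch rather than the paper's formal model, shows the balance just described: actualization events arrive as a Poisson process at a rate F assumed linear in N, each event adds a fixed increment delta to the amplitude for one basis, and the amplitude decays exponentially at rate lam between events. All numerical values are hypothetical.

```python
import numpy as np

# Toy model of the proposed "amplitude for a basis": each actualization adds a
# fixed increment delta, and between events the amplitude decays at rate lam.
# F is the mean actualization rate, assumed proportional to N.  All values are
# illustrative assumptions.
rng = np.random.default_rng(1)

def steady_amplitude(F, delta=0.05, lam=1.0, T=200.0):
    """Simulate Poisson actualization events and return the late-time amplitude."""
    t, A = 0.0, 0.0
    while t < T:
        wait = rng.exponential(1.0 / F)      # time to the next actualization
        A *= np.exp(-lam * wait)             # decay between events
        A += delta                           # stepwise gain at the event
        t += wait
    return A

for N in (5, 10, 20, 40):
    F = 2.0 * N                              # rate assumed linear in N
    sim = np.mean([steady_amplitude(F) for _ in range(50)])
    print(f"N={N:3d}  simulated A ≈ {sim:.3f}   gain/loss balance F*delta/lam = {F*0.05:.3f}")
```

The printed balance estimate F·delta/lam is simply the amplitude at which the mean gain rate equals the exponential loss rate, and it grows linearly with N, which is the qualitative prediction stated above.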
We hope to study the emergence of a bias for one among a large set of bases as a function of the ratio of decoherence to the Quantum Zeno effect. We propose to measure the ratio of decoherence to QZE, approximating it by a relative entropy measurement, for example the relative von Neumann entropy. Here, we start with a mixed state of one such CAS. When certain individual particles are actualized, or become disentangled, the state of the CAS changes, and hence so does the entanglement entropy: the von Neumann entropy can be used to measure the change in the degree of entanglement brought about by the QZE. The QZE/decoherence ratio would then be correlated with the oscillation of the von Neumann entropy. The maximum divergence occurs for NE(pure state || maximally mixed state), where the maximally mixed state is 1/d and d = dim H; hence, NE(pure state || 1/d) = ln d.

Stochastic Quantum Hydrodynamics Approach (SQHM)

Chiarelli and Chiarelli [22], while exploring the literature on quantum hydrodynamics, observed that quantum hydrodynamic equations can play a critical role in describing the emergence of the classical world from the underlying quantum domain. The main problem they tackle, however, is the fact that the dependence of a system's dynamics on its mass creates disconnections in a smooth passage from quantum superposition to the classical domain. The authors also observe that many different interpretations of QM (or even theories different from standard QM, for example Bohmian mechanics or the various dynamical collapse models, which contain parameters beyond those of QM) have been used to reach a solution. However, our framework is not directly related to the mass density problem. Now, some insights from Madelung's version of quantum hydrodynamics help us in explaining our proposed experiments. In Madelung's framework, the wave function is expressed as ϕ = |ϕ| e^(2πiS/h), which is equivalent to the dynamics of a mass density |ϕ|^2 with momentum p_i = ∂S/∂q_i. One important observation about Madelung's framework (where the dynamics converges with that of Schrödinger's equation) is that when Planck's constant is set to 0, i.e., when we take the classical limit, we recover classical dynamics. Hence, such a framework can be used to describe the transition from the quantum to the classical world. Another natural outcome of the framework is non-locality due to the presence of the quantum potential, which also implies the existence of trajectories and of a physical reality independent of measurement. Now, this feature may not be compatible with the collective actualization process we are suggesting here, since a relational ontological view is implicit in our framework, which would imply relative and stable facts. However, for the existence of both, we need either relative states between systems or decoherence-led emergence. Hence, some insights we may draw from Madelung-based stochastic quantum hydrodynamics (SQHM) are as below: 1. In SQHM [Appendix], the range of the non-local quantum potential depends on the strength of the interaction Hamiltonian, which shows that sufficiently weakly interacting systems can generate classical behavior in the macroscopic classical limit. In our framework, we also have the central importance of weak measurements (approximated by weakly interacting subsystems in a collectively actualizing set) and an interaction Hamiltonian; however, we do not have quantum non-local potentials.
Hence, the process of emergence differs between our framework and SQHM. 2. SQHM and decoherence are compatible with each other: while SQHM provides limits on when a global macroscopic dissipative quantum system acquires classical behavior, decoherence provides an approximate process for the emergence of 'improper mixed' states, which mimic classical reality even when the underlying global system is quantum. The central assumption in the case of decoherence is that the recurrence time is absurdly large. Hence, as also emphasized in that paper, decoherence is not a solution to the measurement or collapse problem, unlike dynamical collapse theories with non-linear Schrödinger equations. Since SQHM is a special case of Bohmian mechanics, we have included a technical note on it in the appendix.

Experimental Approaches

Detecting the emergence of a single basis from a larger set of available bases requires an experimental means to initiate a system with more than a single basis available, of which none has as yet been "chosen". A means to accomplish this is to make use of the capacity to prepare a system in an initial state with two superposed bases. This can be done for the following five pairs: position-spin, position-polarization, spin-polarization, momentum-spin, and momentum-polarization. We wish to test experimentally a possible relation between the number of variables, N, in a proposed Collectively Actualizing Set, CAS, and the ratio of decoherence to Quantum Zeno effects as the N variables actualize one another, upon the emergence of one or the other of two initially superposed bases, for example 'position-spin' or 'spin-polarization', by a symmetry breaking between these two possible bases. We have at our disposal the experimental creation of linear and ring peptides with a tunable number of amino acids per peptide, and the choice of which of the standard 20 amino acids occurs at each site in the peptide polymer. Furthermore, by using the three stable isotopes each of carbon, oxygen, and nitrogen as well as the five stable isotopes of sulfur, we can uniquely isotope label any ten atoms at any ten chosen positions in a peptide. Hence, by timed weak measurements, as described in detail below, we can assess the onset of a bias towards either of the two superposed bases, the time course of that emergence, and the stability of that emerging bias toward one of the two bases under a perturbing weak measurement of the other basis. Our proposed experiments seem to probe entirely new physics. From the hoped-for data, it may be possible to construct a clean theory of collective symmetry breaking among any pair of bases or the entire set of 2^N bases of such a quantum system. We propose using quantum ordered, critical, and chaotic peptides as test objects. It is relatively straightforward to test, computationally or experimentally, whether a given peptide is ordered, critical, or chaotic. The distribution of energy differences between adjacent absorption bands falls off exponentially for ordered peptides. If a given peptide is chaotic, the distribution is a broad single peak given by random matrix theory. The distribution for critical peptides lies between these two distributions [13,14] (a small numerical illustration of this diagnostic is given below). Studying the time course of emergence of one among the two superposed bases requires a single weak measurement to assess the amplitude for a given basis at different time intervals after time 0.
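The following is a sketch of the ordered-versus-chaotic diagnostic just mentioned, using generic random spectra instead of actual peptide Hamiltonians: nearest-neighbour level spacings of an "ordered" (integrable-like) spectrum follow a Poisson (exponential) law, while a chaotic spectrum modelled by a GOE random matrix follows the Wigner-Dyson law with level repulsion. The matrix size and the discriminator threshold are illustrative assumptions.

```python
import numpy as np

# Level-spacing statistics: Poisson-like for "ordered" spectra, Wigner-Dyson
# (level repulsion) for "chaotic" spectra.  Random matrices stand in for the
# real peptide spectra; sizes and thresholds are illustrative.
rng = np.random.default_rng(2)
n = 1000

def unfolded_spacings(levels):
    levels = np.sort(levels)
    s = np.diff(levels)
    return s / s.mean()                      # unfold to unit mean spacing

# "Ordered" spectrum: independent uniform levels -> Poisson spacings.
poisson_s = unfolded_spacings(rng.uniform(0.0, 1.0, n))

# "Chaotic" spectrum: eigenvalues of a GOE random matrix -> Wigner-Dyson spacings.
A = rng.normal(size=(n, n))
goe = (A + A.T) / 2.0
goe_s = unfolded_spacings(np.linalg.eigvalsh(goe)[n // 4: 3 * n // 4])  # spectral bulk only

# Simple discriminator: the fraction of very small spacings, which level
# repulsion suppresses in the chaotic case.
for name, s in (("ordered/Poisson", poisson_s), ("chaotic/GOE", goe_s)):
    print(f"{name:16s}  P(s < 0.1) ≈ {np.mean(s < 0.1):.3f}   mean spacing = {s.mean():.2f}")
```

A critical spectrum would, per the text, sit between the two printed small-spacing fractions.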
Studying how one emerging basis behaves when the other basis is perturbed by a weak measurement requires only one additional weak measurement. That is, if we have already established the time course of the emergence of either of the two bases alone by single measurements at increasing times after time T_0, we can, in a separate experiment, test the effect of a weak measurement of one basis on the emergence of the other basis. The two bases may emerge independently of one another, or the emergence of one basis may inhibit or enhance the emergence of the other. If we can conduct two measurements during the available time, we can test the effect of perturbing one emerging basis on the emergence of the other within a single experimental system. Thus, the slower time scale of decoherence in ordered and critical peptides may better allow study of the temporal emergence of basis choice and of the interactions between the emerging bases. We stress that our mathematical framework, extended to non-synchronous projective measurements among the N variables and to weak measurements, is needed for more precise predictions.

More Experimental Details

The basic experimental approach is to create libraries of ordered, critical, and chaotic ring or linear peptides with a tunable number of amino acids, from 2 to perhaps 100 amino acids or more. Ring peptides are more confined in their structure than linear peptides; however, both have more flexible degrees of freedom than crystals such as C_60 Buckyballs. The flexibility will increase with the number of amino acids in the ring or linear peptide. As noted, a single amino acid has, on average, 10 atoms. We can hope to study systems ranging from single amino acids, about 10 atoms, which is fewer than the C_60 Buckyballs that still show interference, to polypeptides of 100 amino acids and roughly 1000 atoms. Presumably "classicality" increases as the number of atoms in the system increases. Decoherence in peptides and proteins as a function of time can be assessed by known techniques, such as those used in studying light harvesting molecules [23][24][25]. It is not, at present, clear how to directly study the intensity of Quantum Zeno effects within a peptide; however, it is reasonable to propose that, if the atoms in a peptide form a collectively actualizing set, QZE interactions will increase monotonically with the number of atoms in the system. This should be directly testable. Some measures we can think of are based on the literature of quantum thermodynamics, as follows. On one hand, through QZE on the N entangled elements (an approximation of the collective actualization process), we have the emergence of certain variables in some one basis among the 2^N bases. On the other hand, we have the emergence of the preferred pointer bases, position and momentum, via decoherence. Since we can tune the ratio of QZE to decoherence by using ordered, critical, and chaotic peptides, we can study the effect of tuning this ratio on the trade-off between the emergence of a pointer basis via decoherence alone and the possible emergence of other bases via the QZE and our proposed symmetry breaking. In short, we identified some ratios earlier for the relative intensity of many-body QZE to decoherence; hence, in the intermediate regime, when neither of the two processes significantly dominates the other, there are dynamics that can be studied experimentally (a small numerical illustration of the entropy bookkeeping follows below).
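As a minimal sketch of the entropy bookkeeping referred to above (not a model of any specific peptide), the code below computes the von Neumann entropy of a subsystem and the relative entropy of a pure state to the maximally mixed state 1/d, checking the ln d bound quoted earlier. The two-qubit Bell state and dimension d = 4 are arbitrary illustrations.

```python
import numpy as np
from scipy.linalg import logm

# Von Neumann entropy S(rho) = -Tr(rho ln rho) and relative entropy
# S(rho || sigma) = Tr(rho ln rho) - Tr(rho ln sigma); the latter is bounded by
# ln d when sigma is the maximally mixed state 1/d, saturated for pure rho.
def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

def relative_entropy(rho, sigma):
    reg = rho + 1e-12 * np.eye(len(rho))          # regularise the zero eigenvalues
    return float(np.real(np.trace(rho @ (logm(reg) - logm(sigma)))))

d = 4
mixed = np.eye(d) / d                             # maximally mixed state 1/d

# Pure entangled two-qubit state (|00> + |11>)/sqrt(2).
psi = np.zeros(d); psi[0] = psi[3] = 1 / np.sqrt(2)
rho_pure = np.outer(psi, psi)

print("S(rho_pure)        =", von_neumann_entropy(rho_pure))      # ~0
print("S(rho_pure || 1/d) =", relative_entropy(rho_pure, mixed))  # ~ln 4
print("ln d               =", np.log(d))

# Reduced state of qubit A (partial trace over B); for the Bell state it is
# maximally mixed, and its entropy ln 2 is the entanglement destroyed if A is actualized.
rho_A = np.array([[rho_pure[0, 0] + rho_pure[1, 1], rho_pure[0, 2] + rho_pure[1, 3]],
                  [rho_pure[2, 0] + rho_pure[3, 1], rho_pure[2, 2] + rho_pure[3, 3]]])
print("S(rho_A)           =", von_neumann_entropy(rho_A))          # ~ln 2
```

Tracking how S(rho_A) oscillates as members of the CAS are actualized is the kind of quantity proposed above as a proxy for the QZE/decoherence ratio.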
Generally, if two states |ϕ_1> and |ϕ_2> are perfectly distinguishable, i.e., orthogonal, then |<ϕ_1|ϕ_2>|^2 = 0, and the overlap is 1 when the states are perfectly indistinguishable. Hence, we can measure [20], first of all, the average time taken for one state (say, in our case, a state initially formed by QZE or collective actualization) to transform into another, in general non-orthogonal, state before decoherence takes over. This can be expressed as the time average of 1 − |<ϕ_t|ϕ_t'>|^2, where t and t' indicate that the two states emerge at different moments. Second, in the same vein, we can measure the amount of state evolution that has happened in any time interval; generally, we have the measure 1 − (1/T^2) ∫∫ |<ϕ_t|ϕ_t'>|^2 dt dt', where both integrals run from 0 to T. More on time measurements in our experiments: if we prepare systems in inter- or intra-entangled states, then we need to be able to define time 0 and the interval between it and the end of the decoherence process (on the order of femtoseconds or nanoseconds), since this should be the time interval in which a specific basis emerges through the process of collective actualization. Then decoherence takes over, which might allow only the so-called pointer bases to survive. Here, we can use the formulas defined above to measure the amount of quantum evolution within that time scale. We can take some reference values from the experiments on light harvesting molecules, where the time scale for decoherence is on the order of femtoseconds. Recent studies observe that QZE can be achieved by a series of arbitrarily weak measurements (for example, [26]); hence, in the collective actualization process that is approximated by the many-body QZE, we may assume a series of WMs happening between the bodies. Non-equilibrium quantum thermodynamics can also be cited as another area for such applications [27]. In the standard measurement scenario, we start with |Ψ> as the pure state of the system before measurement. For convenience, we assume the state can be expanded in the computational basis. We can only calculate the probabilities of finding a definite value m, say prob(m), which is an eigenvalue. In general, we can start with measurement operators M_m satisfying the completeness relation ∑_m M_m^D M_m = I, where D is the symbol we use for the Hermitian conjugate operation. Born's rule gives the probability of obtaining a specific eigenvalue, p(m) = <Ψ|M_m^D M_m|Ψ> for the m-th eigenvalue. If the M_m are also projection operators, then M_m^D M_m = M_m, and we obtain a projective measurement. The state of the system after measurement will, in general, be M_m|Ψ> (up to normalization); call it |Ψ_m>. Say we start with |Ψ> = (1/√2)(|0> + |1>), where |0> and |1> are the eigenstates of the Pauli matrix in the Z direction, S(z) = ((1, 0), (0, −1)). A simple projective measurement on the state would be either via M = |0><0|, a projection onto |0>, or via M = |1><1|, a projection onto |1>. We can also describe the outcome probabilities using a POVM, with the slightly different completeness criterion ∑_m E_m = I, where p(m) = <Ψ|E_m|Ψ>. We propose using weak measurements (WM) [26-28] to assess the emergence of a basis. This requires creating the initial state at T = 0 in a chosen mixed basis, then assessing the possibly increasing amplitude for one of the bases over the subsequent decoherence process. In this framework, all systems are quantum, and thus, generally, interactions are between quantum systems (in concordance with RQM).
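Before turning to the weak-measurement protocol, the following is a minimal sketch of the projective/POVM bookkeeping just described, using the same one-qubit state (|0> + |1>)/√2; the specific states chosen for the overlap measure at the end are arbitrary illustrations.

```python
import numpy as np

# Born probabilities p(m) = <Psi| M_m† M_m |Psi>, post-measurement states
# M_m|Psi>/sqrt(p(m)), and the completeness relation sum_m M_m† M_m = I.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
psi = (ket0 + ket1) / np.sqrt(2)              # |Psi> = (|0> + |1>)/sqrt(2)

M = {0: np.outer(ket0, ket0.conj()),           # projector |0><0|
     1: np.outer(ket1, ket1.conj())}           # projector |1><1|

# Completeness: projectors satisfy M† M = M, and they sum to the identity.
assert np.allclose(sum(m.conj().T @ m for m in M.values()), np.eye(2))

for outcome, Mm in M.items():
    p = np.real(psi.conj() @ Mm.conj().T @ Mm @ psi)   # Born rule
    post = Mm @ psi / np.sqrt(p)                       # renormalised post-measurement state
    print(f"outcome {outcome}: p = {p:.2f}, post-measurement state = {np.round(post, 3)}")

# The same overlap machinery gives the distinguishability measure used above,
# 1 - |<phi_t|phi_t'>|^2, for two states emerging at different moments.
phi_t, phi_tp = psi, ket0
print("1 - |<phi_t|phi_t'>|^2 =", 1 - abs(np.vdot(phi_t, phi_tp)) ** 2)
```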
In the first step, we set up a weak coupling between the target quantum system and the quantum measuring device. (Here, we recall that the collectively actualizing set of particles is a peptide, so any one of the atoms, including the isotope-labelled atoms, in the set can act as an observer.) In the second step, we measure the quantum measuring device strongly. The outcome of the projective measurement on the quantum measuring device is the weak measurement (WM) outcome. This weak measurement outcome assesses the amplitude for the basis used in the weak measurement. By this means, we can assess whether the amplitude for any basis increases over the interval from the moment, T = 0, of preparation in the chosen mixed basis, position-spin, position-polarization, or spin-polarization. Using the above combination of weak coupling followed by strong measurement in a chosen basis after T_0, it is possible to assess: (i) the emergence of an amplitude for a given basis; (ii) the consequences, for the emergence of an amplitude for one basis, of a strong measurement of the other mixed basis. Strong measurement of the second basis may have no effect on the emergence of the first basis, or may enhance or inhibit that emergence. We note again that the emergence of a basis can be assessed at any one of the 10 isotope-labelled atoms located arbitrarily in our test peptide. A necessary mathematical condition for a measurement to be weak is that the standard deviation of the measurement is larger than the differences between the eigenvalues of the system. The capacity to carry out the experiments above depends on the ratio of decoherence to the Quantum Zeno effect. By using quantum ordered, critical, or chaotic peptides, we can tune the decoherence time from a very slow process extending over a nanosecond for ordered and critical peptides to a short femtosecond decoherence time for chaotic peptides.

Experiments with Mixed Basis Entanglements

As mentioned earlier, quantum variables can be prepared in mixed bases, for example as entanglement between variables of two spatially separated quantum systems (the polarization of one photon with the spin of another, say), or as entanglement between two degrees of freedom of the same system (intra-system entanglement). Such systems cannot be considered to have a definite 'identity'; however, measurements can be performed on different pairs of entangled variables in two mixed bases. Protocols can also be built that swap between intra- and inter-system entanglements. In our experimental set-ups, we can prepare peptides/amino acid molecules in such entangled states. In particular, we assume N maximally entangled systems such that each of the N variables is further represented in a mixed basis state, i.e., for each individual variable there is no definite basis, and for the entangled set of N such individual variables there is no definite basis. Overall, we have both intra-system entanglement and inter-system entanglement. We can set up an experiment such that each of the N variables is prepared entangled in the same two mixed bases, say position and polarization; hence, if a weak measurement is performed in one of the bases on one of the N variables, then, due to inter-system entanglement, the same basis would emerge for the other N − 1 elements. We may again deploy the ratio measures mentioned earlier.
However, we can also prepare a system in which some pairs of variables share two mixed position-spin bases, other pairs share two mixed position-polarization bases, and yet other pairs share two mixed spin-polarization bases. Using at least 10 stable isotopes, see next, we can entangle a set of atoms in arbitrary combinations of the three different mixed bases and study their detectable behaviors. The number of combinatorial possibilities for different patterns of entanglement of our isotope-labeled atoms is very large. As noted, there are three stable isotopes each of carbon, nitrogen, and oxygen, and five of sulfur. One isotope of each element is the normal isotope and so occurs in all the amino acids of our test protein; the other isotopes do not. Thus, we have two detectable isotopes each of C, N, and O, and four detectable isotopes of sulfur. Again, as noted, that gives 10 detectable signals that we can place at any atoms in a peptide of length N. In sum, any of the 45 pairs of labeled atoms can be in 11 different entanglement relations. Therefore, using only paired atoms that are both labeled, there are 11^45 different patterns of entanglement among 10 labeled atoms at different specific locations in the test peptide. This should allow detailed assessment of basis emergence as a function of temperature, N, and the ratio of QZE to decoherence. One of the central strengths of the experiments we are suggesting is the plethora of different classes of entanglement that can result from the mutual actualization processes; it is well known in the multi-particle entanglement literature that, in moving from two-qubit to three-qubit entangled states, different classes of entanglement are generated, for example W and GHZ states. Along with these different classes, we have both intra- and inter-system entanglements.

Conclusions

It is widely supposed that, as the number of atoms in a system increases, it should become more 'classical'. Well-established work has studied aspects of classicality as the number of atoms increases. C_60 Buckyballs still show interference. We propose here these seven new ideas: (i) Collectively Actualizing Sets, whose variables interact with one another and thereby actualize one another. (ii) Such collective interaction induces a symmetry breaking among two or more mixed bases to a single preferred basis. (iii) An initial consideration of the joint effects of decoherence and internal Quantum Zeno Effects within such a CAS upon the emergence of the classical world. (iv) The use of quantum ordered, critical, and chaotic peptides of lengths from one up to 100 amino acids or more as the test objects. (v) The use of the three stable isotopes each of oxygen, carbon, and nitrogen and the five of sulfur to uniquely and identifiably isotope label any ten atoms placed at arbitrary positions within the test peptide. (vi) The use of mixed basis entanglement among the pairs position-spin, position-polarization, spin-polarization, momentum-spin, and momentum-polarization to study symmetry breaking between the two bases of each pair within a collectively actualizing set. (vii) The use of weak measurements of isotope-labeled atoms in such peptides to assess the emergence and stability of one or the other among each of these five pairs of entangled bases. We hope our proposal is seen as a continuation of a long tradition. The basic concepts seem reasonable. Creating a real mathematical framework is a large further task, and so is assessing the feasibility of the proposed experiments.
A possible alternative mathematical framework: our framework of collectively/mutually actualizing sets is close to a very recently proposed framework for relational QM, fact-nets. The main proposal of the fact-net framework is to recover the standard conditional probability measures of QM without treating the quantum state as a physical entity, which is also the central proposition of relational QM, since its founders held that the major confusion in interpretations of QM has been due to placing ontological weight on the wave function. Fact-nets also do not need the Hilbert space formalism; however, the central features of that formalism can be recovered. Here, we are pragmatic about the interpretation problem, and note that, in subsequent progress with our project, we might use fact-nets as a possible coherent mathematical framework.

Mathematical Appendices

I. a. Hilbert space formulation in QM. Hilbert spaces provide one of the suitable mathematical foundations of QM. A Hilbert space is a complex linear vector space with a norm and a scalar product defined on it. The Hilbert spaces used in QM are complete and separable. Separable means having a countable basis, and complete means that every Cauchy sequence has a limit within the space. An example of a Hilbert space used in QM is L^2, the space of square-integrable functions. This is required for defining probability densities and ensuring probability conservation. For example, integrals of the type ∫ ψ*ψ dx are finite and can be normalized, Ψ(x,t) being the wave function defining the state of the quantum system and satisfying the Schrödinger equation. In properly normalized form, it is represented as a ray in a Hilbert space H. ψ can be expanded linearly as ψ = ∑ c_i ϕ_i, where the ϕ_i form a complete set of orthonormal basis states and the moduli squared of the coefficients are the probabilities of observing specific eigenstates. This is the famous Born rule. One of the simplest quantum mechanical systems is a particle in a box, such as a single electron constrained to move in a straight line between parallel walls a distance L apart. Here, the state of the particle is described by a wave function ψ, which is an element of L^2([0, L]). This description is in sharp contrast to Newtonian mechanics, where only two numbers are required to specify the particle's state, namely the position x in [0, L] and the velocity v in R. If we want to find the probability that the particle is within a range [a, b], a subinterval of [0, L], the famous Born rule is invoked: the probability that the particle is in [a, b] at time t is ∫_a^b |ψ|^2 dx / ∫_0^L |ψ|^2 dx (a short numerical check of this formula is given at the end of this subsection). Many thorny questions already arise from this brief discussion. For example, what is the status of ψ? Is it a physical reality? Or is it simply a mathematical device for computing probabilities? In other words, is ψ ontic or epistemic? This question has dominated the history of the foundations of quantum mechanics. Composite systems in QM are generally described by tensor products of the Hilbert spaces representing the individual systems. The tensor product is strictly larger than the Cartesian product of the spaces. Hence, the tensor product feature is at times termed quantum holism: the whole being strictly greater than the sum of the parts.

Operations on Hilbert Spaces

As in the case of complex Euclidean spaces, given two or more Hilbert spaces, one can generate larger spaces by taking direct sums or tensor products.
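The following is the short numerical check of the particle-in-a-box Born-rule formula promised above; the ground-state wave function and the interval [a, b] = [L/4, 3L/4] are illustrative choices.

```python
import numpy as np

# Born rule for a particle in a box of length L: with the ground state
# psi_1(x) = sqrt(2/L) sin(pi x / L), the probability of finding the particle
# in [a, b] is  int_a^b |psi|^2 dx / int_0^L |psi|^2 dx.
L, a, b = 1.0, 0.25, 0.75
x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]
psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)       # already normalised

mask = (x >= a) & (x <= b)
prob = np.sum(np.abs(psi[mask]) ** 2) * dx           # numerator
norm = np.sum(np.abs(psi) ** 2) * dx                 # denominator (≈ 1)
print("P(a <= x <= b) =", prob / norm)               # analytic value: 1/2 + 1/pi ≈ 0.818
```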
Hilbert spaces are called separable if and only if they have countable orthonormal bases. In physics we mostly use separable Hilbert spaces, and all infinite-dimensional separable Hilbert spaces are isomorphic to each other. Reflexivity is a very important property of a Hilbert space H: if ϕ is an element of H* (the dual space), then there exists a unique u in H for which ϕ(x) = <u, x> for all x in H.

Bounded and unbounded operators on Hilbert space

Continuous linear operators on Hilbert spaces, which map bounded sets to bounded sets, are called bounded operators. The norm of such an operator A is defined as ||A|| = sup {||Ax|| : ||x|| ≤ 1}. The set of all continuous linear operators on a Hilbert space, with the addition and composition operations, the norm, and the adjoint operation, forms a C*-algebra. An element U of this set is called unitary if its inverse exists and is given by U*, so that <Ux, Uy> = <x, y> for all x, y in H. A symmetric linear operator that is defined on all of a Hilbert space is necessarily bounded (the Hellinger-Toeplitz theorem). However, linear operators that are defined only on a proper subspace of H may be unbounded. Unbounded self-adjoint operators are of great importance in quantum mechanics as 'observables'. Examples are differential operators, such as −i d/dx, and multiplication by x. b. Pure states are rays in Hilbert spaces, which can be described as linear superpositions of basis elements, provided that a complete orthonormal basis exists. Generally, in quantum information theory, a computational basis is used (|0> and |1> with their appropriate matrix representations); however, linear combinations of these basis elements also form a legitimate basis. The orthodox formulation of QM was based on pure states, and various features, such as entanglement, were initially defined in terms of pure states. However, mixed states have emerged as the more general and practical conception. Mixed states can be considered statistical probability distributions over pure states, and are represented by density operators. Mixed-state entanglement is widely used currently, and the von Neumann master equation is often used to describe the evolution of density-matrix states. c. POVM measures: A positive operator valued measure (POVM) is a family of positive operators {M_j} such that ∑_j M_j = I, where I is the identity operator. It is convenient to use the following representation of POVMs: M_j = V_j* V_j, where the V_j : H → H are linear operators. A POVM can be considered a random observable. Take any set of labels α_1, ..., α_m; e.g., for m = 2, α_1 = yes, α_2 = no. Then the corresponding observable takes these values (for systems in the state ρ) with the probabilities p(α_j) ≡ p_ρ(α_j) = Tr(ρ M_j) = Tr(V_j ρ V_j*). We are also interested in the post-measurement states. Suppose the state ρ is given, a generalized observable is measured, and the value α_j is obtained. Then the output state after this measurement has the form ρ_j = V_j ρ V_j* / Tr(V_j ρ V_j*). II. Here, we discuss some further details of weak measurements. Describing the 'measuring device': we describe the wave function of the measuring device in a specific basis, say the position basis, before the strong measurement on the device, |φ> = ∫ φ(x) |x> dx, where we also define a position operator X|x> = x|x>, and φ(x) is normally distributed around 0 with width σ. We later strongly measure |φ> to obtain a reading on the device, which is the outcome of the WM. We need an operator conjugate to X, say P, such that [X, P] = ih/2π.
System/body on which the measurement happens: we can decompose the state of the system in a given eigenbasis corresponding to a Hermitian operator, say A, acting on the system, such that A|a_k> = a_k|a_k>; hence, for the system's state, |Ψ> = ∑_k c_k |a_k>. We then consider the interaction Hamiltonian for the dynamics between the system and the measuring device: H(int) = g(t) A ⊗ P, where g(t) is a coupling impulse function with ∫_0^T g(t) dt = 1. For the 'measurement' process, the vector of relevance is |Ψ> ⊗ |φ>. The dynamics of this weakly coupled state is then e^(−iHt/h) |Ψ> ⊗ |φ>. Now, we need to compare the spreads of the wave functions: if φ has a large variance compared with the separation of the eigenvalues of A, the shifted pointer waves overlap and we have a scenario ready for weak measurement; otherwise we have a strong measurement. The next step is a strong projective measurement on the 'measuring' device, which reveals information about the initial system state with a slight bias. Two-state vector formalism ([28,29]): this is a formalism for describing WM and post-selection. Introducing 'post-selection' into the framework of WM results in various strange results, even negative probabilities. Say we prepare an ensemble of systems in the state |ϕ_in>; we then weakly measure this ensemble with a device whose initial state is |φ(x)>, with interaction Hamiltonian H_int = g(t) A ⊗ P, where A is a Hermitian operator on the system and P is the conjugate operator on the device. If we want the amplitude for the final state vector |ϕ_fin>, we can compute it by the transition probability rule. This means performing the measurement, via H, on a number of copies of the system and keeping only those results whose state lies in the direction of |ϕ_fin>. Create the operator P_1 = |ϕ_fin><ϕ_fin| ⊗ I_d and the corresponding sum-form operator; then, with the help of such operators, perform a strong measurement on the composite state. We also assume that P_d has a lower variance than X_d, so that weak coupling between system and device is possible. Expanding the evolved vector to first order in the coupling and projecting with P_1, one finds that the device wave function is shifted by the weak value <ϕ_fin|A|ϕ_in>/<ϕ_fin|ϕ_in>. We can then compute the probability of post-selection as <ϕ_w| P^D P |ϕ_w>, where P here is P_1 and ϕ_w was defined earlier as the state e^(−iHT/h) |ϕ_in> ⊗ |φ(x)>; plugging these in, we obtain, to lowest order, P ≈ |<ϕ_fin|ϕ_in>|^2. Weak measurements as universal POVMs: weak measurements are general measurements, and they may capture phenomena not revealed by projective measurements, such as extra randomness in the measurement or incomplete information in the measurement. We start with a density matrix description of the initial state, which undergoes a random update ρ → ρ_j such that ρ_j = P_j ρ P_j^D / Tr(P_j ρ P_j^D), where the denominator is the probability of the j-th outcome. Now, a unitary transformation can be decomposed into a sequence of weak unitary transformations, and any generalised measurement can be realised as a sequence of weak measurements. Weak measurements can be characterised as those whose action does not significantly affect the initial state; there are other definitions, for example a measurement that generates a large change in the state only with a small probability. Here, we take P_j = q_j I + ε_j, where q_j is in (0,1) and ε_j is an operator with a very small norm. Weak measurements arise naturally in systems under continuous monitoring.
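The following sketch evaluates the two-state-vector quantities just discussed; the pre- and post-selected states and the choice of A = σ_z are arbitrary illustrations, chosen so that the weak value lies outside the eigenvalue range, one of the "strange results" mentioned above.

```python
import numpy as np

# Two-state-vector bookkeeping: post-selection probability ~ |<phi_fin|phi_in>|^2
# and weak value A_w = <phi_fin| A |phi_in> / <phi_fin| phi_in>, which can be
# anomalous (outside the spectrum of A) for nearly orthogonal pre/post selection.
A = np.array([[1, 0], [0, -1]], dtype=complex)           # Pauli sigma_z

theta = 0.22 * np.pi
phi_in  = np.array([np.cos(theta),  np.sin(theta)], dtype=complex)
phi_fin = np.array([np.cos(theta), -np.sin(theta)], dtype=complex)

overlap = np.vdot(phi_fin, phi_in)                       # <phi_fin|phi_in>
A_w = np.vdot(phi_fin, A @ phi_in) / overlap

print("post-selection probability |<fin|in>|^2 =", abs(overlap) ** 2)
print("weak value A_w =", A_w.real, " (eigenvalues of A are only ±1)")
```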
III. Here, we describe 'mixed basis entanglement'. There is a substantial literature in quantum as well as classical optics (where one still considers Maxwell's field equations rather than quantum degrees of freedom such as light quanta or photons) in which states can be produced that express entanglement between multiple degrees of freedom. One such case is path-polarization entanglement, and such states have been found to violate Bell or CHSH inequalities [30]. Hence, such states cannot be considered to have any determinate basis. Some authors [31,32] hold that classical entanglement is based on intra-system or such path-polarization type entanglements, whereas genuine quantum entanglements are inter-system, for example EPR pairs. IV. Non-Hermitian Hamiltonians: earlier, we mentioned an Ising or possibly a spin glass Hamiltonian for approximating the dynamics of our experimental peptide framework. Evidence supports the use of tunably rugged spin glass models, called NK models, for proteins. Given a choice of Hamiltonian, we observe that overall there are two broad choices for describing open-system Markovian dynamics: one in which we use GKSL master equations, under several assumptions, and one in which we use an effective non-Hermitian Hamiltonian. In the literature on many-body QZE, non-Hermitian Hamiltonians have been used. A substantial body of literature [33,34] has observed that non-Hermitian Hamiltonians can also exhibit real eigenvalues, provided PT (parity and time reversal) symmetry is embedded in the formulation. Exceptional points emerge in such non-Hermitian dynamics where, given PT symmetry, eigenvalues change from real to complex, in general a phase transition from so-called unbroken PT symmetry to broken PT symmetry. The overall assumption of such dynamics is that, though the underlying full Hamiltonian is Hermitian, the effective system Hamiltonian can be treated as non-Hermitian. V. Bohmian mechanics: basic quantum hydrodynamics approach. Modern renditions of Bohmian mechanics (for example [35]) present it as a first-order theory, in which velocity, the rate of change of position, is fundamental. Velocity is provided by the so-called guiding equation. Second-order concepts, such as force or acceleration, do not contribute in this version. However, in the original version, Bohm conceived of a second-order theory, in which forces derive from a non-local quantum potential. The main technique is to rewrite the wave function in polar form, ϕ = R exp(2πiS/h), where R and S are real-valued functions. Rewriting Schrödinger's equation in terms of these new variables, one obtains two coupled equations: a continuity equation for ρ = R^2, and a modified Hamilton-Jacobi equation for S. The modified equation has an extra potential term U = −∑_k (ħ^2 / 2m_k) ∂_k^2 R / R, with ħ = h/2π, which is termed the quantum potential. Particle trajectories are then shown to result from the quantum potential in addition to the usual forces.
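As a small numerical sketch of the quantum potential just defined (not part of the proposed experiments), the code below evaluates U = −(ħ^2/2m) R''/R for a Gaussian amplitude R and compares it with the analytic result; units with ħ = m = 1 and the width σ are illustrative assumptions.

```python
import numpy as np

# Quantum potential in the Madelung/Bohm decomposition: for psi = R exp(iS/hbar),
# Q(x) = -(hbar^2 / 2m) R''(x) / R(x).  Gaussian amplitude, hbar = m = 1.
hbar, m, sigma = 1.0, 1.0, 1.0
x = np.linspace(-4.0, 4.0, 801)
dx = x[1] - x[0]

R = np.exp(-x**2 / (4 * sigma**2))                  # amplitude; |psi|^2 is a Gaussian of width sigma

R_xx = np.gradient(np.gradient(R, dx), dx)          # central-difference second derivative
Q_num = -(hbar**2 / (2 * m)) * R_xx / R
Q_exact = hbar**2 / (4 * m * sigma**2) - hbar**2 * x**2 / (8 * m * sigma**4)

i0 = len(x) // 2                                    # compare at x = 0
print("Q(0) numerical:", Q_num[i0], "  analytic:", Q_exact[i0])
```

The non-locality discussed above enters because Q depends on the shape of R, not on its local value, so distant changes in the wave function's amplitude feed back into the local trajectory dynamics.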
Instantaneous Self-Powered Sensing System Based on Planar-Structured Rotary Triboelectric Nanogenerator

Self-powering electronics by harvesting mechanical energy has been widely studied, but most self-powering processes require a long time for the energy harvesting procedure, resulting in low efficiency or even system failure in some specific applications such as instantaneous sensor signal acquisition and transmission. In order to achieve efficient self-powered sensing, we design and construct an instantaneous self-powered sensing system, which puts heavy requirements on the generator's power and the power management circuit. Theoretical analysis and experimental results for two types of generators prove that the planar-structured rotary triboelectric nanogenerator possesses many advantages over the electromagnetic generator under the circumstances of instantaneous self-powering. In addition, an instantaneous driving mode power management circuit is also introduced, showing advanced performance for the instantaneous self-powered sensing system. As a proof of concept, an integrated instantaneous self-powered sensing system is demonstrated based on Radio-Frequency transmission. This work demonstrates the potential of instantaneous self-powered sensing systems for a wide range of applications such as smart home, environment monitoring, and security surveillance.

Introduction

With the development of economics and information technologies, people's demands on personal consumer electronics are becoming larger and broader. The major factor that limits the application and functional evolution of electronics is their dependence on batteries. In most cases, batteries guarantee the normal functioning of electronics, but they can also become a problem under specific scenarios, such as electricity exhaustion in resource-limited areas [1] and irreversible battery aging [2,3]. What is more, to ensure enough electricity, a bulky battery must be carried, which severely limits further reduction of the electronics' size and increases cost, both of which run counter to the trend toward more convenient electronics. A promising route to battery-free electronics is self-powering by converting environmental energy into electricity, for example with the nanogenerator [4], electromagnetic generator [5], piezoelectric nanogenerator [6], and thermoelectric generator [7]. Among them, the triboelectric nanogenerator (TENG) is expected to be an excellent solution due to its high power density per unit volume and mass [8,9]. In recent years, self-powered sensing systems based on the TENG have been comprehensively studied [10,11], but in most previous studies a relatively long time is wasted on harvesting mechanical energy before a self-powered sensing process can be successfully deployed [12]. On the one hand, this increases the sensor's working period and thus causes low working efficiency. On the other hand, it limits applications in some specific scenarios, such as SOS signal acquisition and transmission in emergency disasters [13,14]. In this work, we put forward the concept of the instantaneous self-powered sensing system. With this self-powered sensing system, the sensor's operation does not rely on a battery, and the sensor's signal can be acquired in real time. This is realized owing to two main points. The first point is that the planar-structured rotary TENG (pr-TENG) is selected as the mechanical energy harvester due to its high power density.
The second point is that an instantaneous driving mode power management circuit (PMC) is constructed with a longer duration of discharge current and higher electricity transfer efficiency. In order to put the concept of the instantaneous self-powered sensing system into practice, we further design and build an instantaneous self-powered Radio-Frequency (RF) transmission system that is driven by the pr-TENG. The pr-TENG, the instantaneous driving mode PMC, and the RF transmission circuit are integrated into a door shell. Over an instantaneous period of rotating the doorknob by hand, the pr-TENG can harvest the rotational mechanical energy to emit a self-powered remote-control command. The self-powered sensing system reported in this work shows a wide range of potential applications in smart home, environment monitoring, and security surveillance.

The Design Considerations for the Instantaneous Self-Powered Sensing System

A self-powered sensing system usually consists of three major sub-systems: an energy conversion unit, a power management unit, and a sensing unit, as diagrammed in Figure 1. In the energy conversion unit, environmental mechanical energy or thermal energy is harvested and converted into electricity by the generator. In the power management unit, the unstable output of the generator is regulated to supply stable voltage and current for the sensors. The instantaneous self-powered process means that the energy harvesting process and the sensor-powering process take place simultaneously, and that the whole process is completed in a short period. In order to achieve this goal, two requirements for the system should be met. The first requirement is that the generator's output power should be high enough. This is because the power of most electronics is in the range of milliwatt (mW) to watt level. For a generator with a power of 0.1 mW, it takes at least 10 s to harvest enough energy to complete a successful driving process for electronics with a power of 1 mW. For these low-power generators, the most common method to realize self-powered mW electronics is to harvest energy over a long period, mostly lasting seconds or even minutes [12,15]. The second requirement is that the power management circuit (PMC) should possess high electricity management efficiency, which means less energy will be wasted. Most previous studies on self-powered electronics divide the self-powered process into two steps, the energy-harvesting process and the energy-releasing process. The harvested energy is first stored in a battery or a capacitor. Then, the stored energy is released to drive the electronics once enough energy has been harvested. The divided processes are usually controlled by a switch. We call this kind of working process the non-instantaneous driving process, because it extends the sensor's working period to seconds or even minutes, which cannot be considered an instantaneous action.
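The harvest-time figure quoted above follows from simple energy bookkeeping; the sketch below, with an assumed 1 s load-on duration and ideal (loss-free) conversion, reproduces it and shows how a higher-power generator shortens the harvesting step toward an instantaneous process.

```python
# Back-of-the-envelope harvest time: a load needing P_load for a duration t_on
# requires E = P_load * t_on of stored energy, so a generator of average power
# P_gen must harvest for roughly E / P_gen (conversion losses ignored).
def harvest_time(p_gen_w, p_load_w, t_on_s=1.0):
    return p_load_w * t_on_s / p_gen_w

print("0.1 mW generator, 1 mW load:", harvest_time(0.1e-3, 1e-3), "s")   # ~10 s
print("10  mW generator, 1 mW load:", harvest_time(10e-3, 1e-3), "s")    # ~0.1 s
```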
Conversely, in the instantaneous driving process, the energy-harvesting process and the energy-releasing process occur simultaneously (within one second), which dramatically decreases the driving period and improves the working efficiency. In the following text, we discuss the design of the high-power generator and of the instantaneous driving mode PMC to meet the requirements of the instantaneous self-powered sensing system.

Design of the High-Power Generator for Instantaneous Driving

Various energy harvesting technologies have been used for building self-powered sensing systems, as shown in Appendix A, Table A1. Among them, the most common generators that possess high power at the mW level are the triboelectric nanogenerator [9,16,17] and the electromagnetic generator (EMG) [18]. The most promising TENG is the planar-structured rotary TENG (pr-TENG) [9]. The high power of the pr-TENG benefits from its high-density periodic grid design. The EMG has the same periodic grid design and high power as the pr-TENG. However, with regard to realizing instantaneous driving, the pr-TENG and the EMG differ substantially. As shown in Figure 2a, the pr-TENG essentially consists of a rotor and a stator. The stator has three components: an electrification layer, an electrode layer, and a substrate. The electrification layer is made of polytetrafluoroethylene (PTFE), which has the opposite triboelectric polarity to the rotor. The electrode layer is composed of electrode A and electrode B, which have complementary planar grids separated by fine gaps. The rotor has the same pattern as one of the electrodes. The rotor and electrode layer are fabricated by printed circuit board (PCB) manufacturing technology. The key design feature that gives the pr-TENG its high power density is the radial-arrayed periodic grid design of the rotor and the electrodes. By using PCB manufacturing technology, the grid number can be increased to hundreds. When driven by a mechanical force, the rotor spins about the central axis. The conversion of mechanical energy into electricity is illustrated through a structural unit as the rotor spins from position state (i) to state (iii), as shown in the diagram at the bottom of Figure 2a. Under the effect of triboelectrification, negative charges transfer from the rotor to the electrification layer when the two materials with different triboelectric polarity contact each other, leaving positive charges on the rotor. Due to the law of charge conservation, the surface charge density on the rotor is twice that on the stator, because the surface area of the rotor is half that of the electrification layer. At state (i), the rotor is aligned with electrode A. If the two electrodes are electrically connected, namely under the short-circuit condition, free charges redistribute on the electrodes due to electrostatic induction: part of the negative charges accumulate on electrode A to neutralize the positive charges on the rotor, and the same quantity of positive charges accumulates on electrode B to neutralize the negative charges on the adjacent section of the electrification layer. At state (ii), the rotor is situated at the intermediate position between the electrodes, and electrons flow from electrode A to electrode B to form a new electrostatic equilibrium. At state (iii), the rotor is aligned with electrode B, exhibiting position symmetry and charge distribution symmetry with respect to state (i).
From state (i) to state (iii), electrons flow from electrode A to electrode B to generate the output current. As another kind of high-power generator, one that has been widely used in daily life, the EMG has many similarities to the pr-TENG in device structure, as shown in Figure 2b. The pole N and pole S of the magnet correspond to the rotor with positive triboelectric polarity and the electrification layer with negative triboelectric polarity. The radial-arrayed periodic magnet units and coil units correspond to the radial-arrayed periodic rotor grids and electrode grids. The electron flow in the coils corresponds to the electron flow between electrode A and electrode B. What is more, the origins of the output current in the pr-TENG and the EMG also have similarities. The open-circuit voltage of the EMG, V_oc, is proportional to the changing rate of the magnetic flux in each coil loop (dΦ/dt) and to the total number of loops (N), which can be expressed as V_oc = −N dΦ/dt. Considering the total resistance R of the EMG, the current through R can be expressed as V_oc/R, that is, I = −(N/R) dΦ/dt. Apparently, the current in R comes from the total changing rate of the magnetic flux in the coils. For the TENG, Zhong Lin Wang initially pointed out that the current in a TENG is the Maxwell displacement current [19], which can be expressed as J_D = ∂D_total/∂t, where D_total is the total electric displacement vector in the space between the electrification layers. Whether in the EMG or in the TENG, the changing rate of the magnetic flux or of the electric displacement vector comes from the relative position change, induced by the external mechanical force, between the two sectors with different polarity: one from the magnet and the other from the electret. Simulation by COMSOL can visualize the similarities above, as shown in Figure 2c,d. The geometric models used in the simulations are the same as the models in Figure 2a,b, and the parameters of these models can be seen in Appendix B. For the pr-TENG, the rotor is set to rotate at a uniform angular speed of 2π/s. The simulated potential distribution and the potential difference between electrode A and electrode B (the open-circuit voltage V_oc) are presented in Figure 2c.
The peak-to-peak value of the V_oc curve is about 500 V, which depends heavily on the triboelectric charge density on the surface of the electrification layer, and the frequency of the curve is 4 Hz, which depends heavily on the rotation speed and the number (n) of grids on the rotor and the electrodes (here n = 4). The simulation results of the EMG have similarities but also differences compared with the pr-TENG, as presented in Figure 2d. The frequency of the open-circuit voltage is the same as that of the pr-TENG, because the number of magnet and coil units (n = 8) is twice the grid number of the pr-TENG while the rotation angular speed is half (here the rotation angular speed is π/s). The peak-to-peak value of the curve is only about 0.1 V, much lower than that of the pr-TENG. This difference mainly comes from the working mechanisms of the two generators: one's open-circuit voltage originates from the magnetic flux, and the other's from the electric displacement vector. This feature reveals the excellent advantages of the pr-TENG in harvesting mechanical energy.

Electrical Outputs of Generators

To further validate the simulation results, an integrated pr-TENG was fabricated, as presented in Figure 3a. Photographs (i) and (ii) present the electrode layer and the rotor of the pr-TENG. The rotor, electrode A, and electrode B have the same grid number, diameter, and thickness: the grid number is 120, the diameter is 120 mm, and the thickness is 0.4 mm. Schematic diagram (iii) presents the assembled pr-TENG. A hand knob is set coaxial with the pr-TENG to show the potential application of the pr-TENG in harvesting mechanical energy induced by hand motion, such as opening a door. The electrical output of the pr-TENG is measured as a slight rotation through an angle of π/2 occurs at an average rotating speed of approximately 50 rpm. The V_oc of the pr-TENG is measured by an electrometer (Keithley 6514), as presented in Figure 3b. The V_oc exhibits a triangular wave with a peak-to-peak value of up to 450 V, much like the simulation result in Figure 2c. The frequency of the V_oc curve is about 104 Hz, much higher than the simulation result of 4 Hz. This is because the grid number of the real pr-TENG is dramatically increased to 120. It clearly points to an easy way to improve the output power of the pr-TENG, which is to improve the integration level of the grids. The situation is different for the EMG. A commercial three-phase alternator is used for comparison, as presented in Figure 3c, and its detailed dimensions can be seen in Appendix C. The EMG has 4 gears on the rotation axis. The first gear, named Gear0, is coaxial with the rotor of the EMG. The gears named Gear1 to Gear3 are set to increase the rotation speed of the rotor through the coupling of adjacent gears. The same rotation speed of 50 rpm is applied to Gear0 and Gear3 by a motor. Figure 3d presents the measured V_oc of the EMG. We can see that when Gear0 is set as the rotation axis for the EMG, the frequency of the V_oc curve is 12 Hz and its maximum peak-to-peak value is 0.7 V. When Gear3 is set as the rotation axis, the frequency of the V_oc curve is 132 Hz. The frequency enhancement is due to the multistep speed change through the coupling of the 4 gears, which increases the rotation speed of the rotor 11 times; the use of gears to increase the rotor's rotation speed has been previously reported for the TENG [20]. Meanwhile, the maximum peak-to-peak value increases to 9.2 V. In fact, the power of the EMG is usually improved by the coupling of multiple gears (the frequency bookkeeping is checked in the short sketch below).
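The following quick check is not from the paper; it simply verifies that the measured signal frequencies quoted above are consistent with the rule of thumb that, for a radially segmented generator, the electrical frequency is roughly the number of grid (or magnet) units times the revolutions per second, and that the EMG's Gear3 frequency follows from the quoted 11x gear speed-up.

```python
# Consistency check of the frequency figures quoted above.
grids, rpm = 120, 50
print("pr-TENG expected frequency:", grids * rpm / 60.0, "Hz   (measured ~104 Hz)")

f_gear0, gear_ratio = 12.0, 11.0      # EMG signal on Gear0, and the quoted 11x speed-up
print("EMG on Gear3 expected     :", f_gear0 * gear_ratio, "Hz   (measured 132 Hz)")
```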
In practice, the power of the EMG is usually improved by the coupling of multiple gears. This is because, for an EMG of a given size, the magnet unit number and the coil loops usually reach their manufacturing limits, so the only simple and effective method to improve the power is to increase the number or the transmission ratio of the gears. However, this method brings two adverse effects. First, it increases the total volume of the generator, which is harmful to the integration of the sensing system. Second, it increases the damping between adjacent gears, leading to a decrease in energy conversion efficiency. The electrical output measurements of the generators validate the simulation results in Figure 2c,d. They show the high output voltage of the pr-TENG compared with the EMG, as reported in previous works [9,21]. At the same time, they show an effective and simple way to improve the output power of the pr-TENG by increasing the grid number on the rotor and electrodes. However, this method is not suitable for the EMG; the effective method for improving the EMG's output power is to increase the number of gears, but this brings new problems.

Comparison of the Two Generators in Increasing Their Power

The similar structure and working mechanism of the pr-TENG and the EMG give them similar routes to improve power. The comparison is summarized in Table 1. Firstly, the measurements of the electrical output in Figure 3 have shown an effective way to improve the power of the pr-TENG by increasing the grid number on the rotor and electrodes, which has been reported by Guang Zhu [9]. Changbao Han increased the grid number to 180 by PCB manufacturing technology [17]. The power of the EMG can also be improved by increasing the number of magnet units. However, due to the size limits of the magnet and coil, their numbers hardly reach tens or hundreds. Secondly, increasing the diameter of the pr-TENG will dramatically improve its power, because the open-circuit voltage of the pr-TENG is proportional to the surface area of the electrification layer [22] and therefore to the square of the diameter. In contrast, increasing the radial dimension will decrease the effective magnetic field strength per unit volume for the EMG, because the effective magnetic field only exists in the gap between the magnet and the iron core, and the useless volume inside the rotor increases quadratically with increasing diameter.
Thirdly, increasing the surface charge density on the electrification layer by fluorinated surface modification can improve the power of the pr-TENG [23], which is analogous to increasing the magnetic field strength of the magnet for the EMG. However, the magnetic field strength of materials at normal pressure and temperature is hard to improve at present. Lastly, improving the rotation speed is also an efficient way to improve the generator's power. It can be achieved by multistep gear coupling and is equally applicable to the pr-TENG and the EMG. In summary, from the aspect of improving power, the pr-TENG shows excellent advantages in increasing size, improving the integration level, and optimizing material properties compared with the EMG. In addition, the pr-TENG has the advantages of low cost, light weight, and flexibility [8,24]. These make it highly suitable for applications in an instantaneous self-powered sensing system.

Performance of the Instantaneous Driving Mode Power Management Circuit

Because the operating voltage and current of electronics should be regulated at a stable and safe level, a power management circuit (PMC) is necessary for regulating the electrical output of the generator. As mentioned in Figure 1, one crucial requirement for the instantaneous self-powered sensing system is a PMC with high performance. Here, by comparing two types of PMCs, we point out that a high-performance PMC should possess two main characteristics. A transformer is first used to reduce the impedance of the pr-TENG to match the impedance of the PMC. With the use of a transformer, the peak-to-peak value of the open-circuit voltage is decreased from 450 to 30 V, as shown in Figure 4a, and the maximum peak-to-peak value of the short-circuit current is enhanced from 0.5 to 10 mA, as shown in Figure 4b. The voltage and current are both measured by an electrometer (Keithley 6514). The first type of PMC is sketched in Figure 4c. The electrical output after transformation is rectified through a full-wave diode bridge, and a 10 µF capacitor is then used as an energy storage unit to store the electrical energy. Once the rotor of the pr-TENG finishes a rotation cycle under a mechanical driving force, the switch is subsequently turned on. The capacitor then provides sufficient electricity as a power source for a load (510 Ω). The discharge current through the load is first measured to characterize the performance of the PMC. Figure 4d presents the measured discharge current at a fixed rotation angle of π/2 with different rotation speeds (120, 150, and 180 rpm). The measured current is a typical capacitor discharge curve, and the discharge current amplitude increases with increasing rotation speed. This is because a higher rotation speed brings a higher electrical energy density and a larger quantity of charge accumulated in the capacitor. It is noted that, in our case, the driving process is divided into two steps by a manual mechanical switch: the energy-harvesting process before switch-on and the charge-releasing process after switch-on. As mentioned in Figure 1, this type of divided-working-process PMC is called the non-instantaneous driving mode PMC.
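For the non-instantaneous PMC just described, two elementary quantities set the scale of the discharge: the energy stored in the 10 µF capacitor and the RC time constant with the 510 Ω load. The sketch below uses a placeholder capacitor voltage, since the actual pre-discharge voltage is not quoted here.

```python
# Rough figures for the non-instantaneous driving mode PMC:
# energy stored in the capacitor and the RC discharge time constant.
C = 10e-6          # storage capacitor, 10 uF (from the circuit description)
R_load = 510.0     # load resistance, 510 ohm (from the circuit description)
V_cap = 5.0        # assumed capacitor voltage before discharge (placeholder)

energy_J = 0.5 * C * V_cap ** 2
tau_s = R_load * C

print(f"stored energy ~ {energy_J * 1e3:.3f} mJ")
print(f"RC time constant ~ {tau_s * 1e3:.1f} ms")
```

A few time constants (roughly 20 ms) is of the same order as the ~0.02 s discharge duration reported below for this mode.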
As an improvement, the second type of PMC is achieved, as sketched in Figure 4e. A logic chip (LTC3330, Linear Technology) is adopted to substitute the manual mechanical switch in the non-instantaneous driving mode PMC. The LTC3330 integrates a full-wave bridge, a buck-boost converter control chip, a hysteresis comparator, and a buck-boost power switch. It functions as an intelligent switch, which can automatically release the stored charges in a capacitor once the voltage of the electricity-storage capacitor reaches a pre-set threshold voltage of 5 V; similar works have been previously reported [25][26][27]. The detailed working mechanism can be seen in Appendix D. Under the same measurement conditions as the non-instantaneous driving mode PMC, the amplitude of the discharge current through a load increases from 5.5 to 7 mA as the rotation speed varies from 120 to 180 rpm, as shown in Figure 4f. The discharge current is composed of many small peaks, as shown in the magnified inset diagram. This is the result of two processes that occur alternately. As the stored energy in capacitor C2 is being released, electrical energy is simultaneously replenished from the pr-TENG and stored in capacitor C2. Once the voltage of capacitor C2 reaches the pre-set threshold voltage of 5 V, a new energy-releasing cycle begins. The energy-harvesting process and the energy-releasing process occur simultaneously. Once the rotation speed increases, the energy-replenishing rate becomes higher; thus, more charge is accumulated, and the discharge current increases. Once the rotation stops, the energy-replenishing process stops, and the discharge current drops like a capacitor discharging process. The energy harvesting and releasing alternate continuously, which clearly differentiates this mode from the divided harvesting-releasing process of the non-instantaneous driving mode PMC. This type of PMC is called the instantaneous driving mode PMC. From the measurement results, we can find that the instantaneous driving mode PMC possesses two major benefits compared with the non-instantaneous driving mode PMC. The first is the electricity transfer efficiency. At the rotation speed of 120 rpm, the accumulated charge in the instantaneous driving mode PMC is about 900 µC, presenting an over 12-fold enhancement compared to the non-instantaneous driving mode PMC (73 µC). This indicates the higher electricity transfer efficiency of the instantaneous driving mode PMC.
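The "accumulated charge" comparison above (about 900 µC versus 73 µC) is just the time integral of the discharge current. A minimal sketch of that bookkeeping is given below on a synthetic current trace; the waveform is an illustrative stand-in, not the measured data from Figure 4f.

```python
import numpy as np

# Transferred charge is the time integral of the discharge current, Q = ∫ I dt.
# The waveform below is a synthetic stand-in for the measured trace.
t = np.linspace(0.0, 0.4, 4001)                 # ~0.4 s discharge window
I = 5.5e-3 * np.exp(-t / 0.15) * (1 + 0.1 * np.sin(2 * np.pi * 200 * t))

Q = np.trapz(I, t)                               # charge in coulombs
print(f"transferred charge ~ {Q * 1e6:.0f} uC")  # same order as the ~900 uC quoted above
```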
The second benefit is the long duration of the discharge current. At the rotation speed of 120 rpm, the duration of the discharge current is approximately 0.4 s, presenting an over 20-fold enhancement compared to the case of the non-instantaneous driving mode PMC (0.02 s). The longer duration indicates broader application scenarios for the instantaneous driving mode PMC, since many electronics, such as an RF emitter, need long driving times. These two significant benefits indicate that the instantaneous driving mode PMC is more suitable for the instantaneous self-powered sensing system.

Demonstration of an Instantaneous Self-Powered Sensing Application

Finally, as a proof of concept, an integrated instantaneous self-powered sensing system is constructed. The pr-TENG is selected as the energy harvester, an RF transmission circuit as the sensor, which will emit a control command, and the instantaneous driving mode PMC as the energy management circuit to supply a stable output for the RF transmission circuit. The detailed system design is presented in Figure 5. The RF transmission circuit is composed of an encoder and an emitter. An 8-bit low-power STM8S003F microcontroller unit (MCU) is used as an encoder to generate a clock/data controlling signal for the emitter. The emitter CC115L is a low-power sub-GHz RF emitter that operates in the frequency ranges of 300-348 MHz, 387-464 MHz, and 779-928 MHz. This integrated instantaneous self-powered sensing system is constructed as a smart home prototype, in which the pr-TENG, the instantaneous driving mode PMC, and the RF transmission circuit are built into a door shell. The assembly process of the self-powered RF transmission system is shown in the Supplementary Content (Figure S1). When the doorknob is rotated, the pr-TENG harvests hand rotational energy and converts it into electricity. The RF transmission circuit is then powered, and it successfully emits a modulated carrier signal. A receiver 60 m away receives the modulated carrier signal (bottom right inset in Figure 5), demodulates the encoded information from the carrier, and executes a command, such as switching a lamp on/off. The whole process is shown in a video in the Supplementary Content (Video S1). The demonstration shows an instantaneous self-powered data transmission process. The hand-induced mechanical energy that the pr-TENG harvests can act as a power supply for the RF data transmission. The term "instantaneous" here means that the energy harvesting and the driving of the RF transmission are both completed in an instantaneous period (about 0.25 s).
Environmental information, such as temperature, humidity, and wind velocity, can be measured by a sensor network, and these data can be transmitted by this instantaneous self-powered sensing system for remote control applications.

Conclusions

To summarize, in this work we construct an instantaneous self-powered sensing system. For this purpose, we first point out that two requirements should be met: one is that the generator's output power should be high enough; the other is that the power management circuit should possess high electricity management efficiency. For the first requirement, we introduce the pr-TENG and the EMG as high-power energy harvesters and demonstrate that the pr-TENG possesses significant advantages in improving power compared to the EMG, such as its design with hundreds of grids. Taking its other advantages into consideration, we conclude that the pr-TENG is very suitable for instantaneous self-powered sensing systems. For the second requirement, we design an instantaneous driving mode PMC based on the logic chip LTC3330. It presents a longer duration of discharge current and a higher electricity transfer efficiency. Finally, we design a real instantaneous self-powered sensing system based on the planar-structured rotary TENG and successfully achieve instantaneous RF transmission self-powered by the TENG. Meanwhile, we construct a smart home prototype based on the self-powered RF transmission system. The pr-TENG, the instantaneous driving mode PMC, and the RF transmission circuit are integrated into a door shell. During an instantaneous period of doorknob rotation by hand, the pr-TENG harvests the rotational mechanical energy, and a self-powered remote-control command is subsequently emitted. Considering that the transmitter (CC115L) is the main load of the instantaneous self-powered RF transmission system and that the current consumption of the transmitter varies from nA to mA for different operation modes, we will take further steps to achieve a more efficient instantaneous self-powered RF transmission system in our following studies, such as optimizing the instantaneous driving mode PMC to operate the transmitter more efficiently.
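Since the CC115L transmitter is named as the main load and its consumption spans nA to mA across operating modes, a rough supply budget can be formed from the figures quoted above (about 900 µC delivered over an instantaneous period of about 0.25 s). This is an order-of-magnitude estimate, not a measured operating point.

```python
# Order-of-magnitude supply budget for the instantaneous driving mode PMC.
Q_transferred = 900e-6   # charge delivered per actuation, ~900 uC (quoted above)
t_active = 0.25          # instantaneous working period, ~0.25 s (quoted above)

I_avg = Q_transferred / t_active
print(f"average deliverable current ~ {I_avg * 1e3:.1f} mA")
# ~3.6 mA, i.e. in the mA range that the transmitter may draw while active.
```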
This work not only demonstrates a clear route to achieve higher working efficiency and a shorter working period for the self-powered sensing system but also explores a broader range of potential applications in the smart home, environment monitoring, and security surveillance.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/s21113741/s1, Figure S1: The assembly process of the self-powered RF transmission system.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Table A1. The comparison of different environmental energy harvesting technologies in self-powered sensing system applications.

Triboelectric nanogenerator based. Advantages: high power; great advantage in low-frequency mechanical energy harvesting; low cost; lightweight; flexibility. Disadvantages: energy loss induced by friction damping; difficulty in power management circuit design. Applications in self-powered sensing systems: environmental monitoring [28]; self-powered pressure sensor [29]; data acquisition/transmission [11].

Electromagnetic generator based. Advantages: high power; great advantage in high-frequency mechanical energy harvesting. Disadvantages: bulky; hard to integrate with electronics. Applications in self-powered sensing systems: vibration monitoring [30]; traffic monitoring [31].

Piezoelectric nanogenerator based. Advantages: high output voltage; light weight; high integration level. Disadvantages: fragile; low power. Applications in self-powered sensing systems: data transmission [6]; self-powered pressure sensor [32].

Thermoelectric generator based. Advantages: thermal energy harvesting; flexibility; wearable. Disadvantages: low power; not applicable to mechanical energy harvesting. Applications in self-powered sensing systems: health monitoring [33].
7,896
2021-05-28T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
On the Borel summability of WKB solutions of certain Schr\"odinger-type differential equations

A class of Schr\"odinger-type second-order linear differential equations with a large parameter $u$ is considered. Analytic solutions of this type of equations can be described via (divergent) formal series in descending powers of $u$. These formal series solutions are called the WKB solutions. We show that under mild conditions on the potential function of the equation, the WKB solutions are Borel summable with respect to the parameter $u$ in large, unbounded domains of the independent variable. It is established that the formal series expansions are the asymptotic expansions, uniform with respect to the independent variable, of the Borel re-summed solutions and we supply computable bounds on their error terms. In addition, it is proved that the WKB solutions can be expressed using factorial series in the parameter, and that these expansions converge in half-planes, uniformly with respect to the independent variable. We illustrate our theory by application to a radial Schr\"odinger equation associated with the problem of a rotating harmonic oscillator and to the Bessel equation.

Introduction and main results

In this paper, we study Schrödinger-type differential equations of the form (1.1), where u is a real or complex parameter, z lies in some domain D of a Riemann surface, and the potential function f(u, z) is an analytic function of z having the form f(u, z) = f_0(z) + f_1(z)/u + f_2(z)/u^2. It is assumed that the coefficient functions f_n(z) are analytic functions of z in the domain D, bounded or otherwise, and are independent of u. Furthermore, we suppose that f_0(z) does not vanish in D. We are interested in asymptotic expansions of analytic solutions of (1.1) when the parameter u becomes large. It is convenient to work in terms of the transformed variables ξ and W(u, ξ) given by (1.2). (The path of integration, except perhaps its endpoint z_0, must lie entirely within D. In the case that z_0 is a boundary point of D, it is assumed that the integral converges as t → z_0 along the contour of integration.) The transformation (1.2) maps D onto a domain G, say. In terms of the new variables, equation (1.1) becomes (1.3), which possesses formal solutions of the type (1.4), where the coefficients A_n^±(ξ), which are analytic functions of ξ in G, can be determined recursively by the relation (1.5) for A_{n+1}^±(ξ), with the convention A_0^±(ξ) = 1. The constants of integration in (1.4) and (1.5) are arbitrary. An alternative method for the evaluation of the coefficients A_n^±(ξ), which avoids nested integrations, can be found in Appendix A. The formal solutions (1.4) are usually referred to as WKB solutions, in honour of the physicists Wentzel [37], Kramers [24] and Brillouin [7], who independently discovered these solutions in a quantum mechanical context in 1926. To be strictly accurate, the WKB solutions were discussed earlier by Carlini [8] in 1817, by Liouville [25] and Green [18] in 1837, by Strutt (Lord Rayleigh) [31] in 1912, and again by Jeffreys [21] in 1924. The reader is referred to the book of Dingle [12, Ch. XIII] for a historical discussion and critical survey. In general, the series in (1.4) diverge, and the most that can be established is that in certain subregions of G they provide uniform asymptotic expansions of solutions of the differential equation (1.3). Olver [30, Ch. 10, §9] demonstrated the asymptotic nature of the expansions (1.4) via the construction of explicit error bounds.
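The displayed equations (1.2)-(1.5) referred to above are not reproduced in this text, so the following LaTeX sketch records only the generic shape that the transformed equation and its formal WKB solutions take in this kind of problem. It is a plausible reconstruction based on the roles of φ, ψ and A_n^± described in the sequel and on the standard references cited (e.g. Olver, Ch. 10), not a verbatim copy of the paper's formulas.

```latex
% Generic shape of the transformed equation and of its formal WKB solutions
% (a reconstruction consistent with the surrounding text, not a verbatim quote).
\[
  \frac{\mathrm{d}^{2}W(u,\xi)}{\mathrm{d}\xi^{2}}
    = \bigl(u^{2} + u\,\varphi(\xi) + \psi(\xi)\bigr)\,W(u,\xi),
  \qquad
  W^{\pm}(u,\xi) \sim \mathrm{e}^{\pm u\xi}
    \sum_{n=0}^{\infty}\frac{A_{n}^{\pm}(\xi)}{u^{n}},
  \qquad A_{0}^{\pm}(\xi)=1 .
\]
```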
In this paper we take a different approach and study the WKB solutions (1.4) from the point of view of Borel summability. Our main aim is to show that the expansions (1.4) are Borel summable in well-defined subdomains Γ ± of G, provided that φ(ξ) and ψ(ξ) satisfy certain mild requirements. We shall construct exact, analytic solutions W ± (u, ξ) of the differential equation (1.3) in terms of Laplace transforms (with respect to the parameter u) of some associated functions, called the Borel transforms of the WKB solutions. The formal expansions in (1.4) will then arise as asymptotic series of these exact solutions. By carefully analysing the Borel transforms, we will derive new, computable bounds for the remainder terms of the asymptotic expansions. In addition, it will be shown that the WKB solutions can be expressed using factorial series in the parameter u, and that these expansions converge for u > 0, uniformly in the independent variable ξ. The present work is a first step towards the global analysis of the WKB solutions (1.4). Our main result demonstrates that the WKB solutions are Borel summable provided that one stays away from Stokes curves emerging from transition points of f 0 (z) (see Remark (i) following Theorem 1.1). In particular, the problem of global connection formulae for WKB solutions across Stokes curves is not studied in this paper. A possible approach to tackle such problems is discussed briefly in Section 7. Over the past several decades, there has been increasing interest in the study of Borel summability of asymptotic series, and this field is closely linked with exponential asymptotics (see, for example, [28]). Indeed, there have been extensive developments in summability of singular variable asymptotic expansions, with fewer general results currently existing for the more complicated singular parameter case. The relevance of Borel summability to the WKB analysis was first clearly observed by Bender and Wu [4]. In the seminal work [35], Voros showed how to analyse Borel re-summed WKB solutions and demonstrated the relationship between the singular points of the Borel transforms and the global connection formulae. Dunster et al. [13] studied differential equations of the type (1.3) in the case that φ(ξ) ≡ 0 (or, equivalently, f 1 (z) ≡ 0). Under suitable conditions on ψ(ξ), they established the Borel summability of WKB solutions away from Stokes curves and derived their convergent factorial series expansions. Their analysis was based on a method of Nørlund. Kamimoto and Koike [22] considered the equation (1.1) in the special case that f (u, z) = f (z) is meromorphic and independent of u, and proved the Borel summability of the WKB solutions using WKB theoretic transformation series introduced originally by Aoki et al. [1]. Their results hold in appropriate neighbourhoods of Stokes curves issuing from either a simple turning point or a simple pole of f (z) (see also [32]). For further contributions, see, for example, [2,3,5,9,11,15,16,19,20,33,34] and the references therein. Before stating our results, we introduce the necessary assumptions, notation and definitions. Hereinafter, we will suppose that the functions φ(ξ) and ψ(ξ) fulfil at least one of the following two conditions. Condition 1.1. The functions φ(ξ) and ψ(ξ) are analytic in a domain ∆ ⊂ G, which has the property that there exist positive constants c and ρ independent of u and ξ, such that Condition 1.2. 
The functions φ(ξ) and ψ(ξ) are analytic in a domain ∆ ⊂ G, which has the property that there exist positive constants c and ρ independent of u and ξ, such that Often these conditions can be brought about by a suitable normalisation of the differential equation (1.3). (See the second example in Section 6, below.) We denote by Γ ± (d) any (non-empty) subdomains of ∆ that satisfy the following two requirements: (i) The distance between each point of Γ ± (d) and each boundary point of ∆ has a positive lower bound d (which is to be chosen independently of u). 2 (ii) If ξ ∈ Γ ± (d), then ξ ∓ x ∈ Γ ± (d) for all x > 0. Note that condition (ii) requires ∆ to contain at least one infinite strip that is parallel to the real axis. In particular, ∆ (and hence G) has to be unbounded. We shall often integrate along half-lines that are parallel to the real axis. For any given w ∈ Γ + (d), P(−∞, w) denotes the half-line that runs from −∞ + i w to w. Observe that by condition (ii), P(−∞, w) lies entirely in Γ + (d). Similarly, for any w ∈ Γ − (d), P(w, +∞) will denote the half-line that connects w with +∞ + i w. If the orientations of these paths are reversed, we will adopt the notation P(w, −∞) and P(+∞, w), respectively. 2 It will be assumed throughout the paper that d is always chosen so that none of the domains Γ ± (d) is empty. In (1.6), the contours of integration have to lie entirely within G and can be infinite provided the integrals converge. It is readily seen that if such solutions exist, they must be unique. We would like our solutions to posses asymptotic expansions of the form as |u| → +∞, uniformly with respect to ξ ∈ Γ ± (d). The coefficients A ± n (ξ) must satisfy recurrence relations of the type (1.5) with a suitable choice of integration constants. Note that (1.7) implies lim ξ→∓∞ A ± n (ξ) = 0. We will show in Section 2 that if A ± 0 (ξ) = 1 and for n ≥ 0 and ξ ∈ Γ ± (d), then the requirements lim ξ→∓∞ A ± n (ξ) = 0 are fulfilled. The A ± n (ξ)'s defined in this way will be the coefficients in the asymptotic expansions (1.8) of our solutions (1.6), and throughout the rest of the paper, unless stated otherwise, A ± n (ξ) will always refer to the coefficients specified by the above recurrence relation. Finally, we define, for any r > 0, Thus, U (r) consists of all points whose distance from the positive real axis is strictly less than r. We are now in a position to formulate our main result. (i) In order to gain a better understanding of the main result, consider the following special case. Assume that φ(ξ) and ψ(ξ) are (possibly multivalued) holomorphic functions in the ξ-plane, save for a countable number of singularities (which may include branch points), located at ξ = ξ k , k ∈ Z ≥0 . Let us focus on the Borel summability of the WKB solution W + (u, ξ). We introduce brunch cuts from the singularities ξ k to +∞ + i ξ k . Suppose that φ(ξ) and ψ(ξ) are analytic in the resulting domain and satisfy the estimates posed in Conditions 1.1 or 1.2. Then we can set The pre-image in the z-plane of each of the brunch cuts is a Stokes curve emanating from a transition point of f 0 (z) (see, e.g., [23] are also solutions of the differential equation (1.3) for all sufficiently large values of |u|. 
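Before the detailed statements, it may help to recall the textbook Borel-Laplace summation pattern that the construction of the solutions (1.6) and the Laplace representation (1.10) follow. The display below is generic background in schematic notation, not the paper's own formulas.

```latex
% Standard Borel--Laplace scheme for a formal power series in 1/u (background only).
\[
  \eta(u) \sim \sum_{n=0}^{\infty}\frac{a_{n}}{u^{n}}
  \quad\longmapsto\quad
  \widehat{\eta}(t) := \sum_{n=1}^{\infty}\frac{a_{n}}{(n-1)!}\,t^{\,n-1}
  \quad\longmapsto\quad
  \eta(u) = a_{0} + \int_{0}^{\infty}\mathrm{e}^{-ut}\,\widehat{\eta}(t)\,\mathrm{d}t,
\]
% the last step being meaningful when \widehat{\eta} continues analytically along
% the positive real axis and is of at most exponential growth there.
```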
The right-hand side can be re-expressed in the form where, for each 0 < r < d, the functions f ± (t, ξ, β ± ) are analytic in (t, ξ) ∈ U (2r) × Γ ± (d), and the Laplace transforms converge for u > K ± + σ + σ, where σ is any positive number and K ± = K ± (σ, r) are the corresponding constants given in Theorem 1. To prove Theorem 1.1, we shall also need an inequality analogous to (1.13) but this inequality will be derived as a direct consequence of Conditions 1.1 or 1.2. In Theorem 1.2 below, we give explicit bounds for the error terms of the asymptotic expansions (1.12). To state the theorem, we require some notation. For any σ > 0 and ξ ∈ Γ ± (d), we define (1.14) and for any 0 < r < d, we set where f ± (t, ξ) are the Borel transforms appearing in Theorem 1.1. For an upper bound on the quantities C ± , see Section 4. With these notation, we have the following theorem. To obtain sharp bounds for R ± N (u, ξ), we may choose the parameters σ and r for each u and ξ separately. For example, one can take σ = 1 2 u and r to be any number that is less than the distance of ξ to the boundary of ∆. In the following theorem, we provide convergent alternatives to the asymptotic expansions (1.12). The coefficients in these expansions depend on an additional (positive) parameter ω and can be generated by the recurrence relations Note that the B ± n (ω, ξ)'s are polynomials in ω of degree n − 1 and are analytic functions of ξ in Γ ± (d). A different expression for these coefficients involving the Stirling numbers of the first kind is given by (5.3). which are absolutely convergent for u > σ > 0 (σ arbitrary), uniformly for ξ ∈ Γ ± (d). The errors committed by truncating the series (1.18) after N ≥ 0 terms do not exceed in absolute value The quantities C ± are defined by (1.15) with r = π 4ω . The remaining part of the paper is structured as follows. The proof of Theorem 1.1 consists of two steps. In Section 2, we prove that the formal Borel transforms of the WKB expansions converge in a neighbourhood of the origin. This is shown via some Gevrey-type estimates for the coefficients A ± n (ξ). The second step involves analytically continuing the Borel transforms in a strip containing the positive real axis. We construct the continuation as a solution of an integral equation in Section 3, following ideas of Dunster et al. [13]. In Section 4, we provide the proof of the error bounds given in Theorem 1.2. The convergent factorial expansions and the corresponding error bounds stated in Theorem 1.3 are proved in Section 5. In Section 6, we illustrate our results by application to a radial Schrödinger equation associated with the problem of a rotating harmonic oscillator and to the Bessel equation. The paper concludes with a discussion in Section 7. Pre-Borel summability of the WKB solutions In this section, we prove the pre-Borel summability of the WKB solutions, i.e., we show that the formal Borel transforms of the WKB expansions (1.8) are analytic near the origin. In order to prove Theorem 2.1, we shall establish a series of lemmata. Lemma 2.1. Let d > 0. Assume Conditions 1.1 or 1.2. Then there exists a positive constant c 1 , independent of u and ξ, such that for any non-negative integer j, the following estimates hold: Proof. We shall prove that Lemma 2.1 holds with and let j be a non-negative integer. To prove the first inequality in (2.2), we use Conditions 1.1 or 1.2 and Cauchy's formula, to deduce Here µ = 1 or µ = 1/2 according to whether we assume Conditions 1.1 or 1.2. 
Now, when |ξ| ≤ 1 + d, We also have which, together with (2.3), imply the first inequality in (2.2). The other bounds in (2.2) can be proved in a similar manner by applying the inequalities which hold for all t ∈ ∆. Here, again, we take µ = 1 or µ = 1/2 according to whether we assume Conditions 1.1 or 1.2. Then there exists a positive constant c 2 , independent of u and ξ, such that the following estimates hold: Proof. We shall show that Lemma 2.1 holds with c 2 = 4 max(c, Therefore, we have the following series of estimates: If, instead, Condition 1.2 holds, then Now, when ∓ ξ > 1, These two estimates, together with (2.5) or (2.6) and the choice of c 2 , imply the first inequality in (2.4). The third estimate in (2.4) can be obtained in a similar way by employing the inequality To prove the second inequality in (2.4), we can proceed as follows. Using Condition 1.1 and Cauchy's formula, we derive and when |ξ| ≥ 1 + d, Hence, from (2.7), we can infer that With our choice of the constant c 2 , this bound implies the second inequality in (2.4). If, instead, Condition 1.2 is assumed, then By the choice of c 2 , this bound implies the second inequality in (2.4). Proof. The identity can be proved by re-writing the left-hand side as a telescoping sum (with the convention that 1/(−1)! = 0): Proof of Theorem 2.1. We begin by proving that for ξ ∈ Γ ± (d), n ≥ 1 and m ≥ 0, it holds that for n ≥ 1. We proceed via induction on n. By definition, Employing Lemmas 2.1 and 2.2, together with the definitions of c 3 and C 1 (d), we deduce Now let m be an arbitrary positive integer. Differentiating (2.12) m times yields By an application of Lemma 2.1, and the definitions of c 3 and C 1 (d), we obtain Assume that (2.9) holds for all d m A ± k (ξ)/dξ m with 1 ≤ k ≤ n and m ≥ 0. By definition, (2.13) Using the induction hypothesis, Lemmas 2.1 and 2.2, and the definition of C n+1 (d), we deduce Now let m be an arbitrary positive integer. Differentiating (2.13) m times gives Using the induction hypothesis, Lemmas 2.1 and 2.3, and the definition of C n+1 (d), we obtain This completes the proof of the bound (2.9). It is straightforward to show that C n (d) grows at most polynomially in n (see Section 4). Hence, we obtain from (2.9) that lim sup for any ξ ∈ Γ ± (d). Therefore, by the Cauchy-Hadamard theorem, the power series (2.1) are convergent when t ∈ B(2d), and define analytic functions g ± (t, ξ) on B(2d) (for each fixed ξ ∈ Γ ± (d)). From (2.9) we can also infer that the series (2.1) converge uniformly on Γ ± (d) and hence the functions g ± (t, ξ) are analytic with respect to ξ for each fixed t. Since the functions g ± (t, ξ) are continuous in (t, ξ) and are analytic in each of their variables, they are analytic functions in B(2d) × Γ ± (d). Borel summability of the WKB solutions In this section, we shall show that the functions η ± (u, ξ) can be represented as Laplace transforms of some associated functions f ± (t, ξ) that are analytic in U (2d) × Γ ± (d). These associated functions will coincide with g ± (t, ξ) of Theorem 2.1 in their common domains of definition. The proof follows closely that in [13]: we make a Laplace "ansatz" to derive a pair of partial differential equations for f ± (t, ξ) with certain boundary conditions. Thus, we seek f ± (t, ξ) such that where u > 0 and ξ ∈ Γ ± (d). By partial integration one easily finds that if f ± (t, ξ) satisfy and the conditions and hence (1.6) satisfy the equation (1.3). We seek solutions of (3.1) in U (2d) × Γ ± (d) such that for all t ∈ U (2d). 
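As an aside, the convergence step completed just above is the usual Gevrey-1 implication: factorially bounded coefficients force the formal Borel transform to have a positive radius of convergence. The display records that elementary fact in schematic notation; the constants shown are placeholders, not the paper's C_n(d).

```latex
% Schematic Gevrey-1 implication behind the Cauchy--Hadamard step.
\[
  \bigl|A_{n+1}(\xi)\bigr| \le C\,\frac{n!\,(n+1)^{p}}{(2d)^{n}} \quad (n\ge 0)
  \;\Longrightarrow\;
  \limsup_{n\to\infty}\Bigl|\frac{A_{n+1}(\xi)}{n!}\Bigr|^{1/n} \le \frac{1}{2d},
\]
\[
  \text{so that } g(t,\xi) := \sum_{n=0}^{\infty}\frac{A_{n+1}(\xi)}{n!}\,t^{\,n}
  \text{ converges for } |t| < 2d .
\]
```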
This is consistent with the requirement (1.7). Using (1.9) and Theorem 2.1, one can show that g ± (t, ξ) are solutions of (3.1) on B(2d) × Γ ± (d). In order to show that g ± (t, ξ) can be continued analytically to the whole of U (2d) × Γ ± (d) and are of exponential type in t as t → +∞ in U (2d), we will transform (3.1) into an integral equation. In what follows, we discuss the construction of f − , an analogous argument can be used to construct f + . We define temporarily Now, we integrate the equation (3.4) in τ from ζ to t and in w from ξ + 1 2 ζ to infinity. Applying the limit condition (3.5) and integrating once by parts, we obtain Here ζ ∈ B(2d), t ∈ U (2d), and ξ, ξ + 1 2 (ζ − t) ∈ Γ − (d). It is not easy to use this integral equation and the initial condition in (3.5) directly to prove the existence of a solution of (3.1) (satisfying (3.3)). Since we would like our solution to coincide with the solution g − (t, ξ) defined in Theorem 2.1 when t ∈ B(2d), and the only restriction on ξ is ξ ∈ Γ − (d), we express t as x + ζ, where x ≥ 0, replace ξ by ξ + 1 2 x, and f − ζ, ξ + 1 2 x by g − ζ, ξ + 1 2 x to obtain the following linear integral equation Note that if t ∈ B(2d), then g − (t, ξ) is a solution of (3.6). Also observe from the definition of for all x ≥ 0. Equation (3.6) will eventually be used to define f − (t, ξ). Let σ be an arbitrary (fixed) positive number. Denote by B σ the complex vector space of continuous functions h(x, ξ) on R ≥0 × Γ − (d) such that for each h ∈ B σ there exist a constant K (depending only on σ) such that where G − (σ, ξ) is defined in (1.14). Let us define the norm h of h(x, ξ) to be the infimum of all such K for which the inequality (3.7) holds. Equipped with this norm, B σ becomes a Banach space. In the following, we shall use the facts that for fixed ξ ∈ Γ − (d), s → G − (σ, ξ + s) is a monotonically decreasing function, lim s→+∞ G − (σ, ξ + s) = 0 and (We will assume that φ(t) ≡ 0. The case of φ(t) ≡ 0 can be handled in a similar manner.) Consider the operator acting on the space B σ . By the Cauchy-Schwarz inequality, we can assert that Consequently, We also have Thus, L is a B σ → B σ linear operator whose induced operator norm L ≤ 1 √ 6 + 1 12 < 1 2 . Now, for each ζ ∈ B(2d), define the operator T ζ , acting on the space B σ , via Since max(1, sgn( ξ) | ξ| ρ )g − ζ, ξ + 1 2 x ≤ max 1, sgn ξ + 1 2 x ξ + 1 2 x ρ g − ζ, ξ + 1 2 x and the right-hand side is bounded (cf. Theorem 2.1), T ζ h(x, ξ) belongs to the space B σ . We also have for any h 1 (x, ξ), h 2 (x, ξ) ∈ B σ . Consequently, T ζ is a B σ → B σ contraction. Therefore, for each ζ ∈ B(2d) there is a unique function f (x, ξ; ζ) in B σ defined on R ≥0 × Γ − (d) which satisfies To complete the construction of f − (t, ξ), it remains to show that for ζ ∈ B(2d) these functions of x can be combined to yield an analytic solution of (3.6) on U (2d) × Γ − (d). We first claim that when ξ ∈ Γ − (d), ζ ∈ B(2d), x ≥ 0 and x + ζ ∈ B(2d). This follows by uniqueness since each side is a solutions of (3.8), and adding the restriction x + ζ ∈ B(2d) to the definition of B σ and L does not increase the norm. Then, for all real numbers δ > 0 and ζ + δ ∈ B(2d), we find from (3.8) (with x replaced by x + δ) and (3.9) that (3.10) Now, regarding (3.6) as an equation satisfied by g − (x + ζ, ξ), and in that equation replacing x by δ and ξ by ξ + 1 2 x, we see that the first three terms on the right-hand side of (3.10) can be replaced by g − ζ + δ, ξ + 1 2 x . 
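The existence-and-uniqueness step obtained above through the operator T_ζ is the Banach fixed-point argument in its simplest form; since the linear part L has norm below 1/2, the fixed point is also given by a Neumann series. The display below records this generic fact in schematic notation, with h_0 standing for the inhomogeneous term built from g^-.

```latex
% Generic contraction / Neumann-series step (schematic notation).
% If T h = h_0 + L h on a Banach space with \|L\| \le q < 1, then
\[
  \|T h_{1} - T h_{2}\| = \|L(h_{1}-h_{2})\| \le q\,\|h_{1}-h_{2}\|,
  \qquad
  h^{*} = (I-L)^{-1}h_{0} = \sum_{k=0}^{\infty} L^{k} h_{0},
  \qquad
  \|h^{*}\| \le \frac{\|h_{0}\|}{1-q}.
\]
```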
Then, if we replace τ by τ + δ in the third and fourth integrals and w by w + 1 2 δ in the fourth integral on the right-hand side of (3.10), we deduce Hence, by uniqueness, it follows from (3.8) and (3.11) that for all real x ≥ 0, δ > 0 and ζ, ζ + δ ∈ B(2d) since each is a solutions of the same integral equation. The equality (3.12) shows that is well defined for ξ ∈ Γ − (d), ζ ∈ B(2d) and x ≥ 0. Let t = x + ζ be a point in U (2d). We regard x ≥ 0 as fixed and let ζ varying in a neighbourhood of the origin. Since L is independent of ζ, and differentiation with respect to ζ commutes with L, it follows that f (x, ξ; ζ) is an analytic function of ζ. Hence f − (t, ξ) is analytic with respect to t in U (2d) (for each fixed ξ ∈ Γ − (d)), with ∂f − /∂t = ∂f /∂ζ. Since f − (t, ξ) is also continuous in (t, ξ) and analytic in ξ for each fixed t, it follows that Now, by Theorem 2.1, for each 0 < r < d, there exists a number C − > 0, independent of ζ, ξ and x, such that when ζ ∈ B(2r), ξ ∈ Γ − (d) and x ≥ 0. Thus, for each ζ ∈ B(2r), g − ζ, ξ + 1 2 x ∈ B σ and g − ζ, ξ + 1 2 x ≤ 1 2 C − for all σ > 0. Therefore, by (3.8), for ζ ∈ B(2r), ξ ∈ Γ − (d) and x ≥ 0. Upon expressing t ∈ U (2r) in the form t = t + ζ with ζ ∈ B(2r), we obtain (3.15) for all ξ ∈ Γ − (d). The finiteness of the supremum on the right-hand side is guaranteed by Lemma 2.2. This proves (1.11) and thus the convergence of (1.10). The validity of the asymptotic expansions (1.12) follows from Theorem 1.2 which we prove in the forthcoming section. Error bounds In this section, we prove the error bounds given in Theorem 1.2. Let N be an arbitrary positive integer. Integrating by parts N times in (1.10), we obtain (1.16) with Let 0 < r < d and σ > 0. From (3.13)-(3.15) and the corresponding results for f + (t, ξ), we can infer that if for all (t, ξ) ∈ U (2r) × Γ ± (d). Thus, by Cauchy's formula, for all t ≥ 0 and ξ ∈ Γ ± (d). Employing this estimate in (4.1) yields It is possible to obtain a simple upper bound for the quantities C ± by referring to the results in Section 2. From the definitions of C ± and g ± (t, ξ), and the estimate (2.9), we can assert that Further simplification is possible by bounding the C n (d)'s. To this end, we note that 1 + x < e x for all x > 0, and for all n ≥ 1 (cf. [30,Ch. 8,§3,Ex. 3.3]). Therefore, using the definitions (2.10) and (2.11), we deduce gives a computable upper bound on C ± . Convergent factorial series In this section, we prove the convergent factorial expansions and the corresponding error bounds stated in Theorem 1.3. Let d > 0 and ω > π 4d , and define r > 0 by ω = π 4r . Denote by Λ(2r) the region in the t-plane defined by According to Theorem 1.1, the functions f ± (t, ξ) are analytic in Λ(2r) × Γ ± (d) and satisfy the estimate (1.11) for any fixed σ > 0. Consequently, from the fundamental theorem on factorial series [36,Ch. 11,Theorem 46.2], the functions η ± (u, ξ) posses expansions of the form (1.18) which are absolutely convergent for u > σ > 0 (σ arbitrary), for each ξ ∈ Γ ± (d). It therefore remains to show that the convergence is uniform in ξ. To this end, we show that for ξ ∈ Γ ± (d), 0 < σ < ω and n ≥ 0, it holds that where the implied constants are independent of ξ and n. The coefficients B ± n (ω, ξ) are the following series expansion coefficients with t ∈ Λ(2r) and ξ ∈ Γ ± (d) (cf. [36, pp. 325-326]). Hence, by Cauchy's integral formula, where the path of integration is a small loop that encircles the origin in the positive sense. 
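Stepping back to the error-bound mechanism of Section 4 above: the expansion (1.16) and its remainder arise from repeated integration by parts of the Laplace integral, a generic identity recorded below. Here f is a schematic stand-in for the Borel transforms f^±(t, ξ), assumed smooth with derivatives of at most exponential growth of rate smaller than ℜu.

```latex
% Repeated integration by parts of a Laplace integral (generic identity).
\[
  \int_{0}^{\infty}\mathrm{e}^{-ut}f(t)\,\mathrm{d}t
  = \sum_{n=0}^{N-1}\frac{f^{(n)}(0)}{u^{\,n+1}}
    + \frac{1}{u^{N}}\int_{0}^{\infty}\mathrm{e}^{-ut}f^{(N)}(t)\,\mathrm{d}t,
\]
\[
  \Bigl|\frac{1}{u^{N}}\int_{0}^{\infty}\mathrm{e}^{-ut}f^{(N)}(t)\,\mathrm{d}t\Bigr|
  \le \frac{1}{|u|^{N}}\int_{0}^{\infty}\mathrm{e}^{-(\Re u)\,t}\,
        \bigl|f^{(N)}(t)\bigr|\,\mathrm{d}t .
\]
```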
Next, on nothing the bound (1.11) (and assuming 0 < σ < ω), we deform the contour of integration by expanding it to the boundary of the domain Λ(2r). Then we split the resulting integral into two parts, the first from t = +∞ + π 2ω i to t = − 1 ω log 2, and the second from t = − 1 ω log 2 to t = +∞ − π 2ω i. On making the change of integration variables s = i log(1 − e −ωt ), we obtain 2 |s| π −σ/ω provided 0 < |s| < π, with the constants K ± being independent of s and ξ. Employing these estimates in (5.2) and appealing to Lemma 12.3 and Ex. 12.2 of Olver [30, pp. 99-100], we deduce the desired result (5.1). Therefore, if 0 < σ < ω, the estimates (5.1) and the known asymptotics for the ratio of two gamma functions [29, Eq. 5.11.12] give where the implied constants are independent of ξ and n. An appeal to the Weierstrass M -test establishes that the series (1.18) indeed converge uniformly for ξ ∈ Γ ± (d) provided u > σ. Applications In this section, we give two illustrative examples to demonstrate the applicability of our theory. 6.1. A radial Schrödinger equation. As a first application of the theory, we consider the radial Schrödinger equation which is associated with the rotating harmonic oscillator. In physical applications, the parameters u, λ and are real and non-negative, with an integer, and u large (see, e.g., [17]). We introduce a branch cut along the real axis from 1 to −∞ so that z is restricted to D = {z : |arg(z − 1)| < π}. Application of the transformation The domain D is mapped into the sectorial region G = {ξ : |arg ξ| < 2π} on the universal covering of C \ {0}. We remark that Dunster [14] considered the problem of finding rigorous asymptotic expansions of the eigenvalues of the rotating harmonic oscillator. His analysis relies on WKB theoretic methods applied to the equation (6.1), although his definition of the WKB solutions is different from ours. 6.2. The Bessel equation. As a second application, we consider the Bessel equation where w(ν, z) = z 1/2 H To overcome this obstruction we replace z by νz, and consider instead the equation which has particular solutions w(ν, z) = z 1/2 H (1) ν+κ (νz) and w(ν, z) = z 1/2 H ν+κ (νz). This equation can now be treated by means Theorem 1.1. Similarly to the first example, we make a branch cut along the real axis from 1 to −∞ and restrict z to the domain D = {z : |arg(z − 1)| < π}. The transformation The image of D under the mapping (6.5) can be determined by the following considerations: (i) When z > 1, ξ is purely imaginary; z = 1, +∞ corresponding to ξ = 0, i∞, respectively. Discussion We studied the Borel summability of WKB solutions of certain Schrödinger-type differential equations with a large parameter. It was shown that under mild requirements on the potential function of the equation, the WKB solutions are Borel summable with respect to the large parameter in vast, unbounded domains of the independent variable. We demonstrated that the formal WKB expansions are the asymptotic expansions, uniform with respect to the independent variable, of the Borel re-summed solutions and supplied computable bounds on their remainder terms. In addition, it was proved that the WKB solutions can be expressed using factorial series in the parameter, and that these expansions converge in half-planes, uniformly in the independent variable. The theory presented here is a first step towards the global analysis of the WKB solutions of the differential equation (1.1). 
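Looking back at the factorial-series construction in Section 5, the conversion from the Laplace representation to a convergent factorial series ultimately rests on a classical identity for Laplace transforms of powers of 1 − e^{-ωt}. It is recorded below as background and is not a formula quoted from the paper.

```latex
% Classical identity underlying (inverse) factorial series.
\[
  \int_{0}^{\infty}\mathrm{e}^{-ut}\bigl(1-\mathrm{e}^{-\omega t}\bigr)^{n}\,\mathrm{d}t
  = \frac{n!\,\omega^{\,n}}{u\,(u+\omega)(u+2\omega)\cdots(u+n\omega)},
  \qquad \Re u > 0,\ \omega > 0,
\]
% so expanding a Borel transform in powers of (1-\mathrm{e}^{-\omega t}) and
% integrating term by term produces a factorial series in u.
```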
The main result of the paper demonstrates that the WKB solutions are Borel summable provided that the independent variable is bounded away from Stokes curves emerging from transition points of f 0 (z). In particular, it does not give any information regarding connection formulae joining the WKB solutions at either side of the Stokes curves. In what follows, we discuss briefly a possible extension of the theory to turning point problems. Turning points are the simplest types of transition points. Consider the differential equation (1.1), and suppose that the functions f n (z) are analytic in a domain D and f 0 (z) vanishes at exactly one point z 0 , say, of D. For the sake of simplicity, we assume that z 0 is a simple zero of f 0 (z), i.e., z 0 is a simple turning point of (1.1). Following Olver [30,Ch. 11, §11], we make the transformations . The functions Φ(ζ) and Ψ(ζ) are holomorphic in the corresponding ζ domain H, say. The transformation (7.1) maps the simple turning point z 0 into the origin in H, and the three Stokes curves issuing from z 0 are mapped into the rays arg ζ = 0, ± 2π 3 i. The works of Boyd [6] and Olver [30,Ch. 11, §11] on turning point problems suggest that, under suitable assumptions on Φ(ζ) and Ψ(ζ), the set of all solutions of the differential equation (7.2) is of the form where Ai denotes any solution of Airy's equation. We expect that, for sufficiently large values of |u|, the coefficient functions A (u, ζ) and B(u, ζ) are holomorphic functions of ζ in an appropriate subdomain of H including ζ = 0 and the Stokes rays arg ζ = 0, ± 2π 3 i, and as functions of u are described asymptotically by descending power series in u. Consequently, the continuation formulae for solutions of the form (7.3) follow directly from those for the Airy functions. Then, with the aid of the transformations ξ = 2 3 ζ 3/2 , W (u, ξ) = ζ 1/4 W(u, ζ), connection formulae can be established for formal solutions of the form (1.4). It is also possible to use the foregoing analysis to extend the Borel summability results of the WKB solutions (1.6) to the vicinity of Stokes rays emanating from ξ = 0. First, we identify the solutions of the form (7.3) which correspond to W ± (u, ξ). Next, by employing Theorem 1.1 and techniques similar to those in [15], it is shown that the asymptotic expansions of the coefficient functions A (u, ζ) and B(u, ζ) are Borel summable with respect to the large parameter u, and that the corresponding Borel transforms are analytic functions of ζ in an unbounded domain containing the Stokes rays. The process is completed by an appeal to a theorem on the composition of Borel summable series [26,Ch. 5,Theorem 5.55] and the well-known Borel summability properties of WKB solutions of the Airy equation. In Section 3, we showed that the Borel transforms f ± (t, ξ) can be continued analytically, in the complex variable t, to a strip containing the positive real axis. The method, however, does not provide us with any information regarding the nature and location of the singularities of f ± (t, ξ) in the t-plane. Explicit knowledge of singularities is an important prerequisite to any investigation in resurgent analysis and exponential asymptotics. By looking at the functional equation (3.6), for example, it is reasonable to expect that the singular points of f − (t, ξ) are located at t = −2ξ + 2ξ k , where the ξ k 's, k ∈ Z ≥0 , are the singularities of the functions φ(ξ) and ψ(ξ). 
Hence, there seems to be a direct link between the singular points of f − (t, ξ) in the t-plane and those of φ(ξ) and ψ(ξ) in the ξ-plane. This conjecture may be verified for some specific examples where the Borel transforms are explicitly known (e.g., the Airy or the Legendre equation), but the general case requires further investigation. Finally, it would be of great interest to extend the results of the paper to differential equations of the type (1.1) in which the potential function f (u, z) admits a (Borel summable) uniform asymptotic expansion of the form f (u, z) ∼ The coefficients E ± n (ξ) are found by substitution of (A.1) into (1.3) and equating like powers of u. In this way, we find that for n ≥ 1, with the understanding that empty sums are zero. The constants of integration in (A.1) and (A.2) are arbitrary. Comparing (A.1) with (1.4), it is seen that the following pair of formal relations hold between the coefficients A ± n (ξ) and E ± n (ξ): Application of Ex. 8.3 of Olver [30, p. 22] then leads to the recurrence relations for n ≥ 1. Once the constants of integration in (A.2) are fixed, the A ± n (ξ)'s are uniquely determined by (A.3). For example, to obtain the coefficients generated by the recursive formulae (1.9), the lower limits of integration in (A.2) are taken as ∓∞ + i ξ.
8,715
2020-04-28T00:00:00.000
[ "Mathematics" ]
D-Limonene Affects the Feeding Behavior and the Acquisition and Transmission of Tomato Yellow Leaf Curl Virus by Bemisia tabaci Bemisia tabaci (Gennadius) is an important invasive pest transmitting plant viruses that are maintained through a plant–insect–plant cycle. Tomato yellow leaf curl virus (TYLCV) can be transmitted in a persistent manner by B. tabaci, which causes great losses to global agricultural production. From an environmentally friendly, sustainable, and efficient point of view, in this study, we explored the function of d-limonene in reducing the acquisition and transmission of TYLCV by B. tabaci as a repellent volatile. D-limonene increased the duration of non-feeding waves and reduced the duration of phloem feeding in non-viruliferous and viruliferous whiteflies by the Electrical Penetration Graph technique (EPG). Additionally, after treatment with d-limonene, the acquisition and transmission rate of TYLCV was reduced. Furthermore, BtabOBP3 was determined as the molecular target for recognizing d-limonene by real-time quantitative PCR (RT-qPCR), fluorescence competitive binding assays, and molecular docking. These results confirmed that d-limonene is an important functional volatile which showed a potential contribution against viral infections with potential implications for developing effective TYLCV control strategies. Introduction Bemisia tabaci (Gennadius) (Hemiptera: Aleyrodidae) is one of the most important invasive pests, causing huge economic losses worldwide [1,2].B. tabaci has a wide range of hosts with more than 500 plant species, such as peppers, tomatoes, tobacco, grains, and beans [3].The two cryptic species of B. tabaci, MEAM1 (B) and MED (Q), are highly invasive and have replaced other native cryptic species in many areas of the world, causing serious harm [4,5].B. tabaci damages crops by removing phloem sap, secreting honeydew on leaves, and transmitting plant viruses [3,6].For example, whiteflies can circularly transmit the tomato yellow leaf curl virus (TYLCV) in Begomoviruses [7].TYLCV is a single-stranded circular DNA virus.It can infect a variety of plants, such as peppers (Capsicum species), gourds (Cucumis species), beans (Phaseolus vulgaris), eustoma (Eustoma grandiflora), and tomatoes (Lycopersicon esculentum Mill.) [8][9][10][11][12].Typical symptoms of infected tomato plants are leaf curling, wilting, intervein yellowing, stunting, and degeneration, resulting in reduced production and serious economic losses [13][14][15][16].TYLCV in tomatoes is difficult and expensive to manage, whether in open-field production or structural cultivation [17].There are many methods to control TYLCV, such as controlling the population of whiteflies, killing intermediate hosts (weeds), and changing the planting season.However, a single method tends to increase the control difficulty and is not frequently effective [8].Therefore, reducing the population and migration of the vector whiteflies and preventing their transmission have become important difficulties in control.Although various methods are used, it is still difficult to control this key pest, and its management relies primarily on using insecticides [6].However, the indiscriminate use of insecticides causes resistance development.It potentially harms non-target insects and the environment [18].From environmentally friendly, sustainable, and efficient points of view, more methods have been used to control B. tabaci, such as plant volatiles, natural insecticides, and biological control [19][20][21]. 
Several studies have indicated that volatiles are the key factor influencing host preference in B. tabaci [22][23][24].By identifying the functional volatiles, the preference of B. tabaci for host plants can be changed to further control their numbers on host plants.Therefore, an increasing number of functions of volatiles have been revealed, choosing suitable volatiles and then using "pushing" (repellent) and "pulling" (attractant) to provide new strategies for B. tabaci population control [24].Insects rely on a sensitive and complex olfactory system to detect chemicals to complete their physiological behaviors [25][26][27].Also, odorant-binding proteins (OBPs) are important components of the olfactory system and the first step in the signal chain of odorant perception [28].Determining the odorant-binding proteins that bind to plant volatiles could help increase the control effect on pests.For example, ApisOBP3 and ApisOBP7 played key roles in aphid response to the alarm pheromone (E)-β-farnesene, a target gene that attenuates signal perception in aphids [29].(E)-β-farnesene could be detected by Eupeodes corollae via EcorOBP15, an important basis for E. corollae to prey on aphids [30].Moreover, AlucOBP5 combined with cis-3-hexenal and phenylacetaldehyde to help Apolygus lucorum locate hosts and find nectar.These target genes provide new strategies for A. lucorum control [25].Currently, there is no in-depth information on the target receptor of B. tabaci, especially virus-infected B. tabaci, for repellent volatiles, which could help manage it effectively.At first, we found that the number of B. tabaci on three plants, Apium graveolens (A.graveolens), Agastache rugosa (A.rugosa), and Coriandrum sativum (C.sativum), in the laboratory was very low, so we chose them as test plants to verify their repellent effect on B. tabaci. In our study, A. graveolens, A. rugosa, and C. sativum were found to repel healthy and TYLCV-infected B. tabaci.Moreover, d-limonene was identified as the most crucial functional volatile compound among their volatile components.Based on these findings, we hypothesized that d-limonene performs more important biological functions besides its evasive effect on B. tabaci.The following experiments were conducted to validate this hypothesis: (1) The preference for d-limonene among non-viruliferous and viruliferous whiteflies was determined.(2) The effects of d-limonene on feeding behavior and TYLCV acquisition and transmission of whitefly were investigated.(3) The target gene of OBPs in nonviruliferous and viruliferous whiteflies that recognizes d-limonene was identified, and the BtabOBP3 was cloned and analyzed to determine their binding capabilities.(4) Molecular docking techniques were employed to predict the key binding sites of dlimonene with BtabOBP3.in a greenhouse.TYLCV-infected tomato was characterized by leaf curl and shrinking (Figure S1), and tomatoes were determined by reverse transcription (RT)-PCR using genespecific primers (Table S1).Healthy adults had fed on TYLCV-infected tomato plants for 48 h to obtain viruliferous whiteflies [31,32].The temperature of the rearing chamber was kept at 26 ± 2 • C. The relative humidity was controlled at 55 ± 5%, and the photoperiod was set to 14 L/10 D. The Preference of B. tabaci MEDs Assessed by Y-Tube Olfactometer Behavioral Experiments The preference of B. 
tabaci MED adults (non-viruliferous and viruliferous) for three plants at a specific time was evaluated with a Y-tube olfactometer. One branch of the Y-tube contained the plant, and the other was supplied with air as a control. The equipment was ventilated for 30 min to stabilize the air flow at 0.6 L/min. The lighting conditions of the Y-tube were the same during all tests. The central tube was 40 cm long, the two branch arms were 30 cm, and the angle between the two arms was 60°. Each B. tabaci MED was put into the main arm after 2 h of starvation. The observation time for each B. tabaci MED was 3 min, and a choice was recorded when the tested insect moved more than 1/3 of the way into one of the two arms; otherwise, it was regarded as no choice [33]. Each adult was tested only once. A total of 10 whiteflies were used, 10 times in each group, and six groups were repeated. The Y-tube was washed with 95% ethanol after each test. After each test, the positions of the treatment and control were swapped to eliminate any bias from the two side arms of the Y-tube. The numbers of B. tabaci in the two side arms were counted, and the repellent rate was calculated by the following formula [34]:

Repellent rate (%) = (B. tabaci in control arm − B. tabaci in treatment arm) / (B. tabaci in control arm + B. tabaci in treatment arm) × 100

Regarding d-limonene, the preference of B. tabaci MED adults (non-viruliferous and viruliferous) was also assessed by Y-tube olfactometer experiments. One branch of the Y-tube contained d-limonene (at concentrations ranging from 10−1 to 10−6 g/mL, diluted with n-hexane). A 2 mL aliquot of the prepared compound was transferred to filter paper and allowed to volatilize in the test tube. N-hexane was used as the control when evaluating the selectivity of B. tabaci to d-limonene. D-limonene and n-hexane were purchased from Macklin Reagent Co., Ltd. (Shanghai, China). The remaining experimental methods followed the procedures outlined in Section 2.1.2.

Extraction and Identification of Volatile Compounds in A. graveolens, A. rugosa, and C. sativum

Gas chromatography–mass spectrometry (GC-MS) was used for the identification of volatiles. Plants grown in the greenhouse for 8 weeks were selected for the extraction and identification of volatile compounds. Leaves from different parts (top, middle, and bottom) of each plant were collected, totaling 500 mg, and ground to a fine powder using liquid nitrogen. The samples were transferred to 2 mL centrifuge tubes and mixed with 1 mL of n-hexane. After vortexing for 30 s, they were transferred to brown sample vials using disposable sterile needles and a 0.22 µm bacterial filter and placed on ice as samples. Each experiment was conducted with three biological replicates [35].
For the GC, the starting injector temperature was 50 °C for 1 min, ramped at 5 °C/min to 240 °C and held for 2 min, and then ramped at 30 °C/min to 300 °C and held for 5 min. The MS electron impact ionization energy was set to 71 eV. The ion source and MS quadrupole were set to 230 °C and 150 °C, respectively, and the mass scanning range was set to 50-650 m/z at 0.5 scans/s. Data were correlated with the mass spectra of these compounds, and the database was searched for similar compounds with the same retention time and molecular mass. The column used for GC-MS was an HP-5MS quartz capillary column (30 m × 0.25 mm × 0.25 µm). All peaks were identified from their mass spectra by comparison with those present in the NIST library.

D-limonene was mixed with n-hexane to prepare a working solution of 10−2 g/mL, which was uniformly sprayed on the surface of tomato plants using a 10 mL sprayer. Tomatoes grown in the greenhouse for 4 weeks were used to measure feeding behavior. These tomatoes were used to record the feeding behavior of non-viruliferous and viruliferous whiteflies. The sexes were distinguished, and females were selected under a stereoscopic microscope (Figure S2). Newly emerged females from healthy tomatoes (3-5 days old, defined as non-viruliferous whiteflies) were fed on TYLCV-infected tomato plants, and viruliferous whiteflies were obtained 48 h later. Before the experiment, whiteflies were starved for 2 h [37].

Conductive silver glue was used to secure one whitefly onto a 12 µm gold wire electrode. The electrode was then connected to the EPG system, and the data on feeding behavior were acquired and analyzed using Stylet+ for Windows software (EPG Stylet+d, Wageningen, The Netherlands) [38]. All materials were placed inside a Faraday cage, and the test was carried out at the temperature of the rearing chamber from 10:00 to 18:00. EPG waveforms were categorized into four groups: np (non-probing behavior), C (intercellular stylet pathway), E1 (phloem salivation), and E2 (phloem ingestion). Each group was set up with 10 biological replicates [39].

Influence of D-Limonene on the Acquisition and Transmission of TYLCV by B. tabaci

As described in Section 2.2.1, female adults were fed on d-limonene-treated tomato plants (treatment group) or n-hexane-treated tomato plants (control group) for 12 h. Then, treated female adults were transferred to clip cages (50 whiteflies/cage) and starved for 2 h. Clip cages containing starved B. tabaci were placed on TYLCV-infected tomato plants (3 cages/plant) for virus acquisition. After 6, 12, 24, 48, or 72 h of feeding, 100 whiteflies from each treatment were collected, and RNA was extracted from each whitefly. TYLCV was detected by RT-PCR and agarose gel electrophoresis, and the detection results were used to calculate the acquisition rate.

After 48 h of TYLCV acquisition, B. tabaci were placed in clip cages (1, 5, 10, 25, or 50 B. tabaci/cage) and fixed on healthy tomato plants (20 tomato plants/group). After 48 h of feeding, the clip cages were removed, and the tomato plants were further cultivated without B. tabaci. After 14 days, TYLCV was detected in the tomato plants by RT-PCR (Table S1) and agarose gel electrophoresis. Based on the TYLCV infectivity in the different groups, the transmission rate was calculated. Four independent replications were performed for each experiment [32].
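The repellent rate defined in the Y-tube subsection above, and the acquisition and transmission rates described here, are simple count-based quantities. The sketch below illustrates these calculations in Python; it assumes that the acquisition and transmission rates are plain proportions of PCR-positive samples, and all counts shown are hypothetical examples, not data from this study.

```python
# Minimal sketch of the count-based rates used in this study.
# Assumptions: acquisition and transmission rates are simple proportions of
# PCR-positive samples; all numbers below are hypothetical examples.

def repellent_rate(control_arm: int, treatment_arm: int) -> float:
    """Y-tube repellent rate (%) = (control - treatment) / (control + treatment) * 100."""
    return (control_arm - treatment_arm) / (control_arm + treatment_arm) * 100.0

def proportion(positive: int, total: int) -> float:
    """Proportion (%) of PCR-positive samples, e.g. viruliferous whiteflies or infected plants."""
    return positive / total * 100.0

if __name__ == "__main__":
    # Hypothetical Y-tube counts: 32 whiteflies chose the control arm, 14 the treatment arm.
    print(f"repellent rate: {repellent_rate(32, 14):.1f}%")      # ~39.1%
    # Hypothetical acquisition assay: 61 of 100 whiteflies TYLCV-positive after 48 h.
    print(f"acquisition rate: {proportion(61, 100):.1f}%")
    # Hypothetical transmission assay: 9 of 20 exposed tomato plants infected after 14 days.
    print(f"transmission rate: {proportion(9, 20):.1f}%")
```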
Binding Analysis of OBP Associated with D-Limonene Recognition in B. tabaci

2.3.1. The Relative Gene Expression of BtabOBPs

The relative gene expression of BtabOBPs in non-viruliferous and viruliferous whiteflies after d-limonene treatment for 0, 6, and 12 h was determined by RT-qPCR. cDNA was quantified to 200 ng and combined with specific primers designed with Primer 5 (Table S1). qPCR was performed on a qTOWER3G fluorescence quantitative PCR instrument. Fifty adult insects were used in each experiment, with three technical and three biological replicates. The relative expression of BtabOBPs was calculated using the comparative cycle threshold (2^(-ΔΔCt)) method [40].

Competitive Fluorescence Binding Assay

The PCR product of BtabOBP3 was cloned into a pEASY-T3 vector. After excision from the vector, the target fragment was cloned into the expression vector pET-28b (+) for expression in the BL21 (DE3) strain. BtabOBP3 was induced with 1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG) for 4 h at 37 °C. BtabOBP3 was expressed in inclusion bodies. Inclusion bodies were denatured with 8 M urea. Recombinant BtabOBP3 was solubilized and refolded according to reported methods [41]. Protein purification was performed with a His-tag purification resin column (LMAI Bio) and elution with 300 mM imidazole buffer. The purity and size of the purified recombinant protein were checked by SDS-PAGE. Protein concentrations were measured with a bicinchoninic acid (BCA) Protein Assay Kit [42].

N-phenyl-1-naphthylamine (1-NPN) was used as the fluorescent probe. 1-NPN and d-limonene were diluted with methanol to 1 mM. BtabOBP3 (2 µM, 2 mL) was added to a 10 mm fluorescence cuvette. The scanning results showed that the optimal emission and excitation wavelengths of BtabOBP3 are 337 nm (Figure S5A) and 278 nm (Figure S5B). At an excitation wavelength of 277 nm, the fluorescence spectrum of BtabOBP3 binding to 1-NPN was recorded at 300-500 nm. 1-NPN (1 mM) was added to BtabOBP3 to final concentrations ranging from 2 to 16 µM. The binding constant of 1-NPN to BtabOBP3 (K1-NPN) was calculated using GraphPad Prism 5.00 for Windows. 1-NPN was mixed in equal amounts with 2 µM BtabOBP3, and 1 mM d-limonene was then successively added to the mixture. The excitation wavelength was set to 220 nm. The emission spectrum was recorded at 250-550 nm. The dissociation constant of d-limonene was calculated with the equation Ki = IC50/(1 + [1-NPN]/K1-NPN), where [1-NPN] is the free concentration of 1-NPN, and K1-NPN is the dissociation constant of the protein/1-NPN complex [43].

Homology Modeling of BtabOBP3 and Molecular Docking with D-Limonene

The amino acid sequence of the protein was obtained from the NCBI. The 3D model of BtabOBP3 was constructed with AlphaFold, and molecular docking was performed using AutoDock 4.2 software [44]. The small-molecule structures were downloaded from PubChem (https://pubchem.ncbi.nlm.nih.gov, accessed on 15 October 2022), and Chem3D was used for structure optimization. The entire protein was wrapped in a docking box, and 200 conformations were searched. The lowest-energy conformation was selected and visualized with PyMOL.

Data Analysis

All statistical analyses were performed with SPSS 22.0 (SPSS Inc., Chicago, IL, USA). Independent-samples t-tests (* p < 0.05, ** p < 0.01, *** p < 0.001, ns p > 0.05) and ANOVAs followed by Tukey tests (p < 0.05) were used to analyze the preference of non-viruliferous and viruliferous whiteflies for plants or d-limonene in the Y-tube olfactometer experiments. The EPG data exported after software processing were processed using the EPG Excel Data Workbook developed by Sarria et al.
[45]. Direct comparisons of feeding behaviors between non-viruliferous and viruliferous whiteflies after d-limonene treatment were made with the non-parametric Mann-Whitney U-test [46]. p < 0.05 was regarded as indicating statistically significant differences. Independent-samples t-tests were used to compare the effects of d-limonene treatment on the acquisition and transmission of TYLCV by non-viruliferous and viruliferous whiteflies, and ANOVAs followed by Tukey tests (p < 0.05) were used to analyze the relative expression of BtabOBPs.

To validate the feasibility of the Y-tube olfactometer behavioral experiments, we assessed the preference of whiteflies for host plants and clean air. The results revealed that viruliferous whiteflies exhibited a preference for healthy tomato over clean air (Figure S3). Subsequently, the repellent effect of A. graveolens, A. rugosa, and C. sativum on healthy and TYLCV-infected whiteflies was measured by Y-tube olfactometer behavioral experiments. The results showed that all three plants had a strong repellent effect on non-viruliferous and viruliferous whiteflies (Figure 1). In both types of whitefly, the repellent rate of A. rugosa was the highest, at 43.42% ± 3.34% in healthy whiteflies (Figure 1A) and 57.20% ± 6.27% in TYLCV-infected whiteflies (Figure 1B). A. graveolens had a relatively weak repellent effect compared with A. rugosa, and C. sativum was the lowest of the three. Regarding the viruliferous whiteflies, the repellent rates of A. graveolens and C. sativum were 42.32% and 22.97%, respectively, higher than those of non-viruliferous whiteflies (33.40% and 22.70%, respectively). In addition, 30.45% of non-viruliferous whiteflies preferred A. rugosa, 32.15% preferred A. graveolens, and 39.59% preferred C. sativum (Figure 1C). The preference trend was consistent with that of the viruliferous whiteflies, with preference rates of 21.40%, 28.84%, and 38.52%, respectively (Figure 1D).

Key Volatile

10−5 g/mL, the repellent rates remained 23.48% to 42.48% (Figure 2A). When the concentration dropped to 10−6 g/mL, its repellent effect on B. tabaci turned to attraction (Figure 2A,C). The trend of preference in TYLCV-infected B.
tabaci adults for d-limonene was the same as that of uninfected B. tabaci adults. The repellent rates remained in the range of 52.00% to 12.27% (Figure 2B). When the concentration decreased to 10−6 g/mL, whiteflies preferred d-limonene (Figure 2B,D).

Effects of D-Limonene on Stylet Activities, Acquisition, and Transmission of TYLCV by B. tabaci

3.2.1. Feeding Behavior of B. tabaci after D-Limonene Treatment

For non-viruliferous whiteflies, the following indicators were significantly smaller in the treatment group than in the control group: the total duration of probes (F1,10 = 7.166, p = 0.015) and the total durations of waveforms E1 (F1,10 = 19.920, p < 0.001) and E2 (F1,10 = 19.997, p < 0.001) (Figure 3B,E,F). In addition, the duration of waveform np was increased after treatment (Figure 3C). There was no significant difference in the number of probes (F1,10 = 0.405, p = 0.533) or the duration of waveform C (F1,10 = 0.101, p = 0.754) (Figure 3A,D).

Binding Analysis of BtabOBPs Associated with D-Limonene Recognition in B. tabaci

3.3.1. The Relative Gene Expression of BtabOBPs

From 0 h to 12 h, the relative expression of BtabOBP3 in non-viruliferous and viruliferous whiteflies gradually increased with the continuous addition of d-limonene. However, the other seven BtabOBPs did not show a regular trend (Figure 5).
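As context for these relative expression values, the sketch below illustrates the comparative cycle threshold (2^(-ΔΔCt)) calculation named in the methods; the Ct values and the reference gene are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the comparative-Ct (2^-ΔΔCt) relative expression calculation.
# All Ct values and the reference gene are hypothetical placeholders.

def delta_delta_ct(ct_target_treated: float, ct_ref_treated: float,
                   ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression of the target gene, treated vs. control,
    normalized to a reference gene (2^-ΔΔCt)."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # ΔCt, treated sample
    d_ct_control = ct_target_control - ct_ref_control   # ΔCt, control sample
    dd_ct = d_ct_treated - d_ct_control                 # ΔΔCt
    return 2.0 ** (-dd_ct)

if __name__ == "__main__":
    # Hypothetical Ct values for a target OBP and a reference gene,
    # 12 h of d-limonene exposure versus 0 h.
    fold_change = delta_delta_ct(ct_target_treated=22.1, ct_ref_treated=18.0,
                                 ct_target_control=24.0, ct_ref_control=18.2)
    print(f"relative expression (fold change): {fold_change:.2f}")   # ~3.25
```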
Competitive Fluorescence Binding Assay of BtabOBP3

The western blot showed that the size of the recombinant protein was 28.57 kDa (Figure S4). The single, bright band of BtabOBP3 indicated that the recombinant protein was well purified and could be used for subsequent experiments. As the concentration of 1-NPN increased, the fluorescence intensity of the protein gradually decreased, and that of the complex gradually increased (Figure S5C). The linearized spectral data showed a good fit (Figure 6A). The dissociation constant of BtabOBP3 and 1-NPN was 5.03 µmol/L. The number of binding sites was 0.994, which indicated that 1-NPN and BtabOBP3 bound at a 1:1 ratio. As the concentration of odorant molecules increased, the fluorescence intensity of the protein-probe complex gradually weakened (Figure S5D). When the relative fluorescence value of the complex decreased to half of the initial value, we calculated IC50 = 44.2 ± 1.33 µM and Ki = 7.33 ± 0.22 µM (Figure 6B). These results showed that d-limonene has a strong binding ability to BtabOBP3.
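To make the competition calculation above concrete, here is a minimal sketch of the Ki relation given in the methods, Ki = IC50/(1 + [1-NPN]/K1-NPN); the numbers used are illustrative placeholders rather than the measured values reported in this study.

```python
# Minimal sketch of the competitive-binding Ki calculation used in OBP ligand assays.
# Ki = IC50 / (1 + [1-NPN] / K_1NPN); all numbers below are illustrative placeholders.

def competition_ki(ic50_um: float, free_1npn_um: float, k_1npn_um: float) -> float:
    """Dissociation constant of the competitor, from the IC50 of the fluorescence
    displacement curve, the free 1-NPN concentration, and the protein/1-NPN
    dissociation constant."""
    return ic50_um / (1.0 + free_1npn_um / k_1npn_um)

if __name__ == "__main__":
    # Hypothetical inputs: IC50 = 40 uM, free 1-NPN = 10 uM, K_1NPN = 5 uM.
    print(f"Ki = {competition_ki(40.0, 10.0, 5.0):.2f} uM")   # ~13.33 uM
```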
Homology Modeling of BtabOBP3 and Molecular Docking with D-Limonene

The interaction between d-limonene and BtabOBP3 is depicted in the 3D model (Figure 7A) and the 2D model (Figure 7C). Hydrophobic interactions exist between Arg37, Leu80, Ile23, Cys75, Val73 and the small molecule. The hydrophobic interaction between Arg37 and the small molecule was 3.4 Å, and those of Val73 were 3.5 Å and 4.2 Å. The hydrophobic interactions between Leu80, Ile213, Cys75 and the small molecule were 3.5 Å, 3.5 Å, and 3.8 Å (Figure 7B).

Discussion

Virus particles can be transmitted between host plants by vectoring insects. Moreover, TYLCV has been found to be efficiently transmitted through whitefly eggs [7]. With the global invasion of whiteflies, TYLCV threatens the growth of many economically important crops. Therefore, controlling the population of whiteflies on host plants and reducing their ability to transmit TYLCV can safeguard the growth and development of plants. In this study, the repellent effect of three plants on healthy and TYLCV-infected whiteflies was measured by Y-tube olfactometer behavioral experiments, and all three had a strong repellent effect (Figure 1). Our results provide a new barrier plant (A. rugosa) for the "push-pull" strategy and a possible new target plant for controlling B. tabaci. Subsequently, we identified and analyzed the volatiles of the three repellent plants and found that d-limonene was the volatile common to all three (Tables 1-3), and it showed a repellent effect on both healthy and TYLCV-infected whiteflies (Figure 2).

Based on the volatile components of the plants, apart from d-limonene, other terpene compounds are present in A. graveolens, A. rugosa, and C. sativum, which may also play a role in the selection of plants by whiteflies. However, we chose d-limonene as the main focus of our study because it exhibited strong repellent effects against whiteflies (Figure 2), and its relative abundance is high in all three plants (Tables 1-3). Tu and Qin (2017) found that β-myrcene and (E)-ocimene also had a repellent effect, but the repellent rates were lower than that of d-limonene [47]. Our findings confirmed the original hypothesis that d-limonene plays an important role in the avoidance properties of these plants. Furthermore, d-limonene has high volatility and toxicity to insects but less so to humans, so it is used as a green pesticide [48]. For example, d-limonene acted as an insect repellant to control the population of B.
tabaci and reduced their feeding on host plants [49]. Also, the main components of Zanthoxylum bungeanum essential oil were d-limonene and linalool, which had a good toxic effect on insects [50]. In summary, d-limonene is a functional volatile that can play an effective role in controlling whiteflies. Furthermore, spraying d-limonene on plants can be considered in future applications, and the preference of whiteflies for tomatoes treated with d-limonene (treatment) or n-hexane (control) was determined to investigate whether the repellent effect persists under natural crop conditions. Our results indicated that d-limonene sprayed on the surface of tomatoes still has a repellent effect (Figure S6). It is worth noting that the repellent effect of d-limonene was concentration-dependent, and appropriate spraying concentrations should be chosen in future field applications, considering factors such as planting area and host plant growth, to achieve optimal control efficacy.

Volatiles affect the physiological processes of insects and are also the main basis for selecting target plants for barrier crop protection. Previous studies have found that intercropping cucumber with celery (A. graveolens) could reduce the number and oviposition of B. tabaci on cucumbers [47]. Results from our study showed that spraying d-limonene on tomato leaves affected the feeding behavior of B. tabaci. After d-limonene treatment, the duration of waveform np (non-feeding) was longer, waveforms E1 and E2 were shortened, and the total probing time was decreased for both infected and healthy females (Figure 3). According to the characteristics and mechanism of TYLCV transmission by whiteflies, they ingest virions while feeding on phloem sap, leading to phytotoxic disorders [51]. In whiteflies, TYLCV particles travel up the stylet into the gut, are further transported to the hemolymph and then to the salivary glands, and finally pass back into new plants during the next feeding [52]. Therefore, phloem feeding is the key step for whiteflies to acquire and spread TYLCV, and reducing the occurrence of this behavior can significantly reduce the acquisition and transmission efficiency of the virus. In addition, female whiteflies are more capable of transmitting the virus than males [53]. Our results on feeding behavior, combined with those on the acquisition and transmission of TYLCV, confirmed that the decreased duration of waveforms E1 and E2 significantly reduces the transmission efficiency of TYLCV. The results suggest that d-limonene can affect acquisition and transmission by females by changing their feeding, which means that d-limonene might play an anti-feeding role in insect control. It is noteworthy that Jiang et al. (2000) reported a strong correlation between the durations of waveforms E1 and E2 and TYLCV inoculation, with E1 playing a pivotal role. Furthermore, the total number of pds was also associated with virus inoculation. However, executing E2 waveforms was not crucial for virus acquisition, as the results indicated a negative correlation with virus inoculation. This phenomenon seemed difficult to explain and suggested that the whiteflies might not acquire sufficient virus particles despite multiple probing events. Therefore, we focused on analyzing only six parameters related to non-probing and phloem-feeding waveforms for the feeding analysis [54].
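The EPG comparisons discussed above were made with a non-parametric Mann-Whitney U-test, as stated in the data analysis section. The sketch below shows how such a comparison could be run in Python with SciPy; the duration values are hypothetical placeholders, not measurements from this study.

```python
# Minimal sketch of a Mann-Whitney U comparison of EPG waveform durations
# (treatment vs. control); all duration values (minutes) are hypothetical.
from scipy.stats import mannwhitneyu

# Hypothetical total E2 (phloem ingestion) durations for 10 replicates per group.
e2_control   = [112.4, 98.7, 130.2, 105.9, 121.3, 88.6, 140.1, 117.8, 109.4, 125.0]
e2_treatment = [45.2, 60.8, 38.9, 72.1, 55.4, 49.7, 63.3, 41.0, 58.6, 50.2]

stat, p_value = mannwhitneyu(e2_treatment, e2_control, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference in phloem ingestion duration between groups.")
```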
Next, the molecular mechanism of d-limonene recognition by the whitefly was explored. OBPs are important components of the olfactory system and play a key role in signal recognition in B. tabaci, and their relationship with volatiles has also been revealed [25,29]. For example, BtabOBP1 played a key role in the recognition of R-curcumene, causing differences in B. tabaci preference for wild and cultivated tomatoes [55]. BtabOBP1 and BtabOBP4 could bind β-ionone and affect the localization of B. tabaci to egg-laying sites [56]. Our qPCR results showed that the relative expression of BtabOBP3 in non-viruliferous and viruliferous whiteflies gradually increased with the continuous addition of d-limonene (Figure 5). Combined with the results of the fluorescence competition assay (Figure 6, Figures S4 and S5), this indicates that BtabOBP3 plays a role in the olfactory response of B. tabaci MED to d-limonene. In addition, the binding sites of BtabOBP3 and d-limonene were predicted by molecular docking (Figure 7). BtabOBP3 has been found to be highly expressed in the head and to bind odorant molecules such as β-ionone, trans-cinnamaldehyde, trans-2-hexenal, linalool, naphthalene, cedrol, 1,8-cineole, and β-citronellol [56]. Our results confirmed that the recognition of d-limonene by B. tabaci is also related to BtabOBP3, which plays a key role in the odor recognition process of B. tabaci. In addition, the transmission rate of ToCV was reduced by 83.3% after feeding dsBtabOBP3, so BtabOBP3 is likely to become an essential target for controlling B. tabaci [57]. D-limonene has high volatility and toxicity to insects but less so to Homo sapiens (humans), so it is used as a green pesticide [48]. For example, d-limonene acted as an insect repellant to control the population of B. tabaci and reduced their number on host plants [49]. The main components of Zanthoxylum bungeanum essential oil were d-limonene and linalool, which had an excellent toxic effect on Tribolium castaneum [50].

Conclusions

Our results confirmed that d-limonene is an important functional volatile, showing a potential contribution against viral infections, with implications for developing effective TYLCV control strategies.

Compounds in A. graveolens, A. rugosa, and C. sativum and Their Impact on the Preference of B. tabaci

3.1.1. Y-Tube Olfactometer Behavioral Experiments of B. tabaci and Plants

Figure 1. Repellent rate of non-viruliferous B. tabaci MED to three plants (A) and viruliferous B. tabaci MED to three plants (B); and preference rate of non-viruliferous B. tabaci MED for three plants (C) and viruliferous B. tabaci MED for three plants (D). Each value is the mean ± SEM of six replicates. Different numbers of asterisks (*) and letters above each bar indicate significant differences (p < 0.05) among the treatments.

Figure 2. Repellent rate of non-viruliferous B. tabaci MED (A) and viruliferous B. tabaci MED (B) to different concentrations of d-limonene and preference rate of non-viruliferous B. tabaci MED (C) and viruliferous B.
tabaci MED (D) for different concentrations of d-limonene. Different colors indicate different concentrations of d-limonene (specific concentrations are shown on the axis). Each value is the mean ± SEM of six replicates. Different numbers of asterisks (*) and letters above each bar indicate significant differences (p < 0.05) among the treatments.

Figure 3. Feeding behavior of non-viruliferous and viruliferous whiteflies after d-limonene treatment. (A) Total number of probes. (B) Total duration of probes. (C) Total duration of np summed. (D) Total duration of C summed. (E) Total duration of E(pd)1 summed. (F) Total duration of E(pd)2 summed. Each value is the mean ± SEM of ten replicates. Different numbers of asterisks (*) above each bar indicate significant differences (p < 0.05) among the treatments; no significant difference is denoted as ns.

Figure 4. The effects of d-limonene treatment on acquisition (A) and transmission (B) of TYLCV by non-viruliferous and viruliferous whiteflies. Circles or triangles represent different biological repetitions. Each value is the mean ± SEM of four replicates. Different numbers of asterisks (*) above each bar indicate significant differences (p < 0.05) among the treatments; no significant difference is denoted as ns.

Figure 6. Binding test of BtabOBP3 with d-limonene. (A) Linearized spectral data of the emission spectrum of 1-NPN and BtabOBP3. (B) The relative fluorescence value of d-limonene binding to BtabOBP3. Values are means ± SEM of three replicates.

Figure 7.
Homology modeling of BtabOBP3 and molecular docking with d-limonene. (A) Model diagram of the interaction between d-limonene and BtabOBP3; (B) specific binding sites of BtabOBP3 to d-limonene; (C) plan of the interaction between d-limonene and BtabOBP3.

Compounds in A. graveolens, A. rugosa, and C. sativum and Their Impact on the Preference of B. tabaci

2.1.1. B. tabaci MEDs and Plant Rearing

B. tabaci MED used in this experiment was provided by Dr. Youjun Zhang from the Chinese Academy of Agricultural Sciences. Healthy adults were reared on tomato plants (Solanum lycopersicum Mill. cv. Zuanhongmeina) for more than six generations to establish a population with no pesticide exposure. A. graveolens, A. rugosa, and C. sativum seeds were purchased from the market. The tomato seeds were purchased from the Institute of Vegetable Crops, Hunan Academy of Agricultural Sciences. All seedlings were planted in a greenhouse. TYLCV-infected tomato plants were characterized by leaf curl and shrinking (Figure S1), and infection was confirmed by reverse transcription (RT)-PCR using gene-specific primers (Table S1). Healthy adults were fed on TYLCV-infected tomato plants for 48 h to obtain viruliferous whiteflies [31,32]. The temperature of the rearing chamber was kept at 26 ± 2 °C, the relative humidity was controlled at 55 ± 5%, and the photoperiod was set to 14 L:10 D.

Table 1. Components and relative contents of volatiles in Apium graveolens.

Table 2. Components and relative contents of volatiles in Agastache rugosa.

Table 3. Components and relative contents of volatiles in Coriandrum sativum.
8,614.2
2024-02-01T00:00:00.000
[ "Environmental Science", "Agricultural and Food Sciences", "Biology" ]
Altered Brain Arginine Metabolism and Polyamine System in a P301S Tauopathy Mouse Model: A Time-Course Study

Altered arginine metabolism (including the polyamine system) has recently been implicated in the pathogenesis of tauopathies, characterised by hyperphosphorylated and aggregated microtubule-associated protein tau (MAPT) accumulation in the brain. The present study, for the first time, systematically determined the time-course of arginine metabolism changes in the MAPT P301S (PS19) mouse brain at 2, 4, 6, 8 and 12 months of age. The polyamines putrescine, spermidine and spermine are critically involved in microtubule assembly and stabilization. This study, therefore, further investigated how polyamine biosynthetic and catabolic enzymes changed in PS19 mice. There were general age-dependent increases of L-arginine, L-ornithine, putrescine and spermidine in the PS19 brain (particularly in the hippocampus and parahippocampal region). While this profile change clearly indicates a shift of arginine metabolism to favor polyamine production (a polyamine stress response), spermine levels were decreased or unchanged due to the upregulation of polyamine retro-conversion pathways. Our results further implicate altered arginine metabolism (particularly the polyamine system) in the pathogenesis of tauopathies. Given the role of the polyamines in microtubule assembly and stabilization, future research is required to understand the functional significance of the polyamine stress response and explore the preventive and/or therapeutic opportunities for tauopathies by targeting the polyamine system.

L-arginine is a metabolically versatile semi-essential amino acid. Its de novo synthesis involves the conversion of L-citrulline by argininosuccinate synthetase (ASS) and argininosuccinate lyase (ASL), the so-called L-citrulline recycling pathway (Figure 1) [14][15][16]. In the brain, L-arginine can be metabolized by nitric oxide synthase (NOS) to produce L-citrulline and nitric oxide (NO), by arginase to form L-ornithine and urea, and by arginine decarboxylase (ADC) to generate agmatine (Figure 1). L-ornithine can be further

Figure 1. L-arginine metabolic pathways and the polyamine system. L-arginine is metabolized by nitric oxide synthase (NOS), arginase and arginine decarboxylase (ADC) to form several bioactive molecules and can be de novo synthesized by argininosuccinate synthetase (ASS) and argininosuccinate lyase (ASL) from L-citrulline (see text for detailed description).
Overall, we found genotype- and age-related increases in L-arginine, L-ornithine, glutamine, putrescine and spermidine (indicated by short green arrows) and decreases in glutamate and spermine (indicated by short red arrows) levels in the brain of PS19 mice, indicating a shift of L-arginine metabolism towards the arginase and polyamine system. Moreover, PS19 mice displayed increased levels of mRNA and/or protein expression of ASS, ASL, arginase I and II, ornithine decarboxylase (ODC), spermidine synthase (SPDS), spermine oxidase (SMOX), spermidine/spermine-N1-acetyltransferase-1 (SSAT1) and polyamine oxidase (PAO), suggesting upregulated citrulline recycling and polyamine retro-conversion (indicated by long green arrows). AGMAT, agmatinase; SMS, spermine synthase. Adapted from [28].

Under physiological situations, the NOS pathway is the predominant L-arginine metabolic pathway, as NOS has an approximately 1000-fold greater affinity for L-arginine than arginase and the endogenous agmatine level in the brain is very low (only about 0.2 to 0.4 µg/g) [14]. In AD brains, interestingly, arginine metabolism is altered with a shift towards the arginase pathway along with various changes in polyamine levels and upregulated gene expression of the polyamine retro-conversion enzymes [29][30][31]. Sandusky-Beltran et al. [26] reported that rTg4510 mice expressing the MAPT P301L mutation displayed increased putrescine and acetylspermidine levels in the whole brain tissue at 8 months of age, and reduced ODC and SMS expression, but increased SPDS, SSAT1 and SMOX expression in the hippocampus at 12 months of age, indicating disrupted polyamine homeostasis in mice with a tau mutation. Using PS19 mice bearing the MAPT P301S mutation, we previously investigated how the arginine metabolic profile changed in the brain at 4, 8 and 12-14 months of age [28]. Intriguingly, there were significantly increased levels of L-ornithine (in all three age groups) and the polyamines putrescine and spermidine (particularly in older age groups), but a trend of reduced spermine levels in the hippocampus of PS19 mice. Again, these findings demonstrate a clear shift of arginine metabolism to favour the arginase-polyamine system in PS19 tau mice, perhaps aiming to produce more polyamines (spermine in particular) to combat tau pathology. However, the system failed to produce more spermine in the hippocampus of PS19 mice even at 12-14 months of age, when high levels of putrescine and spermidine were present. We postulate that such a failure might be due to impaired spermine biosynthesis and/or enhanced spermine retro-conversion to spermidine. It has been shown that the de novo synthesis pathway is the safest way to produce polyamines, as the indirect retro-conversion pathway can generate toxic by-products [17,32,33]. Given the role of polyamines in microtubule assembly and stabilization [22][23][24], the existing evidence implicates altered polyamine homeostasis in the pathogenesis of tauopathies. The present time-course study was designed to systematically investigate how brain arginine metabolism, particularly the polyamine system (including both the de novo synthesis and retro-conversion pathways), changed in PS19 mice during their progression from prodromal to severe disease-like stages, using mice at 2 (mild behavioral deficits), 4 (synaptic loss), 6 (tau aggregates in the brain), 8 (hippocampal neuronal loss) and 12 (widespread atrophy) months of age [7,12].
In order to analyze correlations between the brain L-arginine metabolite changes in the PS19 mice and their behavioral deficits, we used a cohort of animals with confirmed age-related accumulation of phosphorylated tau (p-tau) species and behavioral impairments [12].

Spermidine/Spermine Ratio

The spermidine/spermine ratios in five brain regions of WT and PS19 mice at five age points are presented in Figure 4D. For the frontal cortex, hippocampus, parahippocampal region and striatum, there were significant effects of genotype: PS19 mice displayed increased ratios in the frontal cortex at 12 months (46%), hippocampus at 8 and 12 months (63% and 66%, respectively), parahippocampal region at 8 and 12 months (43% and 92%, respectively) and striatum at 6 and 12 months (20% and 30%, respectively) relative to their age-matched WT controls. For the cerebellum, we found no significant effects of genotype, age or their interaction (all F ≤ 1).

mRNA Expression of the Enzymes Involved in Polyamine Synthesis and Retro-Conversion

As described above, PS19 mice (particularly at 8 and 12 months of age) displayed markedly increased levels of the polyamines putrescine and spermidine, accompanied, however, by unchanged or reduced levels of the highest-order polyamine spermine. There were also genotype-related changes in the polyamine precursors L-ornithine (the product of arginase) and agmatine (the product of ADC). To understand the mechanism(s) behind these changes, we employed the quantitative reverse transcription-polymerase chain reaction (RT-qPCR) technique to determine how mRNA expression of the enzymes involved in the de novo synthesis and retro-conversion of polyamines changed in the frontal cortex, hippocampus, parahippocampal region and cerebellum of PS19 mice at 8 and 12 months of age.

Protein Expression of the Enzymes Involved in L-Arginine Metabolism

As described above, the HPLC and LC/MS assays revealed marked genotype-related increases in the tissue concentrations of L-arginine (recycled from L-citrulline by ASS and ASL) and L-ornithine (the product of arginase). Moreover, the RT-qPCR indicated the upregulation of arginase and ODC in PS19 mice, particularly at older ages. Western blot was therefore employed to determine the time-course of changes in protein expression of the enzymes involved in the catabolism and/or biosynthesis of L-arginine and L-ornithine (arginase I, arginase II, ODC, ASS and ASL) in PS19 mice using the parahippocampal region, the brain area with the most observed changes and available tissue.

(Figure caption, continued) ... in the parahippocampal region of wildtype (WT) and PS19 mice at 2, 4, 6, 8 and 12 months of age (n = 7-9/genotype/age). (F): representative western blots of AI, AII, ODC, ASS and ASL, as well as the housekeeping protein glyceraldehyde 3-phosphate dehydrogenase (GAPDH), in wild-type (W) and PS19 (P) mice at five age points. && indicates a significant genotype and age interaction at p < 0.01. # indicates a significant age effect at # p < 0.05 or ## p < 0.01. * indicates a significant genotype effect at ** p < 0.01 or *** p < 0.001.
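The genotype, age and interaction effects reported above are consistent with a two-way (genotype × age) analysis of variance. Below is a minimal sketch of such an analysis in Python using statsmodels; the data layout, group sizes and ratio values are hypothetical placeholders, and statsmodels is an assumed tool rather than the software used in the study.

```python
# Minimal sketch of a two-way (genotype x age) ANOVA on spermidine/spermine ratios.
# All data below are hypothetical placeholders, not values from this study.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: one ratio per animal, two animals per cell.
df = pd.DataFrame({
    "genotype": ["WT"] * 10 + ["PS19"] * 10,
    "age_months": [2, 2, 4, 4, 6, 6, 8, 8, 12, 12] * 2,
    "ratio": [0.41, 0.43, 0.40, 0.44, 0.42, 0.45, 0.43, 0.41, 0.44, 0.42,   # WT
              0.42, 0.44, 0.46, 0.43, 0.48, 0.52, 0.61, 0.66, 0.74, 0.79],  # PS19
})

# Fit a linear model with genotype, age and their interaction, then run the ANOVA.
model = ols("ratio ~ C(genotype) * C(age_months)", data=df).fit()
print(anova_lm(model, typ=2))
```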
Neurochemical and Behavioural Correlations

In the present study, the brain tissue samples were collected from WT and PS19 mice at 2, 4, 6, 8 and 12 months of age that underwent a battery of behavioral tests in the elevated plus-maze, open field, Y-maze and the working memory version of the Morris water maze [12]. These PS19 mice displayed hyperactivity and reduced anxiety levels with age and early and persistent spatial working memory deficits, as detailed in our previous publication [12]. We, therefore, determined neurochemical-behavioural correlations using simple linear regression, and, interestingly, several significant correlations were observed primarily in 8-month-old PS19 mice. When tested in the elevated plus-maze, PS19 mice at 8 months of age spent 85% more time in the open arms (reduced anxiety level) and made 28% more total arm entries (hyperactive) when compared to their age-matched WT controls [12]. Interestingly, the time 8-month-old PS19 mice spent in the open arms was positively correlated with their putrescine (r = 0.91, p = 0.002; Figure 7A) and spermidine (r = 0.75, p = 0.03; Figure 7B) levels in the frontal cortex and with spermidine in the parahippocampal region (r = 0.90, p = 0.0023; Figure 7C). Moreover, their total number of arm entries was positively correlated with spermidine in the hippocampus (r = 0.84, p = 0.009; Figure 7D) and parahippocampal region (r = 0.82, p = 0.01; Figure 7E). PS19 mice at 8 months of age also generated a 20% longer path length in the open field test (hyperactive) [12]. We observed a significant positive correlation between their path length and spermidine in the hippocampus (r = 0.94, p = 0.0005; Figure 7F). These findings suggest that higher levels of the polyamines putrescine and/or spermidine are associated with the reduced anxiety and hyperactivity seen in 8-month-old PS19 mice [12].

Discussion

Recent human and animal research has implicated altered brain arginine metabolism in the pathogenesis of tauopathies, whereas the higher-order polyamines (spermine in particular) protect against tau fibrillization and reduce the formation of toxic tau species [25,26,28-31,38]. PS19 mice bearing the MAPT P301S mutation show an early and rapidly progressing neurodegenerative phenotype of tauopathy, and hence are a valuable transgenic mouse model to better understand the onset and development of tauopathies.
These tau mice display soluble p-tau species in the brain along with mild spatial learning and memory deficits (that worsen with age), synaptic loss, brain deposition of tau aggregates, hippocampal neuronal loss and widespread atrophy at 2, 3, 6, 8 and 12 months of age, respectively [7,12]. Moreover, PS19 mice have altered brain arginine metabolism with a clear shift to favor the arginase-polyamine pathway, resulting in the overproduction of the polyamines putrescine and spermidine, but interestingly not spermine [28]. The present study, for the first time, carried out a time-course study across the disease progression of PS19 mice (starting at the prodromal age of 2 months) to systematically investigate how arginine metabolism (with an emphasis on the polyamine system) changed in the brains of PS19 mice at 2-12 months of age and the associations between these changes in PS19 mice and their behavioural impairments. Our results demonstrated altered arginine metabolism, the polyamine system in particular, in the PS19 brain with the upregulation of polyamine retro-conversion pathways. Altered Brain Arginine Metabolism in PS19 Mice As illustrated in Figure 1, L-arginine is a versatile amino acid with several bioactive metabolites. Therefore, we first quantified the tissue content of L-arginine and nine of its downstream metabolites in the frontal cortex, hippocampus, parahippocampal region, striatum and cerebellum of PS19 mice and their WT littermates at 2, 4, 6, 8 and 12 months of age. Regarding L-arginine, PS19 mice had significantly increased levels in the striatum (2 and 6 months), parahippocampal region (6-12 months) and hippocampus (8-12 months). In terms of its three direct metabolites L-citrulline, L-ornithine and agmatine, we found that their levels were either unchanged or increased in the brain of PS19 mice. For example, PS19 mice displayed increased levels of L-citrulline in the striatum (2 months), hippocampus (2-8 months), parahippocampal region (6 months; although reduced levels at 12 months), frontal cortex (8 months) and cerebellum (12 months). More consistent changes were seen in L-ornithine, with increased levels in PS19 mice at the age of 2 (hippocampus, parahippocampal region, striatum), 4 (frontal cortex), 6 (hippocampus and striatum), 8 and 12 months (all five regions). Regarding agmatine, PS19 mice showed increased levels in the hippocampus and parahippocampal region at 12 months of age, although a small reduction was seen in the striatum across all age points. These results clearly demonstrate genotype-related alterations in L-arginine and its three direct metabolites in PS19 mice in a region-specific and age-dependent manner, which are largely consistent with our previous study using WT and PS19 mice at 4, 8 and 12-14 months of age [28]. While the MAPT P301S mutation affected all three direct metabolic pathways of L-arginine, the arginase pathway appeared to be severely affected in PS19 mice as evidenced by the early and persistent increases in L-ornithine. It is of interest to emphasize the increased L-arginine levels observed in the present study and reported in our earlier publication [28]. As the de novo synthesis of L-arginine involves the recycling of L-citrulline by ASS and ASL (Figure 1) [14][15][16], we measured their protein expression in the parahippocampal region. 
The findings of increased protein levels of ASS and ASL in PS19 mice at 2-8 months of age confirmed the upregulation of the L-citrulline recycling pathway, which could account for the increased L-arginine levels in PS19 mice mentioned above. Interestingly, while no significant increases in L-arginine were evident in the parahippocampal region of PS19 mice at 2 and 4 months of age, western blot showed genotype-related increases in ASS and ASL protein expression accompanied by no changes in L-citrulline. More intriguingly, PS19 mice at 12 months of age displayed increased levels of L-arginine, but reduced levels of L-citrulline (also [28]) and ASS and ASL protein expression in the parahippocampal region. It is currently unclear whether this is a compensatory mechanism aiming to normalise L-arginine. L-ornithine can be channeled to form glutamate, which, in turn, can be metabolized to GABA and glutamine (Figure 1) [14][15][16]. In the present study, we found reduced glutamate levels in the frontal cortex (6 months), hippocampus and parahippocampal region (both 8 and 12 months) and the cerebellum (4 and 6 months) of PS19 mice. Regarding GABA, there were transient increases in the parahippocampal region (6 months), frontal cortex and striatum (both 8 months), but a decrease in the cerebellum (4 months), in PS19 mice. In terms of glutamine, PS19 mice displayed increased levels in the frontal cortex (12 months), hippocampus (6-12 months), parahippocampal region (6 months), striatum (6 and 12 months) and cerebellum (12 months). Taken together, there appeared to be a shift from glutamate to favour glutamine production in PS19 mice, particularly at older ages, which was further supported by the increased glutamine/glutamate ratios in PS19 mice at 6-12 months of age in all five brain regions examined. Similar findings were also observed in our earlier study [28], in which increased glutamine/glutamate ratios were found in 12-14-months-old PS19 mice in the five out of six brain regions examined. Collectively, these findings indicate that MAPT P301S mutation significantly affects the glutamate-glutamine relationship in the brain with age. It has been well documented that glutamate released by neurons is taken up by astrocytes and converted to glutamine by glutamine synthase, which can, in turn, be transported back to neurons and reconverted to glutamate by glutaminase [37,39]. Given the important role of the glutamate-glutamine cycle in neurotransmission, such widespread increases in the glutamine/glutamate ratios may contribute significantly to the behavioural deficits seen in PS19 mice [12]. Upregulated Arginase-Polyamine Pathway in PS19 Mice L-ornithine can also be converted to polyamine putrescine by the rate-limiting (for polyamine biosynthesis) enzyme ODC (Figure 1) [14,17,18]. We found increased putrescine levels in the brain of PS19 mice, with sustained changes in the frontal cortex (8-12 months), hippocampus (6-12 months) and parahippocampal region (all age points) although with more transient changes in the striatum (2 and 12 months) and cerebellum (12 months). In conjunction with the genotype-related increases in L-ornithine described above, these results indicate the upregulated arginase-ODC pathway in the brain of PS19 mice, which was further supported by increased mRNA and protein expression of arginase I/II and ODC. 
It should be pointed out, however, that the increased arginase II and ODC mRNA expression in the parahippocampal region of 12-month-old PS19 mice were not emulated in the protein expression. Previous research has reported that protein synthesis is impaired in tauopathies, as tau disrupts translation and ribosomal function [40][41][42][43], which could explain the discrepancy between our mRNA and protein levels. Our results are also in contrast to the findings of an earlier study, in which 12-month-old rTg4510 mice harboring the MAPT P301L mutation displayed a reduced ODC protein level in the hippocampus [26]. Putrescine can also be synthesised by AGMAT from agmatine, decarboxylated arginine produced by ADC (Figure 1) [14][15][16]. In the present study, we found few genotype-related changes in ADC and AGMAT mRNA expression in the brain (apart from the hippocampus), indicating that the putrescine synthesis via agmatine was not largely affected in PS19 mice. However, the increased levels of agmatine and AGMAT mRNA expression in the hippocampus of PS19 mice at older ages suggest that agmatine might have contributed to the genotype-related increases in hippocampal putrescine to a certain extent. Using post-mortem human brain tissue, earlier studies have reported that both the arginase and ODC pathways are drastically affected in AD [25,[29][30][31]44]. In the present study, we found early and persistent changes in arginase I/II, L-ornithine, ODC and putrescine in the brain of PS19 mice. These findings seem to implicate an altered arginasepolyamine pathway in the pathogenesis of tauopathies. It is of interest to note, however, that arginase I upregulation induced by interleukin-4 improved learning and memory in the 3xTg mouse model of AD [45]. Sustained overexpression of arginase I also markedly reduced tau pathology and kinases involved in tau phosphorylation and mitigated hippocampal atrophy in rTg4510 tau mice [38], whereas ODC upregulation appears to be detrimental in PS19 mice [25]. ODC antizyme is a small regulatory protein that inhibits ODC activity, promotes ODC degradation and down-regulates polyamine uptake, and itself can be regulated by agmatine and antizyme inhibitors [46][47][48]. It has been shown that these ODC regulators (ODC antizymes and antizyme inhibitors) are upregulated in AD brains [25]. Interestingly, overexpression of antizyme inhibitor 2 (hence increased ODC activity) augmented tau neuropathology and cognitive impairments in PS19 mice [25]. In view of these results, further work is required to better understand the implication of ODC dysregulation in the pathogenesis of tauopathies. Polyamine System Dysfunction in PS19 Mice The higher-order polyamines spermidine and spermine can be formed through de novo synthesis by SPDS (from putrescine) and SMS (from spermidine), respectively, or channelled back through the retro-conversion pathways via SMOX and SSAT1/PAO (Figure 1) [17,18]. Alike to putrescine, spermidine levels were drastically increased in the frontal cortex (12 months), hippocampus and parahippocampal region (both 8 and 12 months), and striatum (2 and 12 months) of PS19 mice. Since the increased spermidine in PS19 mice was found alongside increased SPDS mRNA expression in the hippocampus and parahippocampal region, putrescine is likely an important source for increased spermidine levels in these regions. 
Regarding spermine, intriguingly, we found no changes or mild reductions in PS19 mice in all five brain regions examined and no parallel genotype-related increases between spermidine and spermine. When SMS was investigated, its mRNA expression levels were unaltered in the brain of PS19 mice, except for the cerebellum. The spermidine/spermine ratio is normally tightly conserved for normal cellular function [34][35][36]. However, the differential effects of the MAPT P301S mutation on the two higher-order polyamines resulted in the increased spermidine/spermine ratios in PS19 mice primarily in the cerebral brain areas in an age-dependent manner. The increased putrescine and spermidine levels, but decreased or unchanged spermine concentrations, alongside unchanged SMS in the cerebral brain areas of PS19 mice, led us to investigate how the enzymes involved in the direct (SMOX) and indirect (SSAT1 and PAO) polyamine retro-conversion pathways changed in the brain. Our RT-qPCR work revealed genotype-related increases in the mRNA expression of SMOX (in the hippocampus, parahippocampal region and cerebellum), SSAT1 (the parahippocampal region only) and PAO (in all four regions examined), therefore confirming increased polyamine retro-conversion in PS19 mice. Using the whole brain tissue, moreover, Sandusky-Beltran et al. [26] reported increased levels of putrescine, SPDS, SMOX and SSAT1 expression (although reduced SMS expression) in 8-12-month-old rTg4510 mice. Taken together, these findings demonstrate that MAPT mutations lead to polyamine system dysfunction in the brain, which involves the dysregulation of the de novo synthesis pathway and the upregulation of both the direct and indirect retro-conversion pathways. It should be noted that cellular polyamine transporters constitute another key regulator of the polyamine system and have been genetically linked to the early onset forms of Parkinson's disease [49]. Future research is required to explore their role in the dysregulation of the polyamine system in tauopathies. Polyamines are normally tightly regulated through complex feedback mechanisms to maintain and modulate their key physiological functions, including DNA, RNA and protein synthesis, cell proliferation and differentiation, microtubule assembly and stabilization, as well as neurotransmitter receptor regulation [17,18,[20][21][22][23][24]. However, transient increases in polyamines (known as the polyamine stress response, or PSR) can occur following exposure to stress signals to promote cell survival mechanisms [50][51][52]. The results of the present study demonstrate the presence of a sustained PSR in PS19 mice, as evidenced by the prolonged elevation of putrescine and spermidine and increased spermidine/spermine ratio in the brain. It has been shown that polyamines can regulate microtubule homeostasis and prevent tau fibrillization [22][23][24][25][26]. However, high levels of spermidine can induce SSAT1 to initiate the polyamine retro-conversion via the indirect pathway [53,54], and acetylated polyamines can exacerbate tau aggregation and seeding [25,26], hence augmenting tau pathology. Moreover, the polyamine retro-conversion enzymes produce toxic by-products, including highly reactive oxygen species, aldehydes and acrolein [32,33,[55][56][57][58], in addition to acetylated polyamines, which are elevated in tauopathies and are associated with the accumulation of pathological tau species [25,26]. 
A reduced spermine level as a consequence of polyamine retro-conversion could also be devastating for the brain, as spermine is the most potent polyamine in regulating cell survival, neurotransmission, and microtubule polymerisation and stabilisation, and in combating tau pathology by reducing tau aggregation into its more toxic fibrillar and oligomeric forms [21,22,[24][25][26]59]. Furthermore, high levels of polyamines can be neurotoxic, due to their modulatory roles on N-methyl-D-aspartate (NMDA) receptors [60][61][62][63]. Accordingly, the sustained PSR in PS19 mice is likely detrimental. Taken together, our study supports the idea that tauopathies are a result of a chronic dysregulated PSR [26,51], although this idea needs to be further validated and the underlying mechanisms remain to be investigated. It has been well documented that polyamines are critically involved in learning and memory processes [17][18][19]. PS19 mice used in the present study displayed hyperactivity and reduced anxiety levels with age, and early and persistent spatial working memory deficits [12]. While there were no direct correlations between the polyamines and working memory deficits, higher levels of the polyamines putrescine and/or spermidine appeared to be associated with the reduced anxiety and hyperactivity seen in 8-month-old PS19 mice. The polyamine system has been implicated in various mental disorders, with genetic polymorphisms in SSAT1 and SMS linked to anxiety disorders and altered polyamine levels associated with psychological stress [64][65][66][67]. It is of interest to note that hyperactivity is a major phenotype of SMS-deficient mice, which exhibit reduced spermine levels and an increased spermidine/spermine ratio in the brain [36,68]. Conclusions The present study, for the first time, systematically investigated the time-course of changes in brain arginine metabolism, particularly the polyamine system, in PS19 mice at 2-12 months of age (from prodromal to severe disease-like stages). Consistent with our earlier research [28], the MAPT P301S mutation affected brain arginine metabolism drastically in a region-specific and age-dependent manner. Importantly, our findings largely replicated genotype-related neurochemical changes and a shift of arginine metabolism towards the arginase-polyamine pathway [28], further demonstrating the upregulation of the L-citrulline recycling, arginase-ODC and polyamine retro-conversion pathways (Figure 1), and the association between altered polyamines (putrescine and spermidine) and the reduced anxiety and hyperactivity behaviours observed in PS19 mice at 8 months. The brain regions affected most severely in PS19 mice appeared to be the hippocampus and the parahippocampal region, which are important for cognition and mood and are affected early and severely in several tauopathies [69][70][71][72][73][74]. While most genotype-related changes observed in this study occurred at the older age points, PS19 mice at 2 months of age displayed increased levels of L-ornithine in three brain regions, alongside increases in putrescine, spermidine and the enzymes ODC, ASS and ASL. It is of interest to note that PS19 mice at such young ages already showed mild working memory deficits and the accumulation of soluble p-tau species in the brain in the absence of overt neuronal or synaptic loss [7,12]. Hence, our findings further implicate altered arginine metabolism, particularly the polyamine system (including a sustained PSR), in the pathogenesis of tauopathies. 
We propose that the PSR was initially induced in PS19 mice, perhaps as a neuroprotective response to counter the accumulation of p-tau by stabilising microtubules and/or sequestering tau in its less toxic unaggregated forms. With age, the polyamine retro-conversion pathways were upregulated in the PS19 mice, leading to the prolonged overproduction of putrescine and spermidine. The sustained PSR and the toxic polyamine retro-conversion by-products augmented tau pathology and cognitive dysfunction. Future research is required to better understand the functional significance of the PSR in tauopathies and to explore preventive and/or therapeutic opportunities by targeting the polyamine system. Animals Male P301S MAPT transgenic (PS19) mice (B6;C3-Tg(Prnp-MAPT*P301S)PS19Vle/J; stock number: 008169; Jackson Laboratory) and C57BL/6J female mice were crossed to produce PS19 and WT offspring that were confirmed by tail tip genotyping. The present study used male PS19 mice and their age-matched WT littermates at 2, 4, 6, 8 and 12 months of age (n = 7-16/genotype/age), which underwent a battery of behavioural tests and cerebral blood flow assessments in our previous study [12]. All animals were group-housed in 15 × 20 × 38 cm³ polypropylene individually ventilated cages, maintained on a 12 h light/dark cycle regime (lights on at 7:00), and provided ad libitum access to food and water. Animals' body weights and general health conditions were closely monitored. All experimental procedures were carried out in accordance with the regulations of the University of Otago Animal Ethics Committee. Every attempt was made to reduce the number of animals used and to minimise their suffering. Brain Tissue Collection Brain tissue collection was performed 2-3 days after the completion of the last behavioural test following a cerebral blood flow assessment [12,[75][76][77]. Each animal was transcardially perfused with ice-cold saline, and the brain was rapidly removed and kept in cold saline (4 °C) for at least 45 s. The frontal cortex (FC), whole hippocampus (HP), parahippocampal region (PH containing the entorhinal, perirhinal and postrhinal cortices), striatum (ST) and cerebellum (CE) were dissected freshly on ice from each hemisphere, immediately snap-frozen on dry ice and stored at −80 °C for the HPLC and LC/MS assays, RT-qPCR and western blot. HPLC and LC/MS Assays Brain tissue samples (from the FC, HP, PH, ST and CE) were weighed, homogenised in ice-cold 10% perchloric acid (~50 mg wet weight per mL) and centrifuged at 10,000× g for 10 min. The perchloric acid extracts (supernatants) were then stored at −80 °C until the HPLC and LC/MS assays. The brain tissue concentrations of L-arginine, L-citrulline, L-ornithine, glutamate, glutamine, GABA, spermidine and spermine were measured using HPLC, while agmatine and putrescine concentrations were measured by a highly sensitive LC/MS method as described in detail in our previous publications [28,31,76,78]. High-purity external and internal standards were used (Sigma, Sydney, Australia) and all other chemicals were of analytical grade. For each brain region, the samples from the PS19 and WT mice at all five age points were assayed in duplicate simultaneously in a counterbalanced manner. The concentrations of L-arginine and its nine downstream metabolites in each tissue were calculated with reference to the peak area of external standards, and values were expressed as µg/g wet tissue. 
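As a rough illustration of the external-standard quantification just described, the sketch below shows a single-point calibration in Python. The function and variable names are hypothetical, and the actual assay may additionally use multi-point calibration curves and internal-standard correction.

```python
def metabolite_concentration(peak_area_sample, peak_area_standard,
                             standard_conc_ug_per_ml, extract_volume_ml,
                             tissue_weight_g):
    """Single-point external calibration (a sketch, not the published pipeline).

    Assumes the analyte response is linear, so the concentration in the extract
    scales with the ratio of sample to standard peak areas.
    """
    conc_in_extract = (peak_area_sample / peak_area_standard) * standard_conc_ug_per_ml
    # convert from concentration in the perchloric-acid extract to ug per g wet tissue
    return conc_in_extract * extract_volume_ml / tissue_weight_g
```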
The experimenters were blind to the grouping information at the time of the assay. RNA Extraction, cDNA Synthesis and RT-qPCR RT-qPCR was employed to measure the mRNA expression of ADC, AGMAT, arginase I and II, ODC, SPDS, SMS, SMOX, SSAT1 and PAO in the FC, HP, PH and CE tissue of 8- and 12-month-old PS19 mice and their age-matched controls. Total RNA from brain tissue (20-30 mg) was extracted using a mirVana™ PARIS™ Protein and RNA Isolation System (Thermo Fisher Scientific, AM1556) according to the manufacturer's instructions. RNA quality and quantity were determined by spectrophotometric analysis (NanoDrop™ ND-1000, Thermo Fisher Scientific, Waltham, MA, USA). A High Capacity RNA-to-cDNA kit (Applied Biosystems, 4387406) was used to convert the RNA to cDNA as per the vendor's instructions on a Biometra TAdvanced PCR Thermal Cycler (Analytik Jena, Germany). RT-qPCR (in triplicate per sample) was performed with cDNA as a template using a LightCycler® 480 SYBR® Green I Master kit (Roche, 04707516001, Basel, Switzerland) on a ViiA™ Real-Time PCR System (Applied Biosystems, Waltham, MA, USA) in a final volume of 10 µL. The cycling conditions were 95 °C for 10 min, 40 cycles of 30 s each at 95 °C, 60 °C and 72 °C, followed by a melt curve. The primers used are depicted in Supplementary Table S1. Efficiency of the primers was tested using standard curves made from the same batch of cDNA. Analysis of the melt curve indicated no non-specific amplification or primer dimer formation. Plate-to-plate variations were corrected to a calibrator sample, prepared from an independent cDNA sample that was not part of the cohort. Relative mRNA levels were normalized against the housekeeping genes glyceraldehyde 3-phosphate dehydrogenase (GAPDH) and hypoxanthine-guanine phosphoribosyltransferase (HPRT), and presented as 2^−ΔΔCt. Western Blot Brain samples were homogenised in protease-inhibitory buffer (50 mM Tris-HCl (pH 7.4), 10 µM phenylmethylsulfonyl fluoride, 15 µM pepstatin A and 2 µM leupeptin) on ice, and centrifuged at 12,000× g for 10 min at 4 °C. The supernatant was collected and stored at −80 °C and the protein concentration was determined using the Bradford method [79]. The protein expression of arginase I and II, ASL, ASS and ODC in the parahippocampal region of PS19 mice in relation to their age-matched controls was then determined using western blots, as previously described [77]. All the samples were mixed with gel-loading buffer, containing 50 mM Tris-HCl, XT sample buffer (Bio-Rad Laboratories, 610791, Hercules, CA, USA) and reducing agent (Bio-Rad, 1610792), with the final protein concentration equalized to 2 mg/mL and boiled for 5 min. The samples (5-10 µL) were loaded into a Criterion™ XT 4-12% gradient Bis-Tris gel (Bio-Rad, 3450125), along with pre-stained protein markers (10-250 kDa; Bio-Rad) and a biological control sample. The loading order was counter-balanced between the samples from the genotype/age groups for each brain region. The separated proteins were transferred onto nitrocellulose blotting membranes using a transblotting apparatus (Bio-Rad) and then blocked with 5% bovine serum albumin (BSA) in Tris-buffered saline with 0.1% Tween-30. The membranes were then incubated with primary polyclonal goat antibody against arginase I (1:1000; Santa Cruz, sc-18355), ASL (1:1000; Santa Cruz, sc-68250) or ASS (1:1000; Santa Cruz, sc-46066), or monoclonal mouse antibody against arginase II (1:1000; Santa Cruz, sc-271443), or ODC (1:1000; Abcam, ab66067) overnight at 4 °C. 
As a loading control, a monoclonal mouse or rabbit antibody against the housekeeping protein GAPDH (1:100,000; Abcam, ab8245 or ab181602, respectively) was used. The following day, the membranes were probed with IRDye® 680RD goat anti-mouse IgG antibody, IRDye® 680RD goat anti-rabbit IgG antibody, IRDye® 680RD donkey anti-mouse IgG antibody, IRDye® 800CW goat anti-mouse IgG antibody or IRDye® 800CW donkey anti-goat IgG antibody (all 1:10,000; LI-COR Biosciences, Lincoln, NE, USA). Detection of the immunoreactive signal bands was performed using an Odyssey® CLx Imager (LI-COR). Signals were quantified with Odyssey CLx Image Studio software (LI-COR) and normalized to the corresponding GAPDH loading controls and biological controls to account for inter-gel variation. The experimenters were blind to the grouping information at the time of the assay and analysis. Statistical Analysis All data from Sections 2.1-2.3 were analysed using two-way analysis of variance (ANOVA) to determine the effects of genotype, age and their interaction, followed by post-hoc tests (Fisher's LSD for Sections 2.1 and 2.3, or Šídák tests for Section 2.2). Simple linear regression was used to determine the correlations between the neurochemical variables and previously reported behavioural data [12] in Section 2.4. Statistical analyses were performed using GraphPad Prism software (Version 9.3.1), and all data were presented as
8,103.2
2022-05-27T00:00:00.000
[ "Biology" ]
A radon-thoron isotope pair as a reliable earthquake precursor Abnormal increases in radon (222Rn, half-life = 3.82 days) activity have occasionally been observed in underground environments before major earthquakes. However, 222Rn alone could not be used to forecast earthquakes since it can also be increased due to diffusive inputs over its lifetime. Here, we show that a very short-lived isotope, thoron (220Rn, half-life = 55.6 s; mean life = 80 s), in a cave can record earthquake signals without interference from other environmental effects. We monitored 220Rn together with 222Rn in the air of a limestone cave in Korea for one year. Unusually large 220Rn peaks were observed only in February 2011, preceding the 2011 M9.0 Tohoku-Oki Earthquake, Japan, while large 222Rn peaks were observed in both February 2011 and the summer. Based on our analyses, we suggest that the anomalous peaks of 222Rn and 220Rn activities observed in February were precursory signals related to the Tohoku-Oki Earthquake. Thus, the 220Rn-222Rn combined isotope pair method can present new opportunities for earthquake forecasting if the technique is extensively employed in earthquake monitoring networks around the world. Various geochemical parameters, including stable isotope ratios (δ2H and δ18O), have been monitored in numerous fault and volcanic zones for earthquake prediction [1][2][3][4][5][6]. Such precursor signals are anticipated by the formation of microcracks in fault zones or by groundwater mixing due to crustal dilation 1,6 . Stresses that develop prior to an earthquake are thought to be responsible for the release and accumulation of certain constituents that may be useful as tracers or precursors of these tectonic forces. However, these potential precursors have not been extensively used to forecast tectonic or volcanic activities because abnormal increases of these components also occur due to other environmental processes, including changing meteorological conditions. Among the potential earthquake precursors, 222 Rn in soil and groundwater has shown the highest sensitivity because of its radioactive nature and origin in the subsurface [7][8][9][10][11][12] . However, 222 Rn still suffers from interferences by meteorological phenomena and tidal forces 10,13,14 . In order to test the concept of a dual isotopic tracer, we monitored both 222 Rn and 220 Rn every hour in a limestone cave (Seongryu Cave, Korea) from May 18, 2010 to June 17, 2011 (see Methods). To our knowledge, this represents the first study to evaluate this 222 Rn-220 Rn isotope pair in an underground environment as a precursor of earthquakes. Seongryu Cave, which is ~250 Ma in age, is located in Seonyu Mountain (elevation: 199 m) in the eastern part of Korea (Fig. 1). The cave lies 20 m above mean sea level, is ~330 m in length and 1-13 m in height, and has an entrance of ~1 m². The main cave contains many branches, including three lakes, of which the two located near the entrance are affected by an outside stream 15 . A more detailed description of this cave is available in Oh and Kim 16 . For the 220 Rn measurements, the position of the air-inlet above the cave floor is critical because 220 Rn decays almost immediately (~5 minutes for 97% decay) after emanation from the source. Since 220 Rn in the soil air should be in equilibrium with its parent 224 Ra (even in the skin layer) due to its short half-life, variations in the 220 Rn activity of soil air, including that of the porewater in rocks and soils, would not be useful. 
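The "~5 minutes for 97% decay" figure quoted above follows directly from the 55.6 s half-life; a quick numerical check (a sketch using only the numbers given in the text):

```python
import math

HALF_LIFE_S = 55.6            # 220Rn half-life quoted above
t = 5 * 60                    # five minutes, in seconds

remaining = 0.5 ** (t / HALF_LIFE_S)
print(f"fraction remaining after {t} s: {remaining:.3f}")   # ~0.024
print(f"fraction decayed: {1 - remaining:.1%}")             # ~97-98%
```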
Therefore, the air-inlet must be positioned at a height where the inputs of 220 Rn from general environmental processes are minimal, but also where large inputs from earthquakes are detectable. In addition, the monitoring location should be isolated from the outside air, which can also generate 220 Rn anomalies due to wind-driven skin flow through rocks and soils. The atmosphere in the first 130 m of Seongryu Cave is influenced by the outside air, while the atmosphere in the inner cave is almost stagnant 15 . Thus, in this study we chose an air-inlet position 0.2 m above the cave floor and 180 m from the entrance (Fig. 1). At this position, the noise from general meteorological driving forces was minimal and the meteorological conditions were relatively constant for both 220 Rn and 222 Rn. The activity of 220 Rn was not detectable when the air-inlet was positioned 1.5 m above the cave floor, due to its short half-life (see Supplementary Fig. S1). The activities of 220 Rn were higher at the entrance site than at the monitoring site due to the influence of wind-driven skin flow (see Supplementary Fig. S2). At the monitoring site, 222 Rn was both significantly more stable and more enriched relative to the entrance site (see Supplementary Fig. S2). Furthermore, since the activity ratios of 222 Rn to 220 Rn are high in the normal natural environment, there is a possibility of counts from 222 Rn spilling over to 220 Rn (see Methods). A previous study 16 has shown that, without a correction for this spillover, one could observe erroneous positive correlations between 222 Rn and 220 Rn. Over the monitoring period, 222 Rn activity was on average higher during the summer (avg. 645 ± 194 Bq m⁻³) than during the winter (avg. 140 ± 172 Bq m⁻³) (Fig. 2a), which is typical of 222 Rn observations in caves around the world 17,18 . This seasonal variation in 222 Rn activity is known to be due to the difference in air ventilation intensity. In contrast, 220 Rn activity was on average higher in the winter (avg. 9.7 ± 10.1 Bq m⁻³) than in the summer (avg. 1.2 ± 3.7 Bq m⁻³) (Fig. 2b). In particular, we noted that both 222 Rn and 220 Rn showed high peaks in February 2011 that are decoupled from the general seasonal patterns (Fig. 3). Outside weather parameters (temperature, relative humidity, and pressure) showed large variations (−13.8 to 35.2 °C, 7.5-97.9%, and 990.9-1033.6 mbar, respectively) relative to the inside air (10.7-15.7 °C, 94-99.9%, and 999.0-1034.8 mbar, respectively). With the exception of the anomalous peaks (red circles in Fig. 3), the daily average of 222 Rn activities showed a significant positive correlation (n = 234, r² = 0.66) with the daily average of the temperature (density) difference (the inside temperature subtracted from the outside temperature) (Fig. 3a), which is typical of cave air behavior 17,18 . In general, the 222 Rn activities in the outside air are approximately two orders of magnitude lower than those in cave air. Thus, the lower 222 Rn activities in winter could be due to greater ventilation of the denser outside air. In contrast, during the summer, 222 Rn is trapped inside the cave due to the atmospheric stratification. In addition to the density difference, in some regions the land surface humidity affects the activity of 222 Rn in the underground air because pore space can be affected by water vapor condensation 19 . As such, we observed a positive correlation (n = 396, r² = 0.57) between 222 Rn activity and the relative humidity of the outside air. 
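A minimal sketch of how such daily-average correlations could be computed from the hourly records, assuming the measurements are held in a pandas DataFrame; the file name, column names and the anomaly flag are hypothetical placeholders.

```python
import pandas as pd
from scipy import stats

# hourly records; columns are hypothetical placeholders
df = pd.read_csv("seongryu_hourly.csv", parse_dates=["time"], index_col="time")
daily = df.resample("D").mean()
daily["temp_diff"] = daily["temp_outside"] - daily["temp_inside"]

# exclude the anomalous February peaks, as done in the text
normal = daily[~daily["anomaly_flag"].astype(bool)].dropna()

r, p = stats.pearsonr(normal["rn222"], normal["temp_diff"])
print(f"222Rn vs. temperature difference: r^2 = {r**2:.2f} (n = {len(normal)}, p = {p:.2g})")
```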
The temperature and relative humidity of the inside air remained fairly constant, and they correlated weakly with 222 Rn activities (r² = 0.27 and r² = 0.32, respectively). Precipitation and outside pressure showed no significant correlations with 222 Rn activity (r² < 0.1 and r² = 0.12, respectively). Thus, the variations in 222 Rn activities in the cave seem to be predominantly affected by variations in the outside temperature and humidity. In contrast to 222 Rn, the daily average of 220 Rn activities, except for anomalous peaks, showed a negative correlation (n = 234, r² = 0.51) with temperature difference (Fig. 3b) and relative humidity (r² = 0.41). However, poor correlations were observed between the activities of 220 Rn and outside pressure and precipitation (r² = 0.15 and r² < 0.1, respectively). 220 Rn activities in the winter were higher than those in the summer, resulting from the stronger ventilation in the cold season, which drives rapid advection of pore air (with 220 Rn already in equilibrium with 224 Ra) into the cave. The significant 222 Rn and 220 Rn anomalies observed in February 2011 (Fig. 3a-c) cannot be explained by normal meteorological variations, including episodic precipitation events (Fig. 3d). Thus, we consider that these anomalies may have been precursors of the Tohoku-Oki Earthquake, which occurred approximately one month later (Fig. 2). A recent study 20 showed that the Tohoku-Oki Earthquake was preceded by a series of small earthquakes that started on 13 February 2011. Our results showed that 222 Rn alone could not distinguish the February anomalies from the summer peaks (Fig. 3a); however, there are clear anomalous signals based on 220 Rn alone or on the combined 222 Rn vs. 220 Rn plots (Fig. 3b,c). In general, carrier gases (CO 2 , CH 4 , Ar, and He) play a critical role in controlling the migration and transport of trace gases (e.g., 222 Rn) towards the surface 21,22 . From our results, we assume that the degassing of carrier gases peaked on February 12, 2011, reduced continuously until March 1, 2011, and then almost stopped on the day of the Tohoku-Oki Earthquake. Many studies have reported that carrier-gas anomalies occurred days to weeks before earthquakes 23 . Based on our results, we present three points of evidence to indicate that the 220 Rn-222 Rn isotope pair may be an excellent precursor of earthquakes: (1) 220 Rn peaks during the anomalous period were much higher than those during normal periods over the year. The observed anomalies cannot be explained by any normal environmental conditions during the monitoring period (Figs 2 and 3); (2) A positive correlation was observed between 220 Rn and 222 Rn during the anomalous period, perhaps due to the venting of carrier gases (e.g., CO 2 ) from the sub-surface. On the other hand, negative correlations were observed more generally (Fig. 3c); (3) The peak hours of the 220 Rn and 222 Rn anomalies were episodic and decoupled from normal diurnal patterns (see Supplementary Fig. S3). In general, 222 Rn showed a diurnal fluctuation pattern, particularly in spring and fall when temperature differences between day and night were largest, although this pattern is not seen for the short-lived 220 Rn. 
Although the monitoring site in this study is ~1200 km distant from the epicenter of the Tohoku-Oki Earthquake, the earthquake impacted the Korean Peninsula in a number of ways. The Korean Peninsula is located on the Eurasian tectonic plate, which extends to Japan. As a result of the Tohoku-Oki Earthquake it was estimated to have moved eastward by 1.2-5.6 cm 24 . In addition, 46 out of 320 monitoring wells in Korea showed changes in water level, temperature, and electrical conductivity as a result of the earthquake 25,26 . Therefore, it is not surprising that the radon isotope anomalies observed in Seongryu Cave may represent precursors of this extremely large earthquake. On the basis of our observations of 222 Rn and 220 Rn, we suggest that a network of 222 Rn-220 Rn monitoring stations should be constructed to further verify the potential of this method for forecasting the locations and strengths of pending earthquakes. In order to filter out other environmental forcing factors, meteorological parameters and potential carrier gases (CO 2 , CH 4 ) should also be monitored. Most importantly, we have to carefully select suitable natural or artificial cave systems for these stations. We can easily develop more sensitive 220 Rn monitoring systems, data transmission setups to remote laboratories, and institute a canary program to automatically detect potential earthquake signals. Methods A detailed description of the RAD7 radon monitor is available in Burnett et al. 27 and Lane-Smith et al. 28 . Briefly, the RAD7 uses a silicon alpha detector to determine the daughters of 222 Rn and 220 Rn, 218 Po (t 1/2 = 3.05 min; 6.00 MeV), 214 Po (t 1/2 = 164 μ s; 7.67 MeV), and 216 Po (t 1/2 = 0.15 s; 6.78 MeV). The surface of the detector uses electrostatic attraction to capture Po + ions using an electric potential of 2000 to 2500 volts, and the alpha detector counts 218 Po, 216 Po, and 214 Po alpha decays. We used both 214 Po and 218 Po peaks for the 222 Rn measurements. An air filter is used at the entrance of the RAD7 to prevent dust particles and charged ions from entering the radon chamber. The internal air pump of the RAD7 (flow rate: 1 L min -1 ) was activated for 1 minute every 5 minutes to reduce maintenance labor in the humid cave air. In order to maintain relative humidity of < 10%, which is necessary for a constant detection efficiency of the RAD7, a desiccant column and a passive moisture exchanger (DRYSTIK, Durridge Co.) were coupled to the air path of the RAD7. During the measurement period, a new desiccant column was replaced every 3 or 4 weeks. In order to obtain accurate 220 Rn activity data in the presence of extremely high 222 Rn levels, we corrected for the 'spillover effect' of 222 Rn to 220 Rn by using the method of Chanyotha et al. 29 . Briefly, we assume that the efficiency of 220 Rn detection is a quarter of that for 222 Rn to account for thoron decay sample in the intake system (volume of sample tube + drying unit) and internal cell of the RAD7. For the correction of 'spillover effect' , the spill factor is assumed to be 0.015, which is an average value calibrated and measured by Durridge. The spillover from one of the radon channels (C: 214 Po) into the thoron channel (B: 216 Po) can be corrected by the analysis software (Capture) provided by Durridge. Due to a lack of a precise calibration for 220 Rn, the activity data are presented in 'arbitrary units' . 
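A minimal sketch of the spillover correction described above; the 0.015 spill factor and the factor-of-four lower thoron detection efficiency are taken from the text, while the function interface and count-rate inputs are hypothetical.

```python
def corrected_thoron_signal(counts_thoron_channel, counts_radon_channel,
                            spill_factor=0.015, relative_efficiency=0.25):
    """Remove 222Rn spillover from the 216Po (thoron) channel and rescale.

    counts_thoron_channel: raw count rate in the 216Po channel (B)
    counts_radon_channel:  raw count rate in the 214Po radon channel (C)
    """
    net = counts_thoron_channel - spill_factor * counts_radon_channel
    net = max(net, 0.0)                      # spillover can exceed a weak thoron signal
    return net / relative_efficiency         # thoron detected at ~1/4 the radon efficiency
```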
While there is uncertainty in the calibration of absolute 220 Rn activities, it does not affect the interpretation of our results since the same procedures and conditions were held constant during the measurement period. The atmospheric parameters (temperature, relative humidity, and pressure) in the cave were measured hourly using external sensors (MSR145, MSR electronics) and stored in a data logger. Outside weather parameters were obtained from the Korea Meteorological Administration (KMA).
3,416
2015-08-13T00:00:00.000
[ "Environmental Science", "Geology", "Physics" ]
Emergent Stem Cell Homeostasis in the C. elegans Germline Is Revealed by Hybrid Modeling The establishment of homeostasis among cell growth, differentiation, and apoptosis is of key importance for organogenesis. Stem cells respond to temporally and spatially regulated signals by switching from mitotic proliferation to asymmetric cell division and differentiation. Executable computer models of signaling pathways can accurately reproduce a wide range of biological phenomena by reducing detailed chemical kinetics to a discrete, finite form. Moreover, coordinated cell movements and physical cell-cell interactions are required for the formation of three-dimensional structures that are the building blocks of organs. To capture all these aspects, we have developed a hybrid executable/physical model describing stem cell proliferation, differentiation, and homeostasis in the Caenorhabditis elegans germline. Using this hybrid model, we are able to track cell lineages and dynamic cell movements during germ cell differentiation. We further show how apoptosis regulates germ cell homeostasis in the gonad, and propose a role for intercellular pressure in developmental control. Finally, we use the model to demonstrate how an executable model can be developed from the hybrid system, identifying a mechanism that ensures invariance in fate patterns in the presence of instability. INTRODUCTION Organogenesis in multicellular organisms is a highly reliable process, achieved by robust temporal and spatial signals transmitted and received by cells within a tissue. In this process, populations of mitotic and apoptotic cells within an organ achieve homeostasis. The movement of cells in a growing organ, triggered by cell division or death, may initiate signaling events and differentiation-thereby coupling controls explicitly to the cellular dynamics. An organ exemplifying this problem of multiscale control of development is the Caenorhabditis elegans germline ( Fig. 1 A) (1-3). The C. elegans gonad is formed by a pair of U-shaped tubes that are each connected with their proximal ends to a common uterus. In the distal region of each gonad arm, germ cells form a multinucleate syncytium, in which the germ-cell nuclei line the outer gonad perimeter and each nucleus is partially enclosed by a plasma membrane but connected by a shared cytoplasm (i.e., the rachis) that fills the inner part of the distal arm. In the bend region, which connects the distal and proximal gonad arms, the germ cells become cellularized and start oogenesis. As the differentiating, immature oocytes enter the proximal arm, they then grow in size, become stacked in single-file, and proceed toward the uterus. This process is controlled by the local signaling molecules present in different regions of the gonad. At the distal tip of each arm, a DELTA signal from the somatic distal tip cell activates NOTCH signaling to promote mitosis and establish a pool of regenerating stem cells (4)(5)(6)(7). As this stem cell niche fills, mitotic cells move out of the distal zone and no longer receive the DELTA signal from the distal tip cell. As a consequence, the cells enter meiosis (8,9). Continued pressure from mitotic division in the distal zone drives meiotic germ cells toward the bend region at the end of the distal arm. RAS/MAPK signaling is activated in the distal arm to promote progression through the pachytene stage and entry into diplotene (10)(11)(12)(13)(14). 
Finally, as the cells move through the bend into the proximal arm they enter diakinesis, turn off RAS/MAPK signaling, cellularize, and grow in size to form oocytes. However, it has been estimated that at least 50% of all germ cells undergo apoptosis at the end of the distal arm near the bend region, instead of initiating oogenesis (15,16). Hyperactivation of the RAS/MAPK signaling pathway causes, directly or indirectly, an increased rate of apoptosis (17)(18)(19). The immature oocytes in the proximal arm move toward the spermatheca at the proximal end, where a sperm signal induces oocyte maturation and cell cycle progression by reactivating the RAS/MAPK pathway. Thus, germ cell homeostasis is achieved by the competition of mitosis, fertilization, and apoptosis, which maintain a steady number of germ cells. This progression of states, mitosis / pachytene / diplotene / diakinesis, from the distal tip region up to the proximal gonad end, is invariant in the wild-type (20). Uniquely in C. elegans, mutations that activate germ cell mitosis lead to the formation of germ-cell tumors (20). The development of cells in the C. elegans germline is therefore controlled by the intersection of both physical forces exerted between cells and the internal signal transduction networks acting within individual cells. Models of the germline must therefore capture both of these phenomena to accurately describe the process. Executable models (also known as formal models) have been established as a powerful technique for describing cellular signaling networks (21)(22)(23)(24). In contrast to other types of models that aim to provide a literal representation of physico-chemical properties, executable models capture the underlying function of the cell in a more abstract description. In modeling the functional behavior of proteins and genes in a cell (20), we derive a finite, discrete model of development, which accurately describes observed behaviors (25)(26)(27). Such models have the further advantage of being amenable to model-checking approaches. These methods offer guarantees of model behavior through analytical techniques, while avoiding the need for explicit exhaustive simulation (28,29). Despite their successes, however, executable approaches cannot be easily applied to three-dimensional biophysical systems. Previously, Beyer et al. (30) showed that a molecular dynamics approach could be used in a physical model of the C. elegans germline to describe the physical motions of growing cells. In our approach, we applied the principles of Brownian dynamics to model the movement of entire cells in time and space. Brownian dynamics is a lattice-free, physically realistic implicit solvent model of physical motion originally developed for atomically detailed systems. Particles in Brownian dynamics simulations have no momentum due to a high friction environment, and as such offer a powerful framework for considering the motions of large entities such as cells. Moreover, it is being increasingly applied to the analysis of micrometer-scale cellular and physical systems (such as polymer/clay nanocomposites (31), rheological systems (32), cell swimming (33), and cell adhesion (34)), demonstrating its appropriateness for considering the dynamics of developing cells in a tissue. To accurately describe the development of the C. elegans germ cells, we have combined these two types of model (physical and executable) into a single hybrid, multiscale model. 
To link the physical and executable models, one requires an interface between the two models describing how spatial location and biophysical properties of the cells influence their signaling states, and vice versa. This interface defines how cells grow, divide, die, and respond to external signals. Here we present such a hybrid approach for modeling the development of oocytes from stem cells in the C. elegans gonad. We take two standard methods to describe executable signaling behavior (using qualitative networks (35)) and physical movements (using Brownian dynamics (36)), and combine them into a hybrid model. Using this hybrid model, we investigate the mechanisms preventing clonal dominance, the role of apoptosis in maintaining germ cell homeostasis, and the control of fate progression through directional flow. Cells in a hybrid model Each cell in the system consists of a single particle and a single qualitative network (QN) (35). Particles describe the physical parameters, including location in space, whereas the QN describes the signaling state of the cell. At each time step of the simulation, the state of the system is updated. A system update consists of all, some, or none of the following functions applied to the system: an update of the QN (following its formalism defined below); an update of the particle (using a Brownian dynamics integrator); and a hybrid update (changing the QN according to some state of the particle and vice versa, see Fig. 2). Each of these updates is described below.
FIGURE 1 Images of the C. elegans germline and our model. (A) Microscopy images of the germline. (B) The complete physical model of the germ cells. Cells in the distal tip undergo mitosis in response to DELTA, and movement out of the distal tip initiates cell behavior shifting to meiosis and entry into pachytene stage I (orange). RAS activation in the pachytene region causes progression through pachytene and entry into diplotene (green). In the bend region, germ cells progress into diakinesis (blue), begin oogenesis, and move into the proximal arm. The first 150 oocytes are fertilized at the end of the proximal gonad arm and removed from the simulation. Instead of progressing through diplotene/diakinesis, approximately half of the germ cells undergo apoptosis. After fertilization has ceased, the tube becomes blocked and oocytes cease moving. (C) A simplified model of fate progression in germline cells. DELTA activates NOTCH, triggering mitosis. Once a cell moves out of a DELTA-rich environment, cells enter pachytene where RAS activation causes pachytene exit and entry into diplotene or apoptosis. As RAS is downregulated, cells progress into diakinesis. Finally, the major sperm protein (MSP) induces oocyte maturation in the most proximal oocyte by reactivating the RAS/MAPK pathway.
Qualitative networks A qualitative network (QN) is formally defined as follows: a QN, Q = (V, T, N), is described by a set of m variables V = [v1, …, vm], which can each have integer values in the range [0, …, N] and are each associated with a single target function T = [T1, …, Tm]. The value of each variable corresponds to the quantity of a substance in the cell, or the activity of a substance in the cell (i.e., 0 = off, N = maximum activity), and this granularity can differ between variables. For example, a variable representing VAB1 can have a value of 0 or 1, representing the off-/on-states of the protein. 
The target function associated with each variable describes how the quantity or activity of the substance varies gradually based on its activators and inhibitors. For example, in the model the function of NOTCH is dependent on the value of the DELTA variable: if DELTA is equal to 1, the value of NOTCH in the next step is equal to 1. The default target function is calculated from the difference between the average of the activating inputs and the average of the inhibiting inputs (i.e., the average difference). A state of the network s is a value for all V in Q, and all states are considered initial states. The update function for a QN is defined as follows from state s = (d1, d2, …, dm). Here, each d is the value of a variable at a point in time, i.e., a state of the system is an assignment of a single value to each variable. The next value of a variable v with current value dv is dv + 1 if Tv(s) > dv and dv < N; it is dv − 1 if Tv(s) < dv and dv > 0; and it is dv otherwise. That is to say, for a specific variable, the next state of that variable is either the same, one higher, or one lower. If the evaluated target function (which takes the values of specific variables from the state of the system s, hence Tv(s)) is higher than this value, and this value is lower than the upper bound, the next state of the variable is this value increased by 1. If the evaluated target function is lower than this value and this value is higher than the lower bound, the next state of the variable is this value reduced by 1. In all other circumstances, the next state of the variable is equal to the previous state. All variables update concurrently. Thus, all executions end in a cycle of states that are visited infinitely often. If all executions end in a single cycle, and that cycle is of length 1 (i.e., T(s) = s, therefore s′ = s), we conclude that the network is stabilizing. The final state in a stabilizing network is known as the stable state. The QN model was developed in the BioModelAnalyzer (BMA) (37). Brownian dynamics Physical simulations were performed using a Brownian dynamics simulation (36). Cells are considered to have no long-range interactions. Short-range interactions are modeled as harmonic repulsion when cell boundaries overlap (with spring constant 36 pN/mm, estimated based on experimental data from fibroblasts (38)). Germ cell positions are updated according to the Brownian dynamics equation x′ = x + dt·F·(g/m) + sqrt(2·kB·T·dt·(g/m))·r, where dt is the time step, F is the sum of the forces experienced by a cell, g is the friction coefficient, m is the mass, kB is the Boltzmann constant, T is the temperature, and r is Gaussian-distributed noise with a mean of 0 and a standard deviation of 1 (taken from the GROMACS manual (39,40)). Temperature was set at 298 K, and the friction coefficient (g) was set to 5 fs (estimated from simulation). Based on the density of an Escherichia coli cell, particles were estimated to have a density of 1.3 pg/mm³, and cell mass (m) was calculated from the cell radius and density. A variable timestep was used with a maximum of 3 s. Hybrid updates Every 0.3 s, both the physical model and the QN for each cell are updated. QNs are updated based on the location of the associated particle in space, or the amount of (physical) time that a variable in the QN has been in a certain state. If the particle is in one of a set of defined regions, the value of a single variable is changed. In the germline model, three regions are defined as DELTA, Growth factor, and Sperm, which represent the Notch activating region, the RAS activating region, and the fertilization region. 
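To make these update rules concrete, the sketch below implements the synchronous QN step and the region-based hybrid update in Python. The dictionary-based interface is our own simplification (it is not the BMA's API), a single upper bound N is used even though granularity can differ between variables, and the Region type with a contains method is a hypothetical placeholder.

```python
def qn_step(state, target_functions, upper_bound):
    """One synchronous update of a qualitative network (QN).

    state: dict mapping variable name -> integer value in [0, upper_bound]
    target_functions: dict mapping variable name -> callable T_v(state)
    """
    next_state = {}
    for v, value in state.items():
        target = target_functions[v](state)
        if target > value and value < upper_bound:
            next_state[v] = value + 1      # move one step towards the target
        elif target < value and value > 0:
            next_state[v] = value - 1
        else:
            next_state[v] = value          # at the target, or clamped at a bound
    return next_state


def hybrid_ligand_update(qn_state, position, regions):
    """Set ligand variables from the particle's position (e.g. the DELTA, Growth factor and Sperm zones)."""
    for ligand, region in regions.items():
        qn_state[ligand] = 1 if region.contains(position) else 0
    return qn_state
```

Iterating qn_step until a state repeats reproduces the behaviour described above: every execution ends in a cycle, and a cycle of length 1 (T(s) = s) is the stable state.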
An alternative model exchanged the RAS activating region with a timed activation and downregulation of RAS (represented by a transition of the Boolean variable from 1 to 0) based on time spent in the meiotic and differentiating states. Physical particles are updated based on both the state of specific variables in the QN and the physical environment. Cells can grow, shrink, die, or divide. Cells that divide replace the parent cell with two new particles and QNs. Daughter QNs have the same state as the parent QN, and the new particles have the same volume as the parent particle and are both displaced along a defined orientation vector (a property of the parent particle), which crosses the center of mass of the parent particle. Cell growth is modeled as a linear increase in cell radius over time. Cells that both grow and divide grow linearly until they reach a defined maximum size threshold, after which they are replaced by two new cells whose summed volume is equal to that of the parent. The axis of division is initially assigned to cells arbitrarily, and changes direction over time due to Brownian motion. Variability in cell division length is modeled by randomly modifying the size threshold for individual cells according to a normal distribution with known properties. Cell shrinking occurs as a linear reduction in cell radius over time, and cell death occurs once cells reach a defined minimum size. The probabilities of random events (e.g., cell shrinking and cell death, see below) are calculated for a given timestep from a user-defined period of time, and the user-defined probability of that event occurring in that time period. Visualization Cell positions and sizes were visualized using the software VMD (41) for analysis. States of individual cells were visualized with simulation trajectories in VMD using TCL scripts. Logs of births and deaths were converted into Newick format and visualized as phylogenies using the programming language R (http://www.r-project.org/). Plots of rates of mitosis, apoptosis, and fertilization show that the model predicts fertilization rates within an order of magnitude of the experimentally observed values (Fig. S1 in the Supporting Material). RESULTS AND DISCUSSION A hybrid model for germline development in C. elegans In the germline, mitotic cell divisions induced by NOTCH signaling in the distal region generate forces that drive cells away from the distal tip toward the bend region where oocyte differentiation is initiated. These forces initially drive a front of cells out of the distal tip zone, causing the cells to become meiotic as they move along the tube. Activation of RAS/MAPK (10) signaling in late pachytene as cells approach the bend region triggers entry into diplotene (Fig. 1). As cells move into the bend region, MAPK is downregulated and the germ cells enter diakinesis and develop into cellularized oocytes. However, at least 50% of all germ cells undergo apoptosis instead of forming oocytes. As the oocytes reach the proximal end of the gonad arm, a sperm-derived signal induces oocyte maturation by reactivating the RAS/MAPK pathway in the proximalmost oocyte. Finally, the mature oocytes exit the gonad through the spermatheca, where they are fertilized, and enter the uterus. To capture the different steps of germ cell development, cells in the hybrid model have two components: a physical particle described using Brownian dynamics, and a signaling state described using qualitative networks (Fig. 3). 
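To make the physical half of each cell concrete, the sketch below follows the position-update equation quoted earlier and the growth, division and shrinking rules just described; the cell container, thresholds and force inputs are illustrative placeholders rather than the published implementation.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K


def brownian_step(position, force, g_over_m, temperature, dt, rng):
    """x' = x + dt*F*(g/m) + sqrt(2*kB*T*dt*(g/m))*r, with r drawn from a unit Gaussian."""
    r = rng.standard_normal(3)
    return (position + dt * force * g_over_m
            + np.sqrt(2.0 * KB * temperature * dt * g_over_m) * r)


def physical_update(cell, dt, growth_rate=0.01, division_radius=2.5, death_radius=0.5):
    """Grow, divide, shrink or die, depending on the cell's signalling state (illustrative)."""
    if cell["mitotic"]:
        cell["radius"] += growth_rate * dt                 # linear growth of the radius
        if cell["radius"] >= division_radius:
            # replace the parent with two daughters of equal, halved volume
            daughter_radius = cell["radius"] / 2.0 ** (1.0 / 3.0)
            return [dict(cell, radius=daughter_radius), dict(cell, radius=daughter_radius)]
    elif cell["shrinking"]:
        cell["radius"] -= growth_rate * dt                 # linear shrinking before apoptosis
        if cell["radius"] <= death_radius:
            return []                                      # cell death: remove from simulation
    return [cell]
```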
The dynamics of a cell are controlled by the physical forces exerted on the particle and Brownian motion, arising from the collision of atoms with particles (also known as random thermal motion), which vary depending on the mass of the particle. The particles additionally have an internal orientation, which is updated by thermal motion over the course of the simulation and dictates the axis of division. The walls of the gonad and the syncytium are modeled as static particles, forming a barrier, which ensures cells form a monolayer until the end of the distal arm. The gonad structure itself is capped at the proximal end of the gonad, where the arm would lead to the uterus. The size and shape of this structure was based on experimental microscopy images. Simulations start from a single cell in the distal tip region, and run for 21 days, representing the lifetime of the worm. The signaling state of the initial cell is an arbitrarily selected state, and changes according to the presence of local ligands and the QN formalism. When the gonad is filled with germ cells, it contains the expected number of germ cells (~1000) (Fig. 1 B). Communication between the physical and signaling models occurs through an interface update. This interface update consists of a physical update, where the state of the QN model changes the property of the physical model, and an executable update, where properties of the physical particle change the executable model. In our model, three signaling regions are defined for the executable update. These are cuboid sections of space, representing external ligands, which alter individual variables in the executable model, if a cell enters them. These three regions represent the presence of DELTA in the distal tip region (the DELTA zone); the presence of a RAS activating ligand near the end of the distal arm, before the bend (the RAS zone); and the presence of sperm/MAPK activating ligands at the end of the proximal arm (the Fertilization zone) (Fig. 1, B and C). In the physical update, cells may grow, divide, and die. Cells that are in mitosis grow over a period of 20 h, until they divide into two smaller cells, which each inherit the parent cell's signaling state. At least 50% of cells undergo apoptosis, and are therefore removed from the simulation. Two distinct mechanisms of cell death are explored: a single-step model, where cells randomly die and are instantly removed from the simulation; and a multistep model, where cells shrink to a threshold size before death and removal from the simulation. Cells that leave the RAS factor region and have not entered apoptosis grow if the pressure from external forces on the cell is below a defined threshold. Finally, the first 150 oocytes that enter the maturation state are removed from the simulation at the proximal gonad end to represent oocyte fertilization. Predictions arising from the model Mixing of mitotic stem cells avoids clonal dominance Clonal dominance is the process by which single cell lineages come to dominate a population of cells. This plays a role in the development of cancers, where single mutated stem cells reproduce and eventually make up the majority of the population of growing cells, increasing the likelihood of further mutation and ultimately tumor development. In the case of germline stem cells, clonal dominance would be detrimental because it would reduce genetic diversity and propagate harmful mutations. 
In human systems, the accumulation of harmful mutations that may result from clonal dominance would be expected to increase the likelihood of cancers based on the Vogelstein model of tumor development (42). Given the low likelihood of a cell entering carcinogenesis, it is reasonable to propose that mechanisms exist to prevent this, although such mechanisms are not presently known. Through our model, however, the lineages of the cells can be tracked and plotted, either as a phylogeny (Fig. 4), or visualized in three-dimensional space (Fig. 5 and Movies S1 and S2 in the Supporting Material), allowing us to analyze this behavior directly. Examination of these graphics highlighted several unexpected emergent features. Firstly, whole branches defined after a limited number of divisions can be seen to stop dividing and either die by apoptosis or be fertilized. However, the stem cell population at the end of a 21-day simulation consisted of a handful of separate branches, separated by up to 13 generations. This would suggest that the probability of any single cell coming to dominate the germline is relatively low, but that the pool of stem cells remains relatively diverse (compared to the eight divisions that are sufficient to fill the distal tip). Closer examination revealed that, while all germ cells in the simulation are undergoing thermal motion, the type of motion varies along the length of the gonad. Mitotic cells undergo greater lateral motion around the wall of the tube, while other cells move almost exclusively along the tube's length (Fig. 5 B). This arises from the randomized orientation of the mitotic cleavage planes due to Brownian motion, combined with the forces generated in mitosis, which allows lateral motion along the vector of the cell orientation. Cells that enter meiosis are driven by forces along the gonad in a single direction, and so undergo less lateral movement. We propose that this increased lateral motion effectively mixes the stem cell population, and that this mixing in turn acts as a barrier to clonal dominance. A deleterious mutation in a single germ cell, which does not alter division rate, is therefore unlikely to dominate the stem cell niche due to this thermal mixing, making the genome more robust to mutagenesis. Apoptosis reduces cellular flow by killing small cells Despite the importance of homeostasis in germline development, the precise purpose and mechanism of apoptosis in this system remains unclear. Loss-of-apoptosis mutations, such as ced-3 loss-of-function alleles, lead to relatively mild phenotypes and no apparent overgrowth of germ cells (43). Young (≤14-day-old) loss-of-apoptosis mutants show normal morphology of the germline. In contrast, older apoptosis mutants, in which sperm supplies have been exhausted and oocytes cannot leave the proximal gonad arm, demonstrate an abnormal gonad morphology. While gonads of old wild-type animals still contain single-file, large stacked oocytes in the proximal arm, apoptosis mutants contain many smaller oocytes, tightly packed in multiple rows in the proximal arm (Fig. 6 A). In our model, MAPK (represented by a Boolean value) directly activates a chain of proteins, which leads to apoptosis (also represented by a Boolean value). 
It has been observed that the relationship between MAPK and apoptosis is complex (19); MAPK is a hub for a number of signaling networks, and is known to be activated by other pathways (for example, in response to DNA damage (44,45)) and regulated by additional components (such as GLA-3 (18)). Moreover, it is not known if apoptosis is initiated after exit from the pachytene by an unknown signal or directly by RAS/MAPK signaling. Our executable model of cell signaling here does not aim to reproduce all the complex quantitative relationships between MAPK and its inputs, but rather the signal transduction process in the worm. However, modifying our model to make cell fate rather than MAPK activity drive apoptosis does not alter the observed behaviors. In the model, fate determination is driven by changes in gene expression, which is driven by RAS/MAPK activity. Therefore, a modification that causes MAPK-driven fate changes simply adds more dependencies between MAPK and apoptosis, and does not change the relationship. Analysis of the loss-of-apoptosis scenario highlighted the need for a negative feedback to limit mitotic growth. Early models of mitotic division allowed cells to divide without restriction. In the absence of fertilization (i.e., in female animals) and without apoptosis, cells would continue to divide despite the increasing overlap and forces between them, eventually leading to such a high pressure that cells would move through the gonad wall and rupture the gonadal tube. To prevent this, we included a negative feedback loop that stops mitotic growth if the pressure experienced by the cell exceeds a defined threshold. Our model accurately reproduces the observed dynamic morphology of cell-death-defective (ced) mutants (Fig. 6, A and B). Once the germline is full of cells and fertilization has started, the germ cells can be seen to adopt a wild-type-like distribution of cell sizes, with the bend and proximal arm of the gonad filled with large, single-file oocytes. As mitotic divisions continue in the absence of apoptosis (while fertilization is ongoing), the forces generated are sufficient to force multiple germ cells into the bend. While repacking of the cells allows for some of them to reorder in young worms and grow to fill the tube, over time this leads to the influx of a larger number of small cells around the bend of the gonad and into the proximal arm. This ability of the cells to repack in three dimensions explains the distinct changes in morphology that occur as a result of the flow-reducing effects of apoptosis.
FIGURE 5 Dynamics of the stem cell population. (A) At 3.5 days, every cell is given a unique color and all descendants of that cell retain the same color. In the model, it takes ~18 days for ancestors of one of these cells to exclusively dominate the distal tip (roughly 22 generations). In principle, the descendants of a single cell could dominate within eight generations (7 days). (B) The vector field of average cellular motion across 21 days (plotted as blue spikes). (White) Cross section of the gonadal wall; (blue box) distal tip region. In the mitotic region, the forces generated by cellular division cause cells to move randomly, which in turn causes the averages to be small and/or directionless. Cells in the pachytene stage and at the edge of the distal tip zone move clearly in a single direction, driven by forces generated through division in the distal tip. 
This observation might intuitively suggest that an increase in the rate of division (or a loss of fertilization) could lead to a morphology that resembles the loss-of-apoptosis phenotype, if the rate of division were able to overcome the rate of death. However, a limited exploration of models with different division rates suggests that the model is robust to changes in mitosis rates. This robustness arises because the packing of growing cells in the bend can act as a barrier to cell movement, forcing cells to remain in the growth-factor region for longer periods of time. This increases the likelihood of the cells dying, which in turn prevents an overflow of cells occurring as a result of mitosis. The technical difficulties in experimentally observing and tracking the fates of individual germ cells over long time periods have made it difficult to study the exact causes of germ-cell death in the absence of external stress. It has been estimated that in the wild-type, roughly one-half of all germ cells die by apoptosis instead of differentiating into oocytes (15,16). Genetic studies have shown that increasing RAS/MAPK activity causes more germ-cell apoptosis, possibly due to an accelerated rate of pachytene exit (17,18,46). The mechanism by which individual cells are selected to survive or die is, however, not known. In our model, we have tested two possibilities of how cells may be selected for apoptosis. In the single-step model, all cells with active RAS have a defined probability of dying at any given timestep. Cell death immediately removes that cell from the simulation (Fig. 6 D). The alternative multistep model defines a mechanism of cell death, where cells either shrink by a user-defined quantity and probability, dying when they reach a size threshold, or remain at the same size. Probabilities are assigned to each cell based on its rate of cell movement (i.e., time spent in the RAS region) to give at least a roughly 50% chance of cell death in a wild-type cell. The effect of loss-of-apoptosis mutations described above is insensitive to the precise mechanism of apoptosis. Each of these two mechanisms has markedly different dynamics (Fig. 6, D-F). In both models, apoptosis reduces the flow rate and slows the development of oocytes relative to the loss-of-apoptosis mutation. In the single-step model for apoptosis, cell deaths are evenly distributed across the RAS activation zone (Fig. 6, D-F). In contrast, the multistep model for cell death leads to the majority of cells dying at the end of the RAS activation region. The location of cell death has been reported to occur specifically at the start of the bend (19,47). We therefore propose that cellular death consists of a multistep process, which increases the likelihood of entering apoptosis and leads to an accumulation of cell deaths at the end of the distal arm near the bend region. We further suggest that this multistep process may be achieved by a process of cell shrinking before apoptosis. For example, the cellularization and growth of germ cells entering oogenesis in the bend region may increase the local pressure and thus result in the shrinking of adjacent cells, driving them into apoptosis. While this is only a single example of a multistep process, it would have the additional impact of selecting cells that enter pachytene for death based on their size when entering meiosis; smaller cells would require fewer steps to reach the apoptosis threshold and therefore would be more likely to die. 
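The contrast between the two selection rules is easiest to see side by side. The sketch below is a minimal rendering under our own assumptions; the probabilities, shrink quantum, and size threshold are placeholders, since the paper leaves them user-defined.

```python
import random

# Placeholder parameters; the paper leaves the exact values user-defined.
P_DEATH_SINGLE = 0.01   # per-timestep death probability (single-step model)
P_SHRINK = 0.05         # per-timestep shrink probability (multistep model)
SHRINK_STEP = 0.2       # size lost per shrink event (arbitrary units)
DEATH_SIZE = 1.0        # size threshold below which a cell dies

def single_step(size: float, ras_active: bool) -> tuple[float, bool]:
    """All RAS-active cells die outright with a fixed probability."""
    died = ras_active and random.random() < P_DEATH_SINGLE
    return size, died

def multi_step(size: float, ras_active: bool) -> tuple[float, bool]:
    """RAS-active cells shrink stochastically and die once below a size threshold."""
    if ras_active and random.random() < P_SHRINK:
        size -= SHRINK_STEP
    return size, size < DEATH_SIZE
```

In the multistep variant the number of shrink events needed before death scales with a cell's size and with the time it has spent in the RAS-active region, which is why deaths cluster at the end of that region near the bend rather than being spread evenly across it.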
As such, this could provide a mechanism for removing germ cells from the population that are underdeveloped. It should be noted, however, that there is, as of this writing, no experimental proof for such a mechanism. Cellular flow in the gonad permits robust compartmentalization of cell fates As the germ cells move along the gonad arms, they pass through four defined states based on their relative location in the different compartments: 1) mitosis in the distalmost region, which is activated by DELTA/NOTCH signaling; 2) entry into the pachytene stage of meiotic prophase I, once NOTCH signaling is terminated; 3a) entry into diplotene, which requires activation of the RAS/MAPK pathway by an unknown signal, followed by entry into diakinesis accompanied by oocyte formation in the turn region; 3b) a pro-apoptotic state as an alternative to entry into diakinesis; and 4) oocyte maturation at the proximal end of the gonad, which involves RAS/MAPK activation by a sperm signal. Therefore, the progression through these distinct fates must at least partially be defined by the changing environments the cells are exposed to in the different compartments of the gonad arms. We can observe this invariant fate progression in our model (Fig. 1 B). This is noteworthy as there are five fate variables in the model, which could potentially exist in 32 unique states. The correct progression observed in the model both demonstrates the model's validity and raises the question of how the alternative potential fates are avoided to achieve invariance in fate progression across all wild-type animals. To address the question of how these compartments arise and what conditions are known to disrupt this invariant pattern, we developed an executable model of the hybrid system. We represent each compartment as a distinct environment for the cell. For example, in the distal tip region, the external signal for DELTA is set to be active, while external ligands activating RAS are inactive. We then test the reachable states of the whole cell (including cell fates) for each compartment, based on the external signals and the reachable states in the previous compartment. In the first environment (the distal tip zone), we find that the model is stable; that is, all initial states eventually lead to a single final, mitotic state. From this state, cells move into a region without DELTA/NOTCH activation. The next environment lacks any external ligands, and analysis in the BMA demonstrates that in principle, when all states are considered initial, there are at least two possible end states, and an oscillation. However, we can prove that cells starting from the stable state in the DELTA/NOTCH active region lead to a single pachytene state, even if cells move repeatedly into and out of the DELTA zone (Fig. 7). This occurs because the stability of the model in the DELTA/NOTCH active state effectively reduces the accessible states when exiting the region. Through this mechanism, the stability of the initial environment can propagate to subsequent environments, and achieves an invariance in fate progression in the animal (Fig. 8). This flow of cells through different compartments therefore allows complex decision-making processes between multiple end states to be encoded in both the protein network and the structure of the gonad. Our executable models also give us an opportunity to test alternative mechanisms of signaling in the gonad. 
While two external ligands are known (DELTA and major sperm protein (MSP)), the external ligand used to initiate RAS/MAPK activation has not been identified. Alternatively, an internal change within the cell may lead to RAS/MAPK activation, induced by timed events such as cell-cycle changes. Using our model, we studied how alternative mechanisms of signaling may achieve this. FIGURE 7 Proving that all accessible states lead to a single fix-point when moving from a stabilizing environment to an unstable environment. Cells at the border between two environments may move back and forth across the boundary due to diffusive motion. The different environments are represented by changes in constant values in the model (different conditions). All accessible states in the two different conditions are enumerated and tested to find whether they lead to the same fix-point or not. This proceeds as follows: in the stable environment, the stable state is identified (shown as A). A simulation from state A in the unstable condition is performed until fix-point B is reached, and the set of states between A and B is collected. For each state, a simulation is performed in the stable condition until A is reached, and the set of states encountered in each simulation is recorded. If these have not been observed previously, simulations are performed in the unstable condition to determine if they reach fix-point B. This is repeated until either no new states are found (i.e., all accessible states have been identified), or an alternative fix-point or cycle is discovered. Early models of the gonad included DELTA/NOTCH signaling, and an external signal that activates RAS/MAPK in the region before the bend. In this rudimentary model, maturation and fertilization of germ cells occurred via a simplistic, disconnected pathway. While this model was capable of reproducing the invariant fate pattern across the gonad, individual cells at the first boundary between the early pachytene region and the RAS-active region transiently showed signs of later differentiation (i.e., late diakinesis). This occurred as a result of thermal motion in the system; cells could briefly move backward across the boundary and therefore initiate diakinesis. This observation in the preliminary model is not supported by available experimental data. Furthermore, while it correctly describes the overall pattern of the fate progressions, it is incompatible with maturation of the cells being driven by a later activation of the MAPK. This is because models that have multiple, alternative fates caused by MAPK activation (entry into diakinesis versus maturation) would need to be bistable, and transient activations of late diakinesis may lead to early maturation of cells before the bend. This possibility is a property of the model, although we are not able to comment on the probability of the event due to intrinsic limitations of the model. Given that this outcome is possible but is never observed in nature, there must exist mechanisms to prevent this from arising. We propose three possible mechanisms by which this may be achieved: the first option is that fate determination at the boundary of the signaling regions must be highly buffered. That is to say, the signaling networks make RAS downregulation a slow process, in order to minimize the probability that backflow causes premature diakinesis. 
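The fix-point check described in the Figure 7 caption is essentially a worklist-based reachability analysis over the discrete states of the qualitative network. The sketch below is our own illustrative rendering under simplifying assumptions: states are hashable values, and step_stable / step_unstable are hypothetical deterministic update functions standing in for the QN update under the two environmental conditions.

```python
def reaches_same_fixpoint(state_a, step_stable, step_unstable, max_steps=10_000):
    """Check that every state reachable while shuttling between the two environments
    still converges to the fix-point B found from A in the unstable condition.
    Returns (True, B) or (False, offending_state)."""

    def run_to_fixpoint(state, step):
        seen = []
        for _ in range(max_steps):
            nxt = step(state)
            if nxt == state:          # fix-point reached
                return state, seen
            seen.append(nxt)
            state = nxt
        raise RuntimeError("no convergence: alternative cycle suspected")

    fix_b, trace = run_to_fixpoint(state_a, step_unstable)
    pending, visited = list(trace), {state_a, fix_b}
    while pending:
        s = pending.pop()
        if s in visited:
            continue
        visited.add(s)
        # relax back toward the stable state A, collecting intermediate states
        _, back_trace = run_to_fixpoint(s, step_stable)
        for t in back_trace:
            if t in visited:
                continue
            end, fwd = run_to_fixpoint(t, step_unstable)
            if end != fix_b:          # an alternative fix-point was discovered
                return False, t
            visited.add(t)
            pending.extend(fwd)
    return True, fix_b
```

The procedure terminates because the state space of the qualitative network is finite; each newly encountered state is either shown to converge to B or reported as a counterexample.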
This buffering mechanism would need to be highly robust to the effects of reduced flow that increase the amount of time a cell resides at the boundary between signaling environments. The discovery of such a buffering mechanism would support this assertion. The second option is that a persistent MAPK activation is initiated by a timed event. And, finally, a third option is that a transient signal initiates a persistent MAPK activation. In the second and third options, MAPK would need to be actively downregulated at the entry into diakinesis. Simulations show that if the exit from diakinesis is timed, small germ cells can escape into the proximal arm. One prediction arising from both the first and third possibilities would be a clearly definable boundary where cells move from one signaling region to another (similar to the boundary observed in the distal tip zone as cells exit mitosis). In contrast, if the initiation of MAPK activation is controlled by a timer (i.e., the second option), cells may enter diplotene at slightly different locations in the same region of the gonad, as the location of entry becomes dependent on the speed of an individual germ cell. In light of this observation, and the role of MSP in activating RAS in oocyte maturation, both the hybrid model and the QN were extended and further refined to explore both the timed activation of RAS in pachytene and MSP-induced activation. A new pathway was added to the QN to allow MSP to activate RAS via VAB1. To model the timed activation, the QN was modified to allow a transient input that causes RAS to become, and remain, active until it is downregulated by a later signal. In the hybrid model, this first transient input is initiated by the amount of time spent in the pachytene, and the subsequent downregulation is caused by entry into the bend. These mechanisms exclude the possibility that any backflow could occur, and as such we propose that RAS is not activated by an external ligand. Such a timed event could be tied to the meiotic cell cycle, which we suggest here as a mechanism for timing RAS activation. We further propose that the subsequent RAS/MAPK downregulation mediated by GAP-1/GAP-3 and LIP-1 is linked to cellularization, as cells separate from the syncytium. CONCLUSIONS The development of tools that model the interface between biological signaling networks and biophysical motion is an important challenge to understanding stem cells and organogenesis in a wide number of systems. Our approach takes two long-standing formalisms for modeling each of these phenomena, and has allowed us to connect them in a single model of oogenesis from stem cells in C. elegans. This allows us to gain insights into the intersection of cellular dynamics and signal transduction, which are inaccessible (as of this writing) by using experimental approaches, and has generated new predictions. Furthermore, we have also shown that a simple physical model in the hybrid model, even one based on limited data, reproduces mutant behavior and predicts plausible physical parameters such as fertilization rates. Our tool and approach could therefore be easily applied and adapted to model a wide range of hybrid cellular phenomena. Finally, through the development of executable models of the hybrid model, we have shown a future route to allow the development of the organ to be integrated in yet larger systems. 
The use of a detailed executable model in combination with a physical model of the organ structure could show how other organs and bodies within organisms interact and generate emergent properties. The future development and application of these hybrid models therefore offer unique opportunities for understanding complex developmental processes. The predictions generated by this model demonstrate the type of unique information that can be offered by such hybrid approaches. The avoidance of clonal dominance by thermal mixing of the germ-cell population may represent an important mechanism for avoiding tumor development. While cancer is a widespread disease, the absolute likelihood of any individual cell, out of trillions, progressing to become cancerous is low, raising the question of why cancers are not more common given the number of opportunities to develop. The mixing of stem-cell populations would further minimize this risk by reducing the accumulation of mutations in populations. Our suggested multistep shrinkage mechanism for apoptosis gives two potentially new insights into the role and purpose of cell death in the gonad: it serves to reduce the flow rate, and it creates competition between stem cells, selectively killing smaller cells. This model is consistent with experimental evidence showing that increased RAS activity leads to smaller oocytes and increased cell death, in addition to other evidence presented here. Finally, the control of cell fate through cellular flow offers an explanation for a well-characterized phenomenon in the germline, allowing for future experimental examination. Together, these insights into stem-cell development in the germline demonstrate the power of our approach, and show how hybrid modeling may allow phenomena over multiple time- and length-scales to be successfully combined.
Topotactically induced oxygen vacancy order in nickelate single crystals The strong structure-property coupling in rare-earth nickelates has spurred the realization of new quantum phases in rapid succession. Recently, topotactic transformations have provided a new platform for the controlled creation of oxygen vacancies and, therewith, for the exploitation of such coupling in nickelates. Here, we report the emergence of oxygen vacancy ordering in Pr$_{0.92}$Ca$_{0.08}$NiO$_{2.75}$ single crystals obtained via a topotactic reduction of the perovskite phase Pr$_{0.92}$Ca$_{0.08}$NiO$_{3}$, using CaH$_2$ as the reducing agent. We unveil a brownmillerite-like ordering pattern of the vacancies by high-resolution scanning transmission electron microscopy, with Ni ions in alternating square-pyramidal and octahedral coordination along the pseudocubic [100] direction. Furthermore, we find that the crystal structure acquires a high level of internal strain, where wavelike modulations of polyhedral tilts and rotations accommodate the large distortions around the vacancy sites. Our results suggest that atomic-resolution electron microscopy is a powerful method to locally resolve unconventional crystal structures that result from the topotactic transformation of complex oxide materials. I. INTRODUCTION In transition metal oxides, strongly correlated valence electrons can couple collectively to the lattice degrees of freedom, which can lead to a variety of emergent ordering phenomena, including exotic magnetism, multiferroicity, orbital order, and superconductivity [1]. In oxides with the perovskite structure, high flexibility and tolerance to structural and compositional changes enable the controlled exploitation and manipulation of the emergent properties [2]. Oxygen vacancies, for example, can radically alter the electronic states in materials, and in turn, suppress or enhance emergent phases via charge compensation and/or structural phase transitions [3][4][5]. Understanding the formation of oxygen vacancies and their impact thus provides promising prospects for exploring new physical properties and potential future technological applications. A prototypical example for correlated transition-metal oxides is the family of perovskite rare-earth nickelates, RNiO 3 (R = rare-earth ion), exhibiting a rich phase diagram including metal-to-insulator and antiferromagnetic transitions [6][7][8]. For R = Pr and Nd, these transitions occur concomitantly with a breathing distortion of the NiO 6 octahedra and a disproportionation of the Ni-O hybridization [9][10][11][12]. As a consequence, the material family exhibits a pronounced structure-property relationship [13][14][15][16][17][18][19] and a sensitivity to oxygen vacancy formation, which can modify the surrounding Ni-O bonds and the nominal 3d 7 electronic configuration of the Ni 3+ ions [20]. Notably, an extensive oxygen reduction of the perovskite phase towards Ni 1+ with a cuprate-like 3d 9 electronic configuration was recently realized via topochemical methods in Nd 0.8 Sr 0.2 NiO 2 thin films, yielding the emergence of superconductivity [21]. Furthermore, superconductivity was also observed in topotactically reduced films with R = La and Pr [22][23][24], as well as for various Sr-substitution levels [25,26] and substitution with Ca ions [27]. 
Since these reduced nickelates with the infinite-layer crystal structure are nominally isoelectronic and isostructural to cuprate superconductors, the degree of the analogy between the two material families is vividly debated [28][29][30][31][32][33]. Moreover, vigorous efforts are ongoing to realize superconductivity not only in thin films, but also in polycrystalline powders [34,35] as well as in single-crystalline samples [36,37], while an improved understanding of the topotactic reduction process between the perovskite and infinite-layer phase is also highly desirable. In particular, the reduction involves various intermediate (metastable) phases, in which the oxygen vacancy ordering patterns and the nature of the emergent phases have not yet been clarified comprehensively. For instance, extensive experimental and theoretical studies [38][39][40][41][42][43][44] were performed on oxygen deficient LaNiO 3−δ with 0 < δ ≤ 0.5, suggesting a transition from a paramagnetic metal to a ferromagnetic semiconductor and an antiferromagnetic insulator as a function of increasing vacancy concentration [45][46][47]. For δ ≈ 0.5, neutron powder diffraction [38] revealed that the parent perovskite crystal structure with uniform NiO 6 octahedra changed to a structure with sheets of NiO 6 octahedra and square-planar NiO 4 units arranged along the pseudocubic [100] direction [38,39], involving a $2a_p \times 2a_p \times 2a_p$ reconstruction of the parent pseudocubic unit cell ($a_p$ is the pseudocubic lattice parameter). Yet, the detailed crystal structure for the case δ ≈ 0.25 is not known, although an electron diffraction study suggested that it involves a $2\sqrt{2}a_p \times 2\sqrt{2}a_p \times 2a_p$ reconstructed supercell [48]. Moreover, for compounds with R = Pr and Nd, even less understanding of the oxygen deficient phases exists. Metastable structures with ferromagnetic order were initially identified for δ ≈ 0.7, with x-ray diffraction data indicating a $3a_p \times a_p \times 3a_p$ supercell that possibly comprises two sheets of NiO 4 square-planar units connected with one sheet of NiO 6 octahedra [49]. In a subsequent neutron powder diffraction study it was suggested, however, that the metastable phase of the Pr compound rather corresponds to δ ≈ 0.33 with a $\sqrt{5}a_p \times a_p \times \sqrt{2}a_p$ reconstruction and one sheet of NiO 4 square-planar units connected with two sheets of NiO 6 octahedra [50]. Here, we use atomic-resolution scanning transmission electron microscopy (STEM) together with electron energy-loss spectroscopy (EELS) to investigate the oxygen vacancy formation occurring in a Pr 0.92 Ca 0.08 NiO 3−δ single crystal upon topotactic reduction. We resolve the chemical composition and the atomic-scale lattice of the crystal, identifying a $4a_p \times 4a_p \times 2a_p$ reconstructed superstructure with a highly distorted Pr sublattice. We find that the oxygen vacancy ordering pattern corresponds to a brownmillerite-like structure with a two-layer-repeating stacking sequence of NiO 6 octahedra and NiO 5 square pyramids, suggesting an oxygen deficiency of δ ≈ 0.25. Meanwhile, quantification of the octahedral tilts and Ni-O bond angles reveals distinct periodic wavelike patterns of polyhedra coordination in different layers due to the oxygen vacancies. 
These results are markedly distinct from previous reports on reduced rare-earth nickelates and provide an atomic-scale understanding of the moderately oxygen deficient structure with δ ≈ 0.25, which is one of the metastable phases occurring during the topotactic reduction process towards the infinite-layer phase of the superconducting nickelates with δ = 1. II. METHODS Single crystals of perovskite Pr 1−x Ca x NiO 3 were synthesized under high pressure and high temperature. Specifically, a 1000 ton press equipped with a Walker module was used to realize a gradient growth under a pressure of 4 GPa, executed with spatial separation of the oxidizing KClO 4 and the NaCl flux, similarly to the previous synthesis of La 1−x Ca x NiO 3 single crystals [36]. The precursor powders were weighed out according to a desired composition of Pr 0.8 Ca 0.2 NiO 3 , although the incorporated Ca content in the obtained Pr 1−x Ca x NiO 3 was lower and ranged from x = 0.08 to 0.1. The Pr 1−x Ca x NiO 3 single crystals were reduced using CaH 2 as the reducing agent in spatial separation from the crystals. The duration of the reduction was eight days, using the same procedure and conditions as previously described for the reduction of La 1−x Ca x NiO 3 crystals [36]. Single-crystal x-ray diffraction (XRD) was performed on crystals before and after the reduction. The technical details are given in the Supplemental Material [51]. Electron-transparent TEM specimens of the sample were prepared on a Thermo Fisher Scientific focused ion beam (FIB) using the standard lift-out method. Samples with a size of 20 × 5 µm 2 were thinned to 30 nm with 2 kV Ga ions, followed by a final polish at 1 kV to reduce effects of surface damage. HAADF, ABF and EELS data were recorded on a probe aberration-corrected JEOL JEM-ARM200F scanning transmission electron microscope equipped with a cold-field emission electron source and a probe Cs corrector (DCOR, CEOS GmbH), and a Gatan K2 direct electron detector was used at 200 kV. STEM imaging and EELS analyses were performed at probe semiconvergence angles of 20 and 28 mrad, resulting in probe sizes of 0.8 and 1.0 Å, respectively. Collection angles for STEM-HAADF and ABF images were 75 to 310 and 11 to 23 mrad, respectively. To improve the signal-to-noise ratio of the STEM-HAADF and ABF data while minimizing sample damage, a high-speed time series was recorded (2 µs per pixel) and was then aligned and summed. STEM-HAADF and ABF multislice image simulations of the crystal along the [100] and [101] zone axes were performed using the QSTEM software [52]. Further details of the parameters used for the simulations are given in the Supplemental Material [51]. III. RESULTS In thin films of infinite-layer nickelates, the highest superconducting transition temperatures are realized through a substitution of approximately 20% of the rare-earth ions by divalent Sr or Ca ions [25][26][27]. Accordingly, we have prepared the precursor materials for the synthesis of single crystals with a nominal stoichiometry of Pr 0.8 Ca 0.2 NiO 3 . Using a high-pressure synthesis method [36], we obtain Pr 1−x Ca x NiO 3 crystals with typical lateral dimensions of 20-100 µm. Yet, an analysis of the as-grown crystals by scanning electron microscopy (SEM) with energy-dispersive x-ray (EDX) spectroscopy indicates that the incorporated Ca content lies between x = 0.08 and 0.1 (see Fig. S1 in the Supplemental Material [51]). 
This discrepancy with the nominal Ca content suggests that different growth parameters, such as an increased oxygen partial pressure, might be required to achieve stoichiometric Pr 0.8 Ca 0.2 NiO 3 crystals. By contrast, the employed parameters yielded Ca substitutions as high as x = 0.16 in the case of La 1−x Ca x NiO 3 , as determined on the as-grown crystal surface by EDX [36], which exhibits a less distorted perovskite structure [6]. As a next step, we use single-crystal XRD to investigate a 20 µm piece that was broken off from a larger as-grown crystal. The acquired XRD data indicate a high crystalline quality (see Fig. S2 of the Supplemental Material [51]) and can be refined in the orthorhombic space group Pbnm, which is consistent with PrNiO 3 single crystals and polycrystalline powders [53,54]. The refined Ca content of the crystal is 8.6%. The refined lattice parameters and atomic coordinates are presented in the Supplemental Material [51]. Furthermore, we find from the refinement that the investigated crystal piece contains three orthorhombic twin domains, with volume fractions of 0.95/0.04/0.01. Subsequently, we carry out the topotactic oxygen reduction on a batch of Pr 1−x Ca x NiO 3 single crystals for eight days, using the same conditions as previously described for the reduction of La 1−x Ca x NiO 3 crystals [36]. Single-crystal XRD measurements on a reduced 20 µm crystal indicate a significant transformation of the crystal structure after eight days. However, a strong broadening and the resulting overlap of the Bragg reflections in the XRD maps prohibit a structural refinement and determination of the symmetry by this method (see Fig. S2 [51]). Hence, in order to investigate the topotactic transformation of the crystal lattice on a local scale, we turn to atomic-resolution STEM imaging. We examine a reduced Pr 0.92 Ca 0.08 NiO 3−δ crystal with lateral dimensions of ∼80 µm. A top-down view of the crystal is shown in Fig. 1(a). Identical TEM specimens were prepared from a region of the crystal without visible surface defects caused by the FIB process. Figure 1(b) shows a low-magnification STEM high-angle annular dark-field (HAADF) image. As a first characteristic of the topotactically reduced crystal, we note that single-crystalline regions in the specimen are separated by grain boundary (GB) like regions, with a width ranging from a few tens to a hundred nanometers and a length ranging from a few nanometers to micrometers. The GBs exhibit mostly an amorphous structure [Fig. 1(b) inset and Fig. S3], showing dark contrast in the images that originates from diffuse scattering [55] (see Fig. S3 for more details [51]). The amorphous character of the GBs is also confirmed in EELS measurements of elemental distribution profiles across a GB, which show a reduced EELS intensity of all cations due to the deteriorating signal in structurally disordered regions (Fig. S4 [51]). The presence of GBs can be a consequence of the topotactic reduction. Alternatively, the GBs might have formed already during the high-pressure growth of the perovskite phase. Zoom-in STEM-HAADF images from areas on either side of a GB [orange and blue squares in Fig. 1(b)] are displayed in Figs. 1(c) and 1(d). The same lattice structure and orientation are observed in the different crystalline domains near the GB. A typical domain size in the crystal is found to be hundreds of nanometers. 
STEM-HAADF imaging over a larger field of view of hundreds of nanometers does not reveal any regions with defects or impurity phases inside one crystalline domain, suggesting that each domain retains a high crystalline quality and a stable structural phase after the reduction process (see Fig. S3 [51]). As shown in Fig. 2(b), there is an approximately 8% difference between the distances of the FFT maxima in reciprocal space along the [002] and [020] axes (blue arrows on the FFT patterns). Based on the preference of vacancy formation on the apical oxygen sites [56], this indicates a contraction along the [002] axis due to the removal of apical oxygen atoms following the topotactic reduction process. Between the FFT maxima, satellite spots corresponding to the superstructure periodicity appear. The wave vector of the superstructure reflections is q = 1/4 reciprocal lattice units along the [020] axis and q = 1/2 along the [002] axis, indicating the formation of a $4a_p \times 4a_p \times 2a_p$ superstructure of the perovskite [a = b = 16.56(32) Å, c = 7.60(14) Å]. In STEM-HAADF images, the contrast is approximately proportional to $Z^{1.7-2}$ (where Z is the atomic number) [57,58], so Pr columns (Z = 59) appear bright and Ni columns (Z = 28) exhibit a darker contrast. The fourfold superstructure is seen as periodic changes in the intensity of the STEM image. As shown in the zoomed-in image [Fig. 2(c)], half of the Pr atoms along the [100] projection appear elongated, while the other half remain round and undistorted. Focusing on the first two Pr rows, the atoms exhibit alternating round and vertical oval shapes, while the third and fourth rows exhibit alternating round and horizontal oval shapes. Considering the projection in our image, the distortions stem from displacements of Pr columns. Figure 2(d) indeed reveals straight and zigzag line patterns of Pr atoms alternating along the [101] direction. A closer look at the atomic arrangement in Fig. 2(f) shows that the zigzag line on the fourth column appears to be a mirror of the second, forming a four-layer repeat sequence (ABAC stacking). EELS elemental maps of Pr, Ca and Ni obtained from the crystal are displayed in Figs. 2(g)-2(i) using the Pr M 5,4 , Ca L 3,2 , and Ni L 3,2 edges, respectively. The maps show that the Pr, Ca and Ni contents are homogeneous over the structure. Integrated concentration profiles of Pr and Ca confirm the A-site cation stoichiometry and uniform distribution (see Fig. S5 [51]). This suggests that the strong distortions of the Pr lattice are likely not rooted in an A/B-site deficiency or ordering, but rather originate from other factors such as oxygen vacancy formation. To gain insights into the detailed atomic structure, high-resolution STEM annular bright-field (ABF) images were acquired for imaging lighter elements such as oxygen. Figures 3(a) and 3(b) show the simultaneously acquired HAADF and ABF images of the crystal along the [100] projection. The distribution of oxygen ions, including filled and empty apical oxygen sites, is clearly visible by ABF imaging. The corresponding inverse intensity profiles extracted from different layers are displayed in Fig. 3(c). The absence of image contrast at every fourth oxygen site of the Pr-O layers confirms the vacancy ordering (profiles 1 and 3), while the oxygen content remains constant in the Ni-O layers (profile 2). By overlaying the yellow arrows, the ordering pattern of oxygen vacancies can be clearly visualized. 
Half of the NiO 6 octahedra lose one apical oxygen atom, and the remaining five oxygen atoms in a pseudocubic unit cell form a NiO 5 pyramid. Notably, a square pyramidal coordination of the Ni ion is uncommon in nickel-oxide based materials. The few compounds with Ni 2+ ions in a similar five-fold coordination include KNi 4 (PO 4 ) 3 and BaYb 2 NiO 5 [59]. However, none of these compounds exhibits a perovskite-derived structure. Our investigation therefore reveals the existence of a new structural motif, in marked distinction to the NiO 4 square-planar coordination in previous reports on oxygen-deficient perovskite nickelates [44,60]. A magnified view of the ABF image is shown in Fig. 3(d). The apex-linked pyramids, as "bow-tie" dimer units, form a one-dimensional chain running along the [001] direction. Such a configuration is consistent with the brownmillerite-type structure $A_nB_nO_{3n-1}$ with n = 4, corresponding to the $A_4B_4O_{11}$ phase: Layers of apex-linked pyramids are stacked in the sequence ...-Oc-Py-Oc-Py-..., where Oc denotes a layer containing only octahedra (cyan), and Py a layer containing only pyramids (orange). The ...-Oc-Py-Oc-Py-... sequence runs parallel to the [010] axis, so that the Py layers are at the 1/4 (001) and 3/4 (001) planes with a stacking vector 1/2 [001]. In the pyramidal layers, the remaining apical oxygen atoms are located at the center of the apical sites without causing any tilt of the square pyramids. In contrast, the apical oxygen atoms in the octahedral layers tend to shift towards the elongated Pr atoms, leading to large octahedral tilts. A corresponding STEM-ABF image simulation was performed based on the predicted atomic model shown in Fig. 3(e) using the multislice method [52] (see Fig. S6 [51]). The simulated image reproduces well the vacant sites and distortions observed in the STEM measurements, confirming the main crystal structure and the alternating-stacking configuration. The distribution of oxygen vacancies can lead to modifications in bond angles and tilting of the octahedra. Quantitative STEM-ABF measurements were used to precisely examine the atomic structure, including the oxygen positions, along the [101] direction. From the inverse ABF intensity profiles taken in Fig. 4(a), as shown in Fig. 4, the Ni-O bond angles on the octahedral layer exhibit a zigzag modulated pattern with two sub-cell repeats and an average of 161.1°. The overall periodicity of the bond angles and tilt modulations is consistent with the alternating zigzag and straight pattern on the Pr columns along the [101] direction. The simulated ABF image for the predicted model along this viewing direction also agrees well with the STEM-ABF image, confirming the polyhedral distortions in the structure [Fig. 4(e)]. The different amplitudes of the tilts and bond angles between the pyramidal and octahedral layers are the result of the change in oxygen content. This is also revealed in the structure along the [100] projection [Fig. 3(e)], where the NiO 6 octahedra show highly distorted tilts in the octahedral layer, while less distorted NiO 5 pyramids are present in the pyramidal layer, accommodating the lattice to the removal of oxygen during the reduction. IV. DISCUSSION AND CONCLUSION The obtained insights into the vacancy order in the oxygen sublattice and the distortions of the Pr sublattice are compiled in the two schematics in Fig. 4(f), depicting the brownmillerite-like lattice along the [100] and [101] directions investigated in this study. 
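The link between the observed stacking and the oxygen content follows from simple counting: in a brownmillerite-type $A_nB_nO_{3n-1}$ stack, one oxygen per n perovskite formula units is removed, so the deficiency is δ = 1/n. A short worked check for the n = 4 case discussed above:

```latex
% Oxygen content per formula unit for a brownmillerite-type A_nB_nO_{3n-1} stack
\[
  \mathrm{ABO}_{3-\delta}, \qquad \delta = \frac{1}{n}
  \;\Longrightarrow\;
  n = 4:\;\; \delta = \tfrac{1}{4} = 0.25,\qquad
  \mathrm{A_4B_4O_{11}} \equiv \mathrm{ABO}_{2.75}.
\]
% Applied to the present crystal: Pr_{0.92}Ca_{0.08}NiO_{3-\delta} with \delta = 0.25
% gives the nominal composition Pr_{0.92}Ca_{0.08}NiO_{2.75}.
```

This is the same δ ≈ 0.25 inferred independently from the alternating octahedral/pyramidal layer stacking.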
Oxygen vacancies at every fourth apical site in every second row lead to a contraction of the out-of-plane lattice parameter and an ordering pattern with alternating NiO 6 octahedral and NiO 5 pyramidal layers. We note that the STEM-ABF images indicate a nominal structure of Pr 0.92 Ca 0.08 NiO 2.75 based on the oxygen vacancy ordering in one crystalline domain, while a nonuniform variation of the oxygen content can possibly occur due to the presence of GBs. This can be the subject of future work to explore the mechanism and origin of the GBs [61]. Compared to the perovskite structure, the vacancy order reduces the tilt angles and increases the Ni-O bond angles in the pyramidal environment. Also the Pr sublattice is affected by the lack of every fourth apical oxygen ion, and the resulting complex distortion pattern leads to a $4a_p \times 4a_p \times 2a_p$ reconstructed superstructure. The observed distortion of the Pr cation positions and the associated wavelike variation of the surrounding bond angles and polyhedral tilts are highly unusual for perovskite-related materials. Yet, highly distorted A- and B-site cation sublattices were also reported for other topotactically reduced perovskite-related materials, such as CaCoO 2 [62], enabling the realization of phases that might be categorically unattainable by direct synthesis methods. Further studies on Pr 0.92 Ca 0.08 NiO 2.75 are highly desirable to accurately determine the reconstructed atomic positions and the crystallographic unit cell, which is likely larger than the $4a_p \times 4a_p \times 2a_p$ supercell. For instance, high-resolution synchrotron XRD might allow the overlapping structural reflections in our single-crystal XRD maps (Fig. S2 of the Supplemental Material [51]) to be resolved, and therewith a full structural refinement might be achievable. Moreover, future STEM studies on crystals after prolonged topotactic reduction can reveal whether our alternating pyramidal/octahedral structure eventually transforms into a square-planar/octahedral structure with a $\sqrt{5}a_p \times a_p \times \sqrt{2}a_p$ supercell, which was proposed in Ref. 50 for PrNiO 3−δ with δ ≈ 0.33. Notably, our observed oxygen vacancy ordering with apex-linked pyramidal units is distinct from all previously identified RNiO 3−δ lattice structures, which contain stacks of alternating square-planar NiO 4 and octahedral NiO 6 sheets. Moreover, to the best of our knowledge, pyramidal coordination was generally not observed in perovskite-derived Ni compounds to date, whereas it is a common lattice motif in various oxygen-deficient transition metal oxides, including Fe [63,64], Co [65], and Mn [66,67] compounds. In particular, SrFeO 3−δ hosts a variety of oxygen vacancy ordered phases with distinct spin and charge ordered ground states [68,69]. Since a closely similar vacancy ordering pattern as in Fig. 4(e) emerges in SrFeO 3−δ for δ = 0.25 (Sr 4 Fe 4 O 11 ), an exploration of potentially emerging magnetic order in Pr 1−x Ca x NiO 2.75 will be of high interest. In summary, we examined the topotactic transformation of a Pr 0.92 Ca 0.08 NiO 3 single crystal to the oxygen-vacancy ordered Pr 0.92 Ca 0.08 NiO 2.75 phase. The transformed crystal structure contains a $4a_p \times 4a_p \times 2a_p$ supercell and periodic distortions as well as a zigzag pattern of Pr ions along the [100] and [101] directions, respectively. The ordering of the oxygen vacancies on the apical oxygen sites forms one-dimensional chains of bow-tie dimer units of NiO 5 square pyramids. 
These square pyramidal chains run parallel to the [001] direction, connecting with flattened NiO 6 octahedra. Our atomic-scale observation of the systematic lattice distortions and oxygen vacancies underpins an unexpected pyramidal-type brownmillerite-like phase in the nickelates after a topotactic reduction. Our results are instructive for future efforts to gain a comprehensive understanding of the topotactic reduction of rare-earth nickelates and related materials.
Binary Spectrum Feature for Improved Classifier Performance Classification has become a vital task in modern machine learning and Artificial Intelligence applications, including smart sensing. Numerous machine learning techniques are available to perform classification. Similarly, numerous practices, such as feature selection (i.e., selection of a subset of descriptor variables that optimally describe the output), are available to improve classifier performance. In this paper, we consider the case of a given supervised learning classification task that has to be performed making use of continuous-valued features. It is assumed that an optimal subset of features has already been selected. Therefore, no further feature reduction, or feature addition, is to be carried out. Then, we attempt to improve the classification performance by passing the given feature set through a transformation that produces a new feature set which we have named the "Binary Spectrum". Via a case study example done on some Pulsed Eddy Current sensor data captured from an infrastructure monitoring task, we demonstrate how the classification accuracy of a Support Vector Machine (SVM) classifier increases through the use of this Binary Spectrum feature, indicating the feature transformation's potential for broader usage. I. INTRODUCTION Supervised learning in regards to classification builds models of the distribution of given class labels in terms of given predictor variables (or features). The learned models (known as classifiers) can then serve to assign class labels to provided testing instances where the predictor variable values (or features) are known, but the class labels are unknown [1]. Performing classification in such a manner has become a common and vital component in the modern-day use of Artificial Intelligence. Many techniques such as Decision Trees, Discriminant Analysis, Perceptron-based techniques (e.g., Neural Networks), Logistic Regression, Bayesian Networks, Instance-based learning (e.g., Nearest Neighbour Classifiers), Ensemble Classifiers, and Support Vector Machines (SVM) have been developed to learn and perform classification [1], [2], [3], [4]. More recently, deep learning based classification techniques too have been developed [5]. Over-fitting, lack of accuracy, and computation cost are some of the commonly encountered challenges when developing classifiers. It can be understood that over-fitting and computation cost become issues especially when working with high dimensional data. Feature selection (or feature reduction) is a commonly followed practice to overcome the curse of dimensionality. That practice in return does on occasion help alleviate some of the over-fitting and accuracy-related issues as well. Following the literature, one can categorize the methods available for feature reduction as three-fold: (1) Filter methods; (2) Wrapper methods; and (3) Embedded methods [6], [7]. In our paper, we consider the case where an optimal feature selection has already been carried out. That means we focus on a supervised learning classification task that has to be performed with a given set of features, with no further feature reduction or feature addition being allowed. We assume the features to be real and continuous-valued for this paper. Now suppose there is some benchmark accuracy that can be achieved by using the feature set as it is. Then, we ask the question of whether that benchmark accuracy can be overtaken by performing some transformation to the existing feature set. 
We contribute in this paper by answering that question, via introducing a feature transformation we name the "Binary Spectrum feature transformation". The derivation of the Binary Spectrum feature is presented in detail in this paper. Following the derivation, we demonstrate the effectiveness of this Binary Spectrum transformation by benchmarking the accuracy of an SVM-based classification task and overtaking that benchmarked accuracy. Classification is performed on some Pulsed Eddy Current (PEC) sensor data. This dataset has been collected from an automated infrastructure monitoring exercise performed on a ferromagnetic critical water pipe [8], [9]. The class labels are reflective of the thickness of the pipe wall at the points where sensing has been done. The descriptor variables happen to be some real continuous-valued features extracted from the corresponding PEC signals [8], [9], [10], [11]. The Binary Spectrum feature transformation happens to transform a given set of real continuous-valued features to a set containing both continuous-valued and discrete-valued categorical-like data (i.e., binary data). This categorical-like data subset is derived from the original continuous-valued dataset. The rationale behind the evident improvement in classification accuracy resulting from this transformation can be argued to be the effect of this combination of two data types: (1) the natural continuous-valued features; (2) a categorical-like component derived from the natural features. The structure of the paper is as follows: Section II mathematically formulates the problem of improving the accuracy of a given classifier, or a classification task; Section III presents the derivation of the Binary Spectrum feature transformation and presents an algorithm to find best performing classifiers; Section IV presents the effectiveness of the proposed method via a demonstrative example performed on the PEC sensor data. II. PROBLEM FORMULATION We consider a binary classification (i.e., two-class classification) supervised learning problem that has to be solved making use of continuous-valued features. As such, let there be a given binary classifier, trained by the training data $X_t \in \mathbb{R}^{a \times b}$ and $Y_t \in \mathbb{B}^a$, where $\mathbb{B}^a$ denotes an $a \times 1$ vector of binary digits. We make the following assumptions about the training data. Assumption 1: The training features (i.e., $X_t$) are an optimal subset of training features, i.e., no further feature reduction or feature addition is to be done. Assumption 2: The two classes in the set of training labels (i.e., $Y_t$) are evenly (or equally) populated, i.e., there is approximately a 50:50 population ratio between the two classes. The vector $Y_t \in \mathbb{B}^a$ containing the training labels (or the training targets) is given by $Y_t = [\,y_{t1}\ y_{t1}\ \ldots\ y_{t1}\ y_{t2}\ y_{t2}\ \ldots\ y_{t2}\,]^{T}_{1 \times a}$ (1), where $y_{t1} = 0$, $y_{t2} = 1$, and $[\,\ast\,]^T$ denotes the matrix transpose. The corresponding training features contained in $X_t$ ($\in \mathbb{R}^{a \times b}$) are given in (2) as the matrix of elements $x_{tij}$, where $i, j \in \mathbb{Z}^+$ are generic subscript notations with $1 \le i \le a$ and $1 \le j \le b$. Similarly, the testing dataset on which the classifier is to make predictions is given by the corresponding matrices $X_{te} \in \mathbb{R}^{c \times b}$ and $Y_{te} \in \mathbb{B}^c$, with $Y_{te} = [\,y_{te1}\ y_{te1}\ \ldots\ y_{te1}\ y_{te2}\ y_{te2}\ \ldots\ y_{te2}\,]^{T}_{1 \times c}$ (3). Here, $y_{te1} = 0$, $y_{te2} = 1$. With this data, we define the operations $o(X_t) = X_t u$ and $o(X_{te}) = X_{te} u$ with respect to $u \in \mathbb{R}^{b \times b}$; $u$ here is an orthonormal basis of $X_t$. Now suppose a classifier trained with the above defined data (i.e., $o(X_t)$, $Y_t$) is given, and it is denoted as $C$, $C: \mathbb{R}^{d \times b} \to \mathbb{B}^d$. 
This classifier would predict the classes (i.e., 0 or 1) for the testing data $X_{te}$. The prediction output will come in a vector $\hat{Y}_{te} \in \mathbb{B}^c$, obtained as $\hat{Y}_{te} = C(o(X_{te}))$. The vector err is then defined in (6) in order to compute the classification accuracy. Locations of err corresponding to instances where a correct prediction has been made carry zeros. Therefore, we define the total number of zeros in err as $\mathrm{sum}_0$. With that, we define the classification accuracy acc in (7). As such, acc can be represented as a function in the manner of (8). The objective now will be to increase the classification accuracy (i.e., to increase acc) using the same set of features $X_t$ and $X_{te}$, without any reduction or addition of features, respecting Assumption 1. Accomplishing this objective under Assumption 1 is the reason for this paper introducing the Binary Spectrum feature transformation. III. DERIVING THE BINARY SPECTRUM FEATURE The Binary Spectrum transformation is performed by applying a function $f$, $f: \mathbb{R}^{e \times b} \to \mathbb{B}^{e \times bn}$, $n \in \mathbb{Z}^+$, to the features $X_t$ and $X_{te}$ as in (9)-(12). Here, the matrix sizes come out as $a \times b(n+1)$ for $X_{tb}$ and $c \times b(n+1)$ for $X_{teb}$. Also, $f(x_{tij}, n)$ and $f(x_{teij}, n)$, for any $x_{tij} \in \mathbb{R}$ or $x_{teij} \in \mathbb{R}$, is defined in (13): $f$ applies some scaling to $x_{i,j}$, rounds it off to the nearest integer (discussed in the remainder of this section), and produces the binary value of the scaled and rounded $x_{i,j}$ given to $n$ bits. To perform the transformation done by $f$ for a prescribed number of bits (i.e., $n$), we first scale the $x_{i,j}$ values to remain within the two bounds $l_{low}$ and $l_{up}$ defined in (14) and (15). Now consider the example where the training data point $x_{tij}$ is to be scaled. To scale $x_{tij}$, governed by the column subscript of $x_{tij}$, we select the $j$th column of the corresponding training feature matrix $X_t$. The $j$th column, symbolized as $X_{tj|} \in \mathbb{R}^a$, is given in (16). The minimum and maximum values contained within $X_{tj|}$ are denoted as $\min(X_{tj|})$ and $\max(X_{tj|})$, respectively. With those, we define the scaling of any training feature value $x_{tij}$ in (17), where $x_{tijs}$ is the scaled value of $x_{tij}$. Now consider the case of scaling testing feature values in $X_{te}$. To scale any testing feature value $x_{teij}$, we consider the same $\min(X_{tj|})$ and $\max(X_{tj|})$ coming from $X_{tj|}$. This selection is governed by the column subscript of $x_{teij}$. With those, we define the scaling of any testing feature value $x_{teij}$ in (18), where $x_{teijs}$ is the scaled value of $x_{teij}$. When scaling testing feature values using (18), there is a chance of some scaled values lying outside the bounds specified by $l_{low}$ and $l_{up}$. That is a limitation of this scaling method, and to alleviate some of the adversity caused by outliers, for all $i, j$ where $x_{teijs} < l_{low}$ we assign $x_{teijs} \leftarrow l_{low}$ (19), and for all $i, j$ where $x_{teijs} > l_{up}$ we assign $x_{teijs} \leftarrow l_{up}$ (20). Following the assignments of (19) and (20), all training and testing feature values in $X_t$ and $X_{te}$ would have been scaled to map within the lower and upper bounds prescribed by $l_{low}$ and $l_{up}$ in (14) and (15). Rounding the scaled training feature ($x_{tijs}$) and testing feature ($x_{teijs}$) values to their nearest integers, converting the rounded numbers to binary, and representing the binary values in $n$ bits is how the Binary Spectrum vectors of (13) are formed. 
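A compact sketch of this transformation in Python is given below. The concrete bound values are not reproduced in the text above, so the sketch assumes the natural choice for an n-bit representation, l_low = 0 and l_up = 2^n - 1, together with standard per-column min-max scaling computed from the training data; the function and variable names are ours, not the paper's.

```python
import numpy as np

def binary_spectrum(X_train: np.ndarray, X_test: np.ndarray, n: int):
    """Append an n-bit 'Binary Spectrum' to each feature column.

    Assumptions (not fixed by the text above): l_low = 0, l_up = 2**n - 1,
    and per-column min-max scaling derived from the training data only.
    Returns (X_tb, X_teb) of shapes (a, b*(n+1)) and (c, b*(n+1)).
    """
    l_low, l_up = 0, 2 ** n - 1
    col_min = X_train.min(axis=0)
    col_max = X_train.max(axis=0)
    span = np.where(col_max > col_min, col_max - col_min, 1.0)  # guard constant columns

    def to_bits(X):
        scaled = (X - col_min) / span * (l_up - l_low) + l_low
        scaled = np.clip(scaled, l_low, l_up)   # clamp outliers, cf. (19)-(20)
        ints = np.rint(scaled).astype(int)
        # unpack each integer into n binary digits, most significant bit first
        bits = (ints[..., None] >> np.arange(n - 1, -1, -1)) & 1
        return bits.reshape(X.shape[0], -1)

    X_tb = np.hstack([X_train, to_bits(X_train)])
    X_teb = np.hstack([X_test, to_bits(X_test)])
    return X_tb, X_teb
```

The original b continuous columns are kept alongside the b*n binary columns, which is what gives the transformed matrices their a x b(n+1) and c x b(n+1) sizes.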
Substituting the Binary Spectrum vectors constructed in that manner into (10), (12) and (9), (11) would yield the Binary Spectrum matrices $X_{tb}$ and $X_{teb}$. With this data, we define the operations $o(X_{tb}) = X_{tb} v$ and $o(X_{teb}) = X_{teb} v$ with respect to $v \in \mathbb{R}^{b(n+1) \times b(n+1)}$; $v$ here is an orthonormal basis of $X_{tb}$. A new classifier $C_n$ of the form $C_n: \mathbb{R}^{d \times b(n+1)} \to \mathbb{B}^d$ can now be trained with the training dataset $o(X_{tb})$ and $Y_t$. This classifier would predict the classes for the testing data $X_{teb}$. The prediction output will come in a vector $\hat{Y}_{teb} \in \mathbb{B}^c$, obtained as $\hat{Y}_{teb} = C_n(o(X_{teb}))$. The vector errb can then be defined in order to compute the classification accuracy. Similar to the vector err in (6), locations of errb corresponding to instances where a correct prediction has been made carry zeros. Therefore, we define the total number of zeros in errb as $\mathrm{sumb}_0$. With that, we define the classification accuracy accb in (23). As such, accb can be represented as a function in (24), similar to the function representation of acc shown in (8). Now suppose a classifier $C_n$ for some $n \in \mathbb{Z}^+$ can be found such that the condition accb > acc is satisfied. Then, our objective of increasing the classifier accuracy will be accomplished without removing any features from, or adding new features to, the feature sets $X_t$ and $X_{te}$, i.e., by satisfying Assumption 1. As opposed to reducing or increasing the feature sets $X_t$ and $X_{te}$, what enables superior classification accuracy in the proposed method is the Binary Spectrum transformation of the existing features. Recall now the function representations of acc and accb given in (8) and (24), respectively. With those, the ultimate solution one can seek following this method can be expressed as the optimization problem (25), subject to the constraints accb > acc and $n < n_{max}$, where $n_{max} \in \mathbb{Z}^+$ is some meaningful maximum number of bits to be allowed. The selection of an ideal value for $n_{max}$ is an open question for the time being, and users of this method have freedom to experiment. An initial constraint some might hypothesize for $n_{max}$ may be chosen with the objective of keeping the Binary Spectrum training feature matrix $X_{tb}$ in (9) a tall and skinny matrix, assuming $a > b$, provided the training dataset in its original form (i.e., $X_t$ in (2)) has $a$ instances and $b$ features. Finding an optimal solution $n^*$ by solving (25) will result in an optimal Binary Spectrum transformation $f(X_t, n^*)$ and a classifier $C_{n^*}$, trained from $o(X_{tb})$ and $Y_t$, that performs better than the classifier $C$ trained from the sets $o(X_t)$ and $Y_t$. Thus, the objective of increasing classification accuracy will be accomplished via the Binary Spectrum transformation. As a preliminary effort, we propose Algorithm 1 to find $n^*$ and the corresponding $C_{n^*}$ iteratively. Algorithm 1: Find $n^*$ and $C_{n^*}$ iteratively. Result: $n^*$, $C_{n^*}$. Initialize $n^* \leftarrow 0$; $C_{n^*} \leftarrow C$, where $C$ is trained with $o(X_t)$, $Y_t$; $\hat{Y}_{te} \leftarrow C(o(X_{te}))$; calculate acc of $C$ from (7); $n \leftarrow 1$; choose $n_{max} \in \mathbb{Z}^+$, $n_{max} > 1$. While $n \le n_{max}$: $X_{tb} \leftarrow [X_t\ f(X_t, n)]$; train $C_n$ with $o(X_{tb})$, $Y_t$; $X_{teb} \leftarrow [X_{te}\ f(X_{te}, n)]$; $\hat{Y}_{teb} \leftarrow C_n(o(X_{teb}))$; calculate accb of $C_n$ from (23); if accb > acc then acc $\leftarrow$ accb, $n^* \leftarrow n$, $C_{n^*} \leftarrow C_n$; set $n \leftarrow n + 1$. IV. DEMONSTRATIVE EXAMPLE, EXPERIMENTS & RESULTS In this section, we demonstrate how the proposed Binary Spectrum feature (or transformation) improves classification performance. 
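Before turning to the experiments, the search in Algorithm 1 can be written down compactly. The sketch below pairs the binary_spectrum helper from the previous sketch with a scikit-learn SVM as the classifier; the RBF (Gaussian) kernel and feature standardization mirror the experimental setup described below, but the concrete estimator and its parameters are illustrative choices, and the orthonormal-basis operation o(.) is omitted for brevity.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def find_best_binary_spectrum(X_t, Y_t, X_te, Y_te, n_max=10):
    """Iterative search of Algorithm 1: keep the bit depth n that beats the benchmark."""
    def fit_and_score(Xtr, Xte):
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # Gaussian-kernel SVM
        clf.fit(Xtr, Y_t)
        return clf, clf.score(Xte, Y_te)        # fraction of correct predictions (acc)

    best_clf, best_acc = fit_and_score(X_t, X_te)   # benchmark with the raw features
    best_n = 0
    for n in range(1, n_max + 1):
        X_tb, X_teb = binary_spectrum(X_t, X_te, n)  # helper defined in the earlier sketch
        clf_n, acc_n = fit_and_score(X_tb, X_teb)
        if acc_n > best_acc:                          # accb > acc
            best_acc, best_n, best_clf = acc_n, n, clf_n
    return best_n, best_clf, best_acc
```

In a full reproduction of the experiments, the hyperparameters (Box Constraint and Kernel Scale) would additionally be optimized during training rather than left at scikit-learn defaults.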
We consider a Support Vector Machine (SVM) binary (or two-class) classifier working with two features (or descriptor variables), i.e., $X \in \mathbb{R}^{h \times 2}$, $Y \in \mathbb{B}^h$. A. The Dataset The dataset used for this work consists of 8,400 Pulsed Eddy Current (PEC) signal measurements captured on different wall thickness values of grey cast iron. The dataset has been collected through the works [8], [9], [10], [12], [13], [14]. The class labels (in Y) are decided based on the wall thickness (measured in mm). To respect Assumption 2 (i.e., to have approximately a 50:50 population split for the two classes in the training dataset), the cut-off thickness value was chosen to be 23.3 mm after examining the data. Thickness values less than or equal to 23.3 mm are considered as Class 1, having class label '0'. Class 1 has 4,169 instances, accounting for 49.63% of the total population. Thickness values greater than 23.3 mm are considered as Class 2, having class label '1'. Class 2 has 4,231 instances, accounting for 50.37% of the total population. The thickness histogram (in percentage frequency) of this total dataset is shown in Fig. 1. The input set (i.e., X) has two real-valued feature vectors. That means the domain of X becomes $X \in \mathbb{R}^{h \times 2}$, corresponding to the vector of labels $Y \in \mathbb{B}^h$. These features have been extracted from time domain PEC signals [10], [8], [9]. Shown in Fig. 2 is a scatter of the two features corresponding to all 8,400 measurements (or instances) in the total dataset. The two classes (i.e., Class 1 and Class 2) are colour coded in Fig. 2. B. Splitting Training and Testing Sets The authors assumed that only 30% of the total dataset would be available for training. To start with, the authors performed random nonstratified partitioning of the 8,400 measurements, to a 30:70 split, 100 times. This means the authors would have 100 subsets of 30% of the total dataset, and 100 corresponding subsets of 70% of the total dataset. This 100-fold splitting would provide the authors with 100 trials to assess classifier performance. The authors' intention was to test the 100 trials separately. That is, to learn 100 classifiers, and perform 100 corresponding validations. If the 100 classifiers and the accuracy of the 100 corresponding validations statistically exhibit some convergence, that would imply the success (or failure) of the work of this paper. As an example, Fig. 3 shows the thickness histogram (with percentage frequency) of the training set (i.e., the 30% subset) of the 100th trial (or partitioning). This dataset has 2,520 instances, and the histogram is comparable to the thickness histogram (with percentage frequency) of the total population (i.e., 100% of the data) shown in Fig. 1. Having comparable distributions as such is expected for model training / testing exercises, and it appeared that, with the availability of a total of 8,400 measurements (or instances), random nonstratified partitions of 30% would usually have distributions comparable with the total population. This observation was common among all the obtained partitions. Fig. 3. Training data (i.e., 2,520 instances), thickness histogram of the 100th trial, in percentage frequency. Fig. 4 is a scatter of the two features corresponding to the 2,520 measurements (or instances) in the training dataset that came from the 100th trial (or partitioning). The two classes (i.e., Class 1 and Class 2) are colour coded in Fig. 4. The distribution of instances in Fig. 4 is comparable to that of Fig. 2, indicating that the training dataset has a distribution that is comparable with the total population. 
2, indicating that the training dataset has a distribution that is comparable with the total population. This observation was common among all the obtained partitions. Further, this training dataset in Fig. 4 had 1,273 instances (i.e., 50.52%) in Class 1, and 1,247 instances (i.e., 49.48%) in Class 2, indicating that the training sample is in accordance with Assumption 2 (i.e., the training dataset having approximately a 50:50 split among the two classes). This compliance with Assumption 2 was also common among all the obtained partitions. In all the 100 trials, 100% of the data (i.e., the total population) was used as the testing dataset. Assessing classifier performance in that manner on a single large dataset makes the performance of each classifier statistically comparable. C. Benchmarking Classification Accuracy with Features' Original Form SVM is used for classification. The objective is to first benchmark the performance of an SVM classifier over the 100 trials (described in subsection IV-B) with the features in their original decimal form (i.e., prior to performing the Binary Spectrum transformation). The subsequent intention is to assess the performance of an SVM classifier trained with Binary Spectrum features over the same 100 trials. In this subsection, we present the former analysis, i.e., benchmarking the performance of an SVM classifier trained with the original form of the features over the 100 trials. Given the non-linear nature of the data, the Gaussian kernel was chosen for SVM classification. The commonly known hyper-parameters named the Box Constraint and the Kernel Scale were set to be optimized at training (done with the 30% splits explained in subsection IV-B). The o(X) features (or the predictor variables) were standardized before being fed to the classifier. On every trial, the initial value given to both of those parameters (i.e., Box Constraint and Kernel Scale) was 1. The optimized classifier resulting from every trial was then evaluated with the testing data (i.e., the 100%, or the total dataset as mentioned in subsection IV-B). The acc value (recall from (7)) for each of the 100 trials was recorded to serve as the metric for performance evaluation. Depicted by the broken black line in Fig. 5 are the acc values resulting from the 100 trials of training a classifier with the raw values of the features. D. Evaluating Classification Accuracy with Binary Spectrum Features The accb value (recall from (23)) of the best performing classifier (i.e., C n * identified from Algorithm 1) for each of the 100 trials (the same ones benchmarked in subsection IV-C) was recorded. These accb values would serve as the metric for performance evaluation of the Binary Spectrum feature. Depicted by the solid black line in Fig. 5 are the accb values recorded likewise from the corresponding 100 trials. For the preliminary work reported in this paper, n max (recall from Algorithm 1) was set to 10. Greater n max values can also be evaluated. The selection of the SVM kernel, hyperparameter initialization, and the training procedure (now considering the Binary Spectrum feature) were identical to those described in subsection IV-C. As evident from Fig. 5, it was possible to achieve superior-performing classifiers on every single trial by using the Binary Spectrum feature. On average, an improvement of 1.46% in classification accuracy (calculated as the average of (accb − acc)/acc × 100%) was observed across the 100 trials.
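The per-trial benchmarking of subsections IV-C and IV-D can be reproduced along the following lines. The paper tunes MATLAB-style hyper-parameters (Box Constraint and Kernel Scale); in scikit-learn these correspond roughly to C and gamma of an RBF SVC, and the search grid below is an assumption rather than the authors' optimiser. Both acc of (7) and accb of (23) are obtained by scoring on the full dataset.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

def benchmark_trial(X_train, Y_train, X_full, Y_full):
    """Train a Gaussian-kernel SVM on one 30% split, tune (C, gamma), and
    score it on the full dataset, mirroring the acc metric of (7)."""
    pipe = Pipeline([("scale", StandardScaler()), ("svc", SVC(kernel="rbf"))])
    grid = {"svc__C": np.logspace(-1, 2, 8),        # ~ Box Constraint
            "svc__gamma": np.logspace(-2, 1, 8)}    # related to 1/KernelScale^2
    search = GridSearchCV(pipe, grid, cv=5).fit(X_train, Y_train)
    return search.score(X_full, Y_full)

def relative_improvement(accb, acc):
    """Per-trial improvement (accb - acc)/acc x 100%, as averaged in Sec. IV-D."""
    return (accb - acc) / acc * 100.0
```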
The maximum improvement of classification accuracy was 3.49%, in the 59 th trial. The minimum observed improvement of classification accuracy was 0.1%, in the 9 th trial. Depicted in Fig. 6 is the variation of the accb of all 100 trials across the 10 bits (i.e., n max = 10 for the work reported in this paper). There are 100 line graphs in Fig. 6 spanning bit counts 1 through 10. Each line graph corresponds to a trial and illustrates the variation of that trial's accb. It can be observed that there is a clear downward trend in accuracy past roughly the 5-7 bit mark. Whether this downward trend persists for greater numbers of bits was not investigated at this stage. The finding we intend to report is that it is possible to increase the classification accuracy of a given supervised-learning classifier with a given fixed set of continuous-valued features by using the proposed Binary Spectrum feature (or transformation). V. CONCLUSIONS The case of improving classification accuracy for a given supervised learning classification task that has to be performed with a given reduced set of continuous-valued features was considered. The case imposes that further reduction of features or addition of new features is not possible; all the provided features have to be used. It was shown via a demonstrative binary classification (i.e., two-class classification) example that it is possible to increase classification accuracy within the considered premise via a feature transformation that produces a novel feature set the authors have named the "Binary Spectrum feature". An increase of classification accuracy by about 1.46% (≈ 1.5%) was observed for the considered example following the Binary Spectrum transformation. The derivation of the Binary Spectrum feature was presented in detail along with a preliminary algorithm to identify best performing classifiers. The findings indicate potential for broader usage of the Binary Spectrum feature and may provoke interest for further investigation. Limitations of this study include the following: (1) Only a binary classification task was examined (i.e., multi-class classification was not examined); (2) The descriptor variables were imposed to be continuous-valued (i.e., the more general case of both continuous-valued and categorical descriptor variables being present was not considered); (3) Class populations were imposed to be even (i.e., the case of uneven class populations was not examined); (4) The feature set of the case study included only two feature vectors (i.e., a higher dimensional example was not examined). As such, future work can investigate relaxing some of the assumptions imposed on this work. Performance evaluation of the Binary Spectrum transformation on more sophisticated classification tasks that involve multiple classes and higher dimensional data also remains to be carried out.
5,803
2020-10-01T00:00:00.000
[ "Computer Science" ]
Wilson loops in antisymmetric representations from localization in supersymmetric gauge theories Large-N phase transitions occurring in massive N=2 theories can be probed by Wilson loops in large antisymmetric representations. The logarithm of the Wilson loop is effectively described by the free energy of a Fermi distribution and exhibits second-order phase transitions (discontinuities in the second derivatives) as the size of representation varies. We illustrate the general features of antisymmetric Wilson loops on a number of examples where the phase transitions are known to occur: N=2 SQCD with various mass arrangements and N=2* theory. As a byproduct we solve planar N=2 SQCD with three independent mass parameters. This model has two effective mass scales and undergoes two phase transitions. Introduction Localization is a powerful tool to explore supersymmetric gauge theories in the non-perturbative domain, and in particular in the large-N limit. Exact results obtained in the latter case bear direct links to the holographic duality at strong coupling. The partition function and select observables of any N = 2 gauge theory on S 4 localize to an effective matrix model [1], that can be studied at large-N by the standard methods of random matrix theory [2]. A somewhat unexpected outcome of this analysis is appearance of large-N phase transitions in a variety of massive N = 2 gauge theories [3,4]. Their holographic description remains an interesting open problem. The difficulty is related to the strong-weak coupling nature of the holographic duality: transitions occur upon varying a coupling constant, and this is difficult to achieve in holography where an infinite-coupling regime is normally considered. It is desirable, in this respect, to devise observables that undergo phase transitions at fixed coupling while some other auxiliary parameter is being varied. Particularly promising probes of this kind are Wilson loops in large antisymmetric representations of the gauge group, which were shown to undergo phase transitions once the size of representation is dialed to a critical value [5]. Methods to calculate expectation values of Wilson loops in large representations, both holographically and from localization, have been devised in the context of the N = 4 super-Yang-Mills (SYM) theory [6,7,8], which localizes to the Gaussian matrix model [9,1]. These methods have been transplanted to N = 2 theories, both conformal [10] and massive [5,11], and in massive case antisymmetric Wilson loops were shown to undergo phase transitions in the rank of the gauge-group representation [5]. Here we discuss general features of phase transitions in antisymmetric Wilson loops and then apply the general framework to a variety of theories where the transitions are known to occur, namely to N = 2 super-QCD for different combinations of mass parameters and to N = 2 * SYM, extending in the latter case the results of [5,11]. Anti-symmetric Wilson loops By localization, the expectation value of the circular Wilson loop in any N = 2 theory on S 4 can be expressed as a matrix model correlator: where L = 2πR S 4 is the circumference of the circle, which we consider to be large compared to any other scale in the problem. In the decompactification limit R S 4 → ∞ the circular loop should obey the same universal scaling law as any sufficiently large contour. We thus expect that (2.1) describes the universal behavior of large Wilson loops of arbitrary shape in N = 2 gauge theories. 
We concentrate on the Wilson loops in antisymmetric representations. The generating function for characters in the rank-k anti-symmetric representation A k is given by An expectation value in any particular representation can be computed as In the 't Hooft limit, with the saddle point of the matrix model is not affected by the Wilson loop insertion, and is characterized by the eigenvalue density obtained by solving the localization matrix model in the large-N limit. In the same scaling limit, the saddle-point approximation for the integral over ν in (2.3) becomes exact and the Wilson loop expectation value takes on an exponential, perimeter-law form: The function F (f ) is determined by the saddle-point equation, which can written as follows. Define the function Then F (f ) = F(f, ν(f )), where ν(f ) is the value of ν that maximizes F(f, ν) for a given f : Once the eigenvalue distribution is known, the expectation value of the antisymmetric Wilson loop can be computed from the above two equations [7]. The distribution of eigenvalues in massive N = 2 theories is confined to an interval (−µ, µ), where µ is the characteristic mass scale of the underlying gauge theory, which we assume to be such that µL ≫ 1. This corresponds to the low-temperature regime of the effective statistical model, and the Fermi distribution in the above formulas can be replaced by the step function: (2.9) It follows from these equations that , (2.10) where ν ≡ ν(f ). While the first of these equations is exact, the second one is only valid in the limit µL → ∞. Phase transitions The eigenvalue distribution in massive theories may develop specific singularities in a certain range of parameters, which lead to phase transitions as the parameters change. The fundamental Wilson loops are not very good probes of the phase transitions, because the influence of any given singularity in the eigenvalue density is washed out by averaging over the whole eigenvalue distribution. On the contrary, the singularities are very pronounced in large antisymmetric Wilson loops due to the sharp form of the Fermi distribution at zero temperature. Large-N solutions of the localization matrix model are known for a number of D = 4 supersymmetric gauge theories [4]. It was observed that in the decompactification limit, when the size of the four-sphere becomes infinite, R S 4 → ∞, the density may develop singularities. The features encountered so far are come in two types, the delta functions and the one-sided cusps. The first type of singularity arises in theories with fundamental matter, such as super-QCD, while the second type is characteristic for adjoint matter and arises, for instance, in the N = 2 * theory. It is these singularities which are responsible for the phase transitions. We consider the two cases in turn, first at a general, model-independent level and then on concrete examples. Delta-function singularity Suppose that the density has a delta-functional peak in the middle of the eigenvalue distribution: where p is the fraction of eigenvalues concentrated in the peak and ρ 0 is a constant. It is not hard to see that this structure translates into two singularities in the Wilson loop expectation value, at f = f c+ and f = f c− , where The function ν(f ) stays flat, ν(f ) = x c , for f c+ < f < f c− . 
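The plateau just described can be made concrete with a short numerical check. The toy density below is flat apart from a delta-function peak of weight p at x_c; inverting the zero-temperature relation f = ∫_ν^µ ρ(x) dx reproduces ν(f) = x_c for f_{c+} < f < f_{c-}. The density profile is purely illustrative and is not taken from any of the models discussed below.

```python
import numpy as np

mu, x_c, p = 1.0, 0.3, 0.25              # toy support, peak position, peak weight
rho0 = (1.0 - p) / (2.0 * mu)            # flat background, normalised with the peak

def f_of_nu(nu):
    """f = int_nu^mu rho(x) dx for rho(x) = rho0 + p * delta(x - x_c)."""
    return rho0 * (mu - nu) + (p if nu < x_c else 0.0)

nus = np.linspace(-mu, mu, 20001)
fs = np.array([f_of_nu(nu) for nu in nus])

def nu_of_f(f):
    """Largest nu with f_of_nu(nu) >= f (fs decreases along increasing nu)."""
    idx = np.where(fs >= f)[0]
    return nus[idx[-1]] if idx.size else mu

f_cp, f_cm = rho0 * (mu - x_c), rho0 * (mu - x_c) + p    # f_{c+} < f_{c-}
for f in np.linspace(f_cp - 0.05, f_cm + 0.05, 7):
    print(f"f = {f:.3f}   nu(f) = {nu_of_f(f):.3f}")     # flat at x_c inside the window
```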
From (2.10) we see that the free energy F is continuous across the transitions together with its first derivative, while the second derivative experiences a finite jump: Cusp Strictly speaking, there are two types of cusps, the left cusp and the right cusp. On one side of the cusp the density approaches a finite value, while on the other side it has an inverse square root singularity: (2.14) As a result, the antisymmetric Wilson loop develops a singularity at f = f c± , where The free energy stays continuous together with its first derivative across the critical point, while the second derivative experiences a finite jump, given by the same formula (2.13) as for the delta-function singularity. These are the two types of singularities encountered in the large-N solutions of the localization matrix models in N = 2 theories in four dimensions. The delta-function singularities arise in theories fundamental matter, while cusps are characteristic of the adjoint matter. Below we consider a few examples of each type. Pure N = 2 SYM theory We begin with a case where there is no large N phase transition, N = 2 SU (N ) SYM theory without matter. The large-N solution of this theory was first obtained from the Seiberg-Witten theory [12]. The localization matrix model for pure N = 2 SYM was studied in [13,14]. In the decompactification limit R → ∞, one finds [12,13]: where R is the radius of S 4 and Λ is the dynamically generated scale. 1 The saddle-point equation (2.9) gives The Wilson loop (2.6) is given by It is a smooth function of f , as expected, since the theory does not have any phase transitions. N = 2 Super QCD As a first example where phase transitions occur, we consider the case of N = 2 SU (N ) super Yang-Mills theory with N f pairs of fundamental and anti-fundamental hypermultiplets of masses (M, −M ), N f < N . 2 This model was studied in [4,14]. We consider the N → ∞ limit with fixed Veneziano parameter ζ ≡ N f N and fixed (renormalized) 't Hooft coupling λ = g 2 YM N , which in turn is traded by the dynamically generated scale Λ, In terms of the original, renormalized 't Hooft coupling, ΛR = exp[−4π 2 λ]. 2 The theory with N f = N with massless hypermultiplets corresponds to N = 2 superconformal SQCD. In this case there are no phase transitions [4]. In the decompactification limit, the theory undergoes a phase transition at Λ = M 2. The different features of the model were also reproduced in [15] by taking the large N limit on the the Seiberg-Witten curve. Another interesting limit is the decompactification limit at finite N . The partition function can then be computed by incorporating instantons through the Seiberg-Witten curve [16]. It is possible to generalize the theory by considering fundamental hypermultiplets of different masses, M 1 , ..., M N f . The eigenvalue density in the large N limit can still be determined in terms of analytic formulas. The resulting theory describes multiple phase transitions occurring whenever the largest eigenvalue µ (or the lowest eigenvalue −µ) crosses a new mass scale. An explicit example with three mass scales, Λ, m, M is given in the appendix A. The present theory depends on two parameters ΛR and M R. In the large N limit, the eigenvalue density is determined by a saddle-point equation. This simplifies in the decompactification limit R → ∞. 
By differentiating the saddle-point equation once, one obtains, for R → ∞, Differentiating once more, we find the equation: This (singular) integral equation has two different solutions, which depends on whether the points x = ±M lie, or do not lie, within the eigenvalue distribution. The two solutions describe the two phases of the theory: • Phase I. Strong coupling phase, µ > M , with • Phase II. Weak coupling phase, µ < M , with The non-analytic behavior in the Wilson loop ln W A k appears when ν(f ) crosses a singular point in the eigenvalue density. Since in phase II the eigenvalue density is smooth, in the weak coupling phase the ln W A k will be smooth. We thus focus on the more interesting strong coupling phase, where we have a delta-function singularity of the type described in section 2.1.1. In this phase, µ = 2Λ, so the phase transition takes place at Λ c = M 2 (see appendix A). The saddle-point equation (2.9) now leads to The solution has several critical points f 1 , f 2 , f 3 , f 4 , where The critical points ( Let us now compute the Wilson loop. We have ln W A k = N RF (f ), with It exhibits discontinuities in the second derivative with respect to f at the critical points. This is readily seen from (2.10), since the first derivative of ln W A k is proportional to ν(f ) which, as can be seen from fig. 1, has discontinuities in its first derivatives. More directly, from (2.13), we have In what follows we concentrate on the strong-coupling regime of large λ. The width of the eigenvalue distribution 2µ grows with λ, and the number of cusps accordingly multiplies. At strong coupling the density gets a rather complicated short-scale structure with a large number of cusps, while the average density is very simple and is described by the Wigner distribution, as shown in fig. 3(a). To the first two orders of the strong-coupling expansion [17], The width of the eigenvalue distribution, to the same accuracy, is given by but it is more convenient to regard M and µ as independent variables (rather than M and λ). Taking into account that the zeta function has the square-root branch point at zero: ζ(1 2, z) ≃ 1 √ z, we find that the singular points (5.2) are left or right cusps: and C (k) , so the singularities are parametrically weak. The oscillating structure in the density (5.3) is a small correction on top of the regular Wigner distribution, and in addition the irregular part of the eigenvalue density integrates to zero upon averaging over sufficiently large interval, up to O(M 3 2 µ 3 2 ) which is beyond the accuracy of (5.3). Therefore, to the leading order, the expectation value of the anti-symmetric Wilson loop is given by the formulas for the Gaussian matrix model [7]: where cos θ = ν µ, with ν from (2.9) related to the representation variable f by Taking into account the next order introduces the oscillating peak structure in the eigenvalue density and leads to the phase transitions in the Wilson loop expectation value. According to (5.2), the phase transitions happen at f = f Although n ≫ ∆, it is important to keep ∆ in this equation to break the degeneracy under θ → π − θ and to get right the positions of the critical points. The second derivative of the free energy experiences a jump (2.13) at the k-th critical point of an amplitude . (5.12) Since d 2 F df 2 at each critical point jumps to zero, its discontinuity is of the same order O(µ) as its average value. This leads to the structure displayed in fig. 3(b). The above calculations apply to f = O (1). 
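The leading-order statement above can be checked directly: for the Wigner density ρ(x) = 2√(µ² − x²)/(πµ²), the zero-temperature filling relation (2.9) integrates to f = (θ − sin θ cos θ)/π with cos θ = ν/µ. The few lines below verify this numerically; the closed form is the standard semicircle result and is offered as a cross-check rather than as a quotation of the elided equation.

```python
import numpy as np
from scipy.integrate import quad

mu = 1.0
rho = lambda x: 2.0 * np.sqrt(mu**2 - x**2) / (np.pi * mu**2)   # Wigner semicircle

for theta in np.linspace(0.3, 2.8, 6):
    nu = mu * np.cos(theta)
    f_num, _ = quad(rho, nu, mu)                 # f = int_nu^mu rho(x) dx, cf. (2.9)
    f_closed = (theta - np.sin(theta) * np.cos(theta)) / np.pi
    print(f"theta = {theta:.2f}   f_num = {f_num:.6f}   f_closed = {f_closed:.6f}")
```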
For parametrically smaller f = O(λ −3 4 ) the endpoint region of the eigenvalue distribution, where (5.3) is no longer accurate, becomes more important. The phase transitions for f = O(λ −3 4 ) were analyzed in detail in [5]. In both cases λ is assumed to be large, and these results are potentially relevant for the holographic description [18,19] of the N = 2 * theory. Conclusions Wilson loops in large antisymmetric representations undergo phase transitions which mirror quantum phase transitions in the underlying gauge theories. Both are caused by features in the eigenvalue density, delta functions or cusps, known to occur in localization matrix models for N = 2 gauge theories. In four dimensions only these singularities can arise. It would be interesting to extend the analysis to localization matrix models in other dimensions where different types of singularities may occur. Holographically, anti-symmetric Wilson loops of rank k ∼ N correspond to probe D5-branes in the dual geometry [7]. The classical solutions for D5-branes are known only for AdS 5 × S 5 [7]. It would be extremely interesting to construct D5-brane configurations for the backgrounds that are dual to massive models with phase transitions, for instance in the Pilch-Warner background [18] dual to N = 2 * SYM. So far only D3-brane solutions, which correspond to Wilson loops in large symmetric representations, have been constructed for this background [20]. The latter are affected by singularities in the eigenvalue density to much lesser degree compared to antisymmetric representations and are not very sensitive to the phase transitions. The partition function computed by localization is given by where R is the radius of S 4 and Λ is the dynamically generated scale with Veneziano parameters We shall consider ζ M + ζ m < 1, in which case the theory is asymptotically free. The partition function is thus expressed in terms of a finite dimensional integral, which is still difficult to compute. Here, as section 4 (see also [4,14]), we shall compute this integral in the planar, large N limit at fixed λ, ζ m , ζ M , by the saddle-point method. Let us first summarize the physical picture of the m = M case considered in section 4. The theory has two phases, weak coupling 2Λ < M and strong coupling 2Λ > M , separated by a third-order phase transition at 2Λ c = M . In this new theory with three mass scales Λ, M, m, the question we would like to address is whether there are more phase transitions and at which value of Λ they occur. A naive guess would be that a new phase transition should occur at Λ ∼ m. However, when Λ ≪ M , the hypermultiplets of mass M can be integrated out and the new dynamical scale of the theory is not Λ: it is replaced by a new effective scale that we shall compute. For example, in the theory of [4,14] with a single mass scale, at Λ ≪ M the theory flows to pure N = 2 SYM with Λ eff = Λ 1−ζ M M ζ M . In the new theory with two mass scales M, m, the next phase transition is expected not when Λ = O(m), but, perhaps, when the new dynamical scale of the theory (left behind after the hypermultiplets of mass M have been integrated out) is of order m. The present exact calculation will determine which are the effective scales in each phase, and at which precise coupling Λ the different phase transitions occur. 
In the large N limit, the saddle-point equation becomes the following integral equation: , ψ represents as usual the logarithmic derivative of the Γ-function and γ is the Euler constant, γ = −ψ(1) (see [4] for the relation of K to the Barnes G-function and other properties). Here we have set R = 1 for the sake of clarity. The model depends on three parameters ΛR, M R and mR. In the decompactification limit, these parameters go to infinity and the saddle-point equations simplify. In this limit, the eigenvalue distribution extends to large eigenvalues and one can use the approximation K = x ln x 2 + 2γx + O(x −1 ). The first derivative of the saddle-point equations is given by Differentiating once more we obtain a singular integral equation that can be easily solved: In what follows we describe the analytic solution in each phase. Strong-coupling phase (µ > M > m) When µ > M > m, the eigenvalue density that solves (A.6) is given by The parameter µ is determined in terms of M, m and Λ by substituting the solution into (A.5). This gives µ = 2Λ. (A.8) Note that in this phase the endpoint of the eigenvalue distribution is therefore independent of the hypermultiplet masses. The solution (A.7) holds as long as M < µ. As Λ is gradually decreased, there is a critical point Λ c1 where µ = M , which thus occurs at For Λ < Λ c1 , the delta-functions δ(x ± M ) lie outside the interval [−µ, µ] and the density (A.7) does not solve (A.6) anymore. This happens just like in the m = M case discussed in section 4. Intermediate-coupling phase (M > µ > m) In the regime M > µ > m, the solution is given by Substituting into (A.5) we find the following transcendental equation for µ: In the particular case ζ m = 0, corresponding to n f = 0 and giving ζ M = ζ, the resulting equation was studied in [4,14]. Like in this case, the solution can be expressed in a parametric form: Here we can see the emergence of the first relevant effective scale. When M ≫ Λ, the hypermultiplets can be integrated out. From the above equations we obtain The theory left behind should be SQCD with n f flavors and dynamical QCD scale Λ eff1 ∼ µ. Weak-coupling phase (M > m > µ) As Λ is further decreased, there is a critical point Λ c2 where µ = m. For lower values of Λ, the delta-functions δ(x± m) move outside the eigenvalue region [−µ, µ] and the density (A.10) does no longer solve (A.6). In this regime M > m > µ we find the solution . (A.18) Substituting into the integrated form of the saddle-point equation (A.5), we now find the following transcendental equation for µ: Following [4,14], it is easy to show that the two first transitions occurring at Λ c1 , Λ c2 are third order. To show this, one computes the different derivatives of the free energy at each phase using (A.26) [Here F = − ln Z is the standard free energy of the theory -it should not be confused with the F (f ) appearing in (2.6), which is related to the free energy of the effective Fermi distribution]. Likewise, one can show that the Wilson loop in the fundamental representation is discontinuous in its first derivatives: recall that ln W f ∼ 2πµ, therefore ln W f inherits the first-derivative discontinuities of µ at Λ c1 , Λ c2 . Summarizing, starting from strong coupling Λ ≫ M, m, as Λ is gradually decreased, the theory undergoes a first phase transition when Λ = M 2. As Λ is further reduced, one finds that nothing happens when Λ = O(m); the theory has a smooth behavior with the coupling Λ at this scale. 
The new phase transition to the weak-coupling phase occurs at a lower scale Λ c2 (which becomes much lower if m ≪ M ). Once the hypermultiplets of mass M have been integrated out, the theory left behind has a new dynamical scale Λ eff1 . As one may expect, if a new phase transition occurs, this one will take place when Λ eff1 is of order m and this is indeed what happens. It is reassuring that all field theory expectations are realized in a precise way through the exact large N solution of the system. B Useful integrals The formulas for the integrals used in section 2 can be derived by residue integration. One chooses a contour surrounding the cut from (−µ, µ) and evaluates the residue of the poles outside the cut (including possible poles at infinity) by the change of variable z = 1 x. One finds (B.6)
5,273.8
2017-12-19T00:00:00.000
[ "Physics" ]
Simulation of action potential propagation based on the ghost structure method In this paper, a ghost structure (GS) method is proposed to simulate the monodomain model in irregular computational domains using finite difference without regenerating body-fitted grids. In order to verify the validity of the GS method, it is first used to solve the Fitzhugh-Nagumo monodomain model in rectangular and circular regions at different states (the stationary and moving states). Then, the GS method is used to simulate the propagation of the action potential (AP) in transverse and longitudinal sections of a healthy human heart, and with left bundle branch block (LBBB). Finally, we analyze the AP and calcium concentration under healthy and LBBB conditions. Our numerical results show that the GS method can accurately simulate AP propagation with different computational domains either stationary or moving, and we also find that LBBB will cause the left ventricle to contract later than the right ventricle, which in turn affects synchronized contraction of the two ventricles. The heart is a rhythmic pump that maintains blood circulation throughout the body 1 . The rhythmic beating of the heart is caused by the regular spread of action potential (AP) within the heart. Abnormal conduction of AP in the heart can cause arrhythmias. Symptoms of arrhythmia include extrasystole, tachycardia, ventricular fibrillation, etc., of which ventricular fibrillation is the leading cause of cardiac sudden death 2,3 . Sudden cardiac death accounts for 15% of global deaths, and about 80% of sudden cardiac death is the result of ventricular arrhythmias 4 . Furthermore, about a quarter of patients with heart failure are diagnosed with LBBB 5 , which causes asynchronous AP propagation and contraction of the left ventricle, and then potentially leads to the global left ventricle dysfunction 6 . Therefore, it is of great significance to study the mechanism of arrhythmia, such as through numerical modelling, which can explore extreme situations that is difficult to perform in experiments. For several decades, the electrical activity of the heart has been modeled by a system of singularly perturbed reaction-diffusion partial differential equations that couples a set of ordinary differential equations used to describe the cell membrane dynamics 7,8 . The effects of different types of the electrical stimulation on arrhythmia can then be studied by solving these differential equations numerically. At present, numerical simulation of electrical activity has become a powerful tool for studying and understanding cardiac electrophysiology and arrhythmia [9][10][11] . To mathematically model cardiac action potential, a cardiomyocyte model is required. With the abundance of experimental data, myocyte models have been continuously improved. A large number of mammalian cardiomyocyte models already exist in the literature 12 , such as Beeler-Reuter model 13 , Luo-Rudy model 14 , Fenton-Karma model 15 , etc. In order to accurately study the human heart, a large number of human cardiomyocyte models have also been proposed, such as ten Tusscher model 16 , Grandi-Pasqualini-Bers (GPB) model 17 , etc. For example, the GPB model can be used to describe Ca 2+ handling and ionic currents in human ventricular myocytes, and its effectiveness has been validated against available experimental data 7,17,18 . In 1969, Schmitt et al. 8 proposed a bidomain model for AP propagation in tissue level, then was further developed in the late 1970s [19][20][21] . 
The bidomain model describes active cardiomyocytes on a macroscopic scale by membrane ion current, membrane potential and extracellular potential 22 . Based on a given membrane potential, the bidomain model can modelling both the extracellular potential and the body-surface potential 23 . Recently, Bendahmane et al. 24 introduced a "stochastically forced" version of the bidomain model that accounted for various random effects, and further a two-dimensional FitzHugh-Nagumo model in a rectangular and circular regions 40,53 . The FitzHugh-Nagumo model is where, u is a normalized transmembrane potential, v is a recovery variable. K x and K x are the components of diffusion coefficient K. The model parameters a = 0.1, ε = 0.01, β = 0.5, γ = 1, σ = 0. In this Fitzhugh-Nagumo model, when K x = K y = 10 −4 , Fig. 1 gives the spiral wave of the stable rotation solution at t = 1000. As can be seen from Fig. 1, for the rectangular region, the spiral wave of the model generates a clockwise rotation curve. Figures 1(a,b) give the spiral waves obtained by the GS method and Liu 40,53 , respectively. It can be found that the spiral wave structure obtained by the GS method is consistent with the results obtained by Liu 40 . Figure 2(a,b) show the numerical results obtained by the GS method and Liu et al. 40,53 with K x = 10 −4 , K y /K x = 0.25, and Fig. 2(c,d) are the results with K y = 10 −4 , K x /K y = 0.25. It can be found that for K x = 10 −4 , K y /K x = 0.25 and K y = 10 −4 , K x /K y = 0.25, the spiral wave structures obtained by the GS method are nearly identical as those obtained by Liu et al. 40,53 . Secondly, using the same model parameters and boundary condition, the computational domain is changed to be a circle Ω = {(x, y)|(x − 1.25) 2 + (y − 1.25) 2 ≤ 1.25 2 } with the following initial conditions: The computational domain of the ghost structure remains the same as the rectangular case. Figure 3(a,b) show the numerical results obtained by the GS method and Liu 41 with K x = K y = 10 −4 at t = 1000. It can be seen from Fig. 3 that the results obtained by the GS method in the circular region are also identical to those obtained by Liu 41 . Finally, we simulate the transmembrane potential propagation in moving regions by using the same model parameters and boundary condition. To do this, we assume the Lagrangian point in the rectangular and circular region mentioned above expands in the normal direction → = AP propagation on ventricular section. The electrical conduction system of the heart triggers myocardial contraction by electrical impulses transmitted through the sinus node. As shown in Fig. 6, electrical pulses pass through the atrium to the atrioventricular node and enter the ventricle along the left bundle branch, right bundle branch and Purkinje fiber. When an electrical pulse is transmitted to the cardiomyocytes and triggers the AP production, an excitation-contraction coupling occurs in cardiomyocytes. Myocardial contraction is highly dependent on the dynamics of calcium in a single myocardial cell 54 , which is becasue myofilament contraction is regulated by an increase of the intracellular calcium transient (CaT). Therefore, the excitation-contraction coupling of myocytes essentially depends on the calcium-induced calcium release 55 . In this section, we employ the GPB model to model the myocyte electrophysiology, which describes various ionic current I ion and Ca 2+ dynamics. 
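Since the FitzHugh-Nagumo equations themselves do not survive this extraction, the sketch below integrates one common form of the model, ∂u/∂t = ∇·(K∇u) + u(1 − u)(u − a) − v and ∂v/∂t = ε(βu − γv − σ), with the quoted parameters a = 0.1, ε = 0.01, β = 0.5, γ = 1, σ = 0 and K_x = K_y = 10⁻⁴ on a square domain whose size (2.5 × 2.5) is inferred from the circular case. Both the reaction form and the spiral-seeding initial data are assumptions to be checked against the authors' setup and against Liu's reference results; the sketch only illustrates the regular-grid finite-difference update that the GS method embeds an irregular domain into, while the ventricular simulations discussed next use the same machinery with the far more detailed GPB ionic model.

```python
import numpy as np

a, eps, beta, gamma, sigma = 0.1, 0.01, 0.5, 1.0, 0.0
Kx = Ky = 1.0e-4                      # isotropic case K_x = K_y of Fig. 1
L, N, dt, T = 2.5, 256, 0.05, 1000.0
h = L / N

u = np.zeros((N, N))
v = np.zeros((N, N))
u[:, : N // 2] = 1.0                  # crude cross-field initial data (assumed)
v[: N // 2, :] = 0.5                  # to seed a rotating spiral

def laplacian(w):
    """5-point Laplacian with zero-flux (Neumann) boundaries via edge padding."""
    wp = np.pad(w, 1, mode="edge")
    return (wp[2:, 1:-1] + wp[:-2, 1:-1] + wp[1:-1, 2:] + wp[1:-1, :-2] - 4.0 * w) / h**2

for step in range(int(T / dt)):       # forward Euler; diffusion number ~ 0.05, stable
    du = Kx * laplacian(u) + u * (1.0 - u) * (u - a) - v
    dv = eps * (beta * u - gamma * v - sigma)
    u += dt * du
    v += dt * dv
```

For the anisotropic runs (K_y/K_x = 0.25) the single Laplacian would be split into separate second differences weighted by K_x and K_y.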
The reasons for choosing the GPB model are that (1) it matches experimental data well 17 ; (2) it is adequate to analyse AP with detailed Ca 2+ dynamics, which plays a crucial role in excitation-contraction in myocardium 54 . In order to understand the AP propagation in ventricles, we select one transverse and one longitudinal sections obtained from a real human heart 51,56,57 . The computational domain for the ghost structure of the transverse and longitudinal sections of the ventricle are 130 mm × 110 mm and 140 mm × 140 mm, respectively, and the spatial step size in each direction is 0.14 mm. The discrete time step is 0.08 ms, the membrane capacitance C m = 1 μF/cm 2 and the surface-to-volume ratio 58 Am = 0.24 μm −1 . The human ventricular conductivity is listed in Table 1. σ i L and σ e L are the longitudinal intra-and extracellular conductivities, respectively. σ i t and σ e t are the transversal intra-and extracellular conductivities, respectively. As shown in Table 1, the ventricular conductivity depends on the region and direction of cardiomyocyte. In this study, we employ the data obtained by Potse et al. 28 in Table 1. The right and left bundle branches and Purkinje are the main conduction system in the ventricles. The right and left boundle branches divide into a few major branches and subsequently into Purkinje fibers 59 as shown in Fig. 6. Purkinje fibers penetrate into the ventricular muscle, entangled in endocardium, and form a network. The network of Purkinje fibers do not contribute to the activation of ventricular muscles until it reaches the middle and lower third of the septum and ventricle 45 . Purkinje fibers allow for rapid, coordinated, and synchronous physiologic depolarization of the ventricles. Therefore, we consider the initial electrical stimulation to be located in the middle and lower third segments of the septum and endocardium 60 , as indicated by the blue region in Fig. 6(a). For a healthy heart in Fig. 6(a), the blue region of endocardial surface will have the electrical stimulus transmitted from the Purkinje network. The LBBB refers to the blockage of the left bundle branch conduction. In this study, we consider complete left bundle branch block. Thus, no electrical stimulus is transmitted in the left branch, but through the interventricular septum from the right ventricular endocardium to the left ventricle endocardium 45,61 . As shown in the Fig. 6(b), only the blue region in the right ventricle receives the electrical stimulus from the Purkinje fibers. To understand the effects of LBBB, we now compare AP propagations in a healthy heart with and without LBBB. Figure 7 shows AP propagation in the transverse section of heart at different times. Figure 7(a-d) illustrate the state of AP propagation under healthy condition. Figure 7(e-h) shows AP propagation across the transverse section of heart with LBBB. Figure 8 shows AP propagation on the longitudinal section of heart. In the transverse section of heart, the healthy heart completes the AP propagation within approximately 347.2 ms, which is faster www.nature.com/scientificreports www.nature.com/scientificreports/ than the propagation with LBBB (540 ms). As shown in Fig. 7, under healthy conditions, when t = 40 ms, the AP spreads to most of the right ventricle and half of the left ventricular region. However, in the LBBB case, only half of the right ventricular region is excited, and the AP has not yet arrived at the left ventricular. 
When t = 120 ms, the whole healthy heart is repolarized, but still half of the left ventricle is not excited in the LBBB case. In the longitudinal section, the heart with LBBB needs 1011.2 ms to complete the AP propagation, which is 2.4 times the time (416 ms) that the healthy heart completes the propagation. As shown in Fig. 8, at t = 120 ms, almost the whole healthy heart is stimulated, but for the heart with LBBB, only the right ventricle is stimulated. When t = 360 ms, the AP only spreads to the apex of the heart with LBBB, while in the healthy heart, almost all regions return to the resting potential. Therefore, the LBBB causes significant delay in the activation of the left ventricle. To further explain how LBBB affect ventricular contraction, we select four points on the transverse and longitudinal sections, as shown in Fig. 6. Transmembrane potential and calcium ion concentration at each point are then analyzed. As shown in Fig. 9, the points in the healthy case are stimulated almost simultaneously, but not in www.nature.com/scientificreports www.nature.com/scientificreports/ the heart with LBBB, much delayed at points 4, 7 and 8. Comparing Fig. 9(a) with Fig. 9(b), the activation time of the point in the interventricular septum (such as point 2) in the LBBB case is similar to that in the healthy case. However, the activation time is significantly affected by LBBB at points away from the right ventricle. The farther away from the right ventricle, the later the activation. Figure 10 shows the calcium ion concentration at selected points. Since myofilament contraction is regulated by intracellular calcium transient, it can be seen from Fig. 10(a,c) that all points in the healthy cases can contract at the same time. While in the heart with LBBB ( Fig. 10(a-d)), the points in the interventricular septum (e.g. points 2, 3 and 6) are essentially unaffected and will contract at nearly same time, but points 4 and 7 are about 250 ms late when they start to contract. For the point farthest from the right ventricle, namely point 8 at the lateral wall, the contraction time is about 600 ms later. Therefore, the LBBB will cause significant contraction delay in the left ventricular lateral wall, could potentially lead to heart failure in the long term. www.nature.com/scientificreports www.nature.com/scientificreports/ Discussion The validity of the GS method is verified by some standard examples of the FitzHugh-Nagumo monodomain models. Based on the GS method, we capture the patterns of heterogeneity and complex connectivity of electrophysiological dynamics in biological tissues by solving the Fitzhugh-Nagumo monodomain model in rectangular and circular regions. The spiral wave obtained by the GS method is the same as that obtained by others, and it is verified that the GS method can effectively solve the monodomain model in the rectangular and circular regions. By comparing with the results in the stationary region, propagation velocity and shape of spiral waves in the moving region change. The propagation velocity in the moving region is higher than the velocity in the stationary region. The width of the spiral wave in the moving region is also slightly larger than the width in the stationary region. Since the GS method permits nonconforming discretization of the transmembrane potential and membrane dynamics, the monodomain model can be directly solved using finite different method in a regular ghost structure. Another advantage of the GS method is in dealing with the moving regions. 
Compared to Liu's method 40 , the GS method needs to use the delta function to transform the Lagrangian and Eulerian variables, and the membrane dynamics need to be solved in a finer Lagrangian grid. In this sense, the GS method will require higher computational resource than Liu's implementation 40 . For the stationary rectangular region, the computational time of the GS method is about 3 hours, since we have not fully optimized the implementation for high performance computing but only employed OpenMP functionality for dealing with "for". It is expected that once we employ MPI and GPU computing, the computational time can be reduced significantly. The AP propagation on the transverse and longitudinal sections of human heart is successfully simulated by the GS method. In a real heart, the AP transmits involves varieties of conduction cells, such as myocyte, sinoatrial node cells, atrioventricular nodal cells, Purkinje fibers, and fibroblasts. The electrical activity of the heart begins www.nature.com/scientificreports www.nature.com/scientificreports/ with the sinoatrial node at the right atrium. Pulses from the sinoatrial node travel through the left and right atrium and meet at the atrioventricular node. From the atrioventricular node, electrical impulses travel along the bundle and are transmitted to the right and left ventricles through the right and left bundle branches. Finally, the bundle at the end of the bundle branch is divided into millions of Purkinje fibers. Nowadays, many researchers have begun to study electrophysiology by including various conduction cells. The sinoatrial node is the normal pacemaker of the mammalian heart. There are a few mathematical models of sinoatrial nodes. For example, based on the Severi-DiFrancesco model of a rabbit sinoatrial node cell and the electrophysiological data from human sinoatrial node cells, Fabbri et al. 62 proposed a comprehensive model of the electrical activity of a human sinoatrial node cell. The AP and CaT obtained by Fabbri were close to experimentally recorded values. In order to illustrate the functional role of various genetic isoforms of ion channels in generating cardiac pace-making AP, Kharche et al. 63 developed a mathematical model for spontaneous AP of mouse sinoatrial node cells. In that model, biophysical properties of membrane ionic currents and intracellular mechanisms were considered. Results showed that their model could reproduce the physiological exceptionally short AP and high pacing rates of mouse sinoatrial node cells effectively. Because of the importance of Purkinje system in both normal ventricular excitation and ventricular arrhythmias, modelling of the Purkinje system is essential for a realistic ventricle model of the heart 61 . Recently, inclusion of Purkinje network in AP modelling has attracted much attention 64 . For example, Oleg et al. 65 developed a detailed model of the canine Purkinje-ventricular junction and varied its heterogeneity parameters to determine www.nature.com/scientificreports www.nature.com/scientificreports/ the relationship between wave conduction velocity, tissue structure, and safety of discontinuous conduction at nonuniform junctions. Oleg found that fast or slow conduction was unsafe, and there existed an optimal velocity that provided the maximum safety factor for conduction through the junction. Vergara et al. 66 developed a model for the electrophysiology in the heart to handle the electrical propagation in the Purkinje system and myocardium. 
Their results illustrated the importance of using physiologically realistic Purkinje-trees for simulating cardiac activation. However, the majority of current anatomical models have not included models of the Purkinje network 61 . Instead, a simplified approach is adopted by applying electrical stimulus in the middle and lower third segments of the septum and endocardium 60 . The same approach is used in this study. This is partially due to the fact that extensive branching of the Purkinje fibers makes modelling Purkinje network extremely challenging as suggested by Tawara 59 , who carried out a formidable study lasting over 2 years to reconstruct the conduction system from experimental data. Since the focus of this study is to develop a numerical method for AP within myocardium, like many other studies 67,68 , we only consider myocytes, and other conduction cells are not simulated. When the bundle branch is injured after myocardial infarction, or cardiac surgery, it may stop transmitting electrical impulses completely. This will result in a change in the path of ventricular depolarization. According to the anatomical location of the defect that leads to the bundle branch block, the block is further divided into the right bundle branch block and the left bundle branch block. Since the electrical pulse can no longer use the preferred path through the bundle branch, it can only spread through muscle fibers, which slows down the electrical propagation and changes the directional propagation of the electrical pulse. Lange et al. 44 www.nature.com/scientificreports www.nature.com/scientificreports/ tendon could compensate for the propagation delay caused by the LBBB. As demonstrated in this study, LBBB leads to delayed triggering of electrical excitation of the left ventricle, resulting in the loss of ventricular electrical synchrony, and potentially causing mechanical discoordination. Conclusion In this study, we have developed a GS method by immerse the actual irregular electrophysiology computational domain into a larger rectangular region. Action potential propagation using the monodomain model is solved successfully with the GS method. In a rectangular and a circular regions, by using the GS method to solve the FitzHugh-Nagumo monodomain model, we capture the patterns of heterogeneity and complex connectivity of electrophysiological dynamics in biological tissues, and demonstrate the validity of the method. Numerical results show that the GS method can effectively simulate the AP propagation in irregular region. Furthermore, we employ the GS method to simulate the transmembrane propagation in moving regions and analyze the influence of moving region on transmembrane propagation. Our results show that the moving regions affect not only the propagation velocity but also the shape of spiral waves. Subsequently, we simulate the AP propagation on the transverse and longitudinal sections of a healthy heart and a heart with LBBB by using the GS method. The numerical results demonstrate how LBBB affects action prorogation in ventricles. Model Introduction Monodomain model. Simulating myocardial electrical activity needs to describe the anisotropic excitation conduction based on the ion channel model of myocardial cells. In general, a bidomain model based on the Poisson equation is used to describe the electrical coupling between myocytes and the electrical conduction cells in the tissue. 
At the microscopic level, myocardium can be seen as consisting of two separate regions separated by the cell membrane: the intracellular space (Ω i ) and the extracellular space (Ω e ). The bidomain model consists of the equations for the intracellular potentials (φ i ) and the extracellular potentials (φ e ), thus the transmembrane potential is V m = φ i − φ e . The governing equations of the bidomain model are www.nature.com/scientificreports www.nature.com/scientificreports/ e l e t are the conductivity tensor. A m is the surface-to-volume ratio, i.e., the amount of membrane found in a given volume of tissue. I m is the transmembrane current density. C m is the membrane capacitance, I ion is the ionic current through a number of different types of ion channels. I s is an imposed stimulation current. Assuming the anisotropy ratios in intracellular and extracellular spaces are the same, let σ e = λσ i , then Eq. (8) will be reduced to Substituting Eq. (10) into Eq. (9), we will obtain the governing equation of the monodomain model, that is systems. They are singularly perturbed systems for model parameters and reasonable initial data 69 . When the diffusion phenomenon is included in the system, the threshold phenomenon ensures the stability of the traveling wave solution. The so-called threshold phenomenon, that is, there is a threshold value of the transmembrane potential V m in the uniform space, for the electrophysiology model Eq. (9) or Eq. (11), when the potential is lower than the threshold value, it quickly returns to the stable state; when the potential is higher than the threshold value, it will produce a large excursion before it returns to the stable state. In cardiac tissue, an initial perturbation with a sufficiently large transmembrane potential V m triggers AP propagation. www.nature.com/scientificreports www.nature.com/scientificreports/ Figure 11 shows a typical AP curve for a human myocyte, which consists of four phases, the depolarization phase (phase 0), the early repolarization phase (phase 1), the plateau phase (phase 2), the repolarization (phase 3), and the resting phase (phase 4). In a cardiac cycle, once a myocyte is excited, it can not be excited again for a period of time, the so-called effective refractory period (ERP). During this period, the depolarization of adjacent cardiomyocytes does not trigger already-excited myocytes. When entering the resting phase (phase 4), myocytes are ready for next excitation. ERP is usually characterized by the interval between the depolarization (phase 0) and repolarization (phase 3) phases. As a protective mechanism, ERP can control the heart rate, prevent arrhythmias and coordinate muscle contractions. The ghost structure method. In this ghost structure method, the transmembrane potential V m is described by an Eulerian form and discretized with a regular Cartesian grid, while the membrane dynamics is described by a Lagrangian form and calculated by the cell membrane model. As shown in Fig. 12, the entire computational domain (the ghost structure region) is represented by Ω, where X = (X 1 , X 2 ) ∈ Ω c is Eulerian (physical) coordinates. The region where myocytes sit is expressed as Ω c , X = (X 1 , X 2 ) ∈ Ω c are fixed Lagrangian (material) coordinates. The mapping χ(X, t) ∈ Ω gives the physical position of each Lagrangian point at time t. Therefore, the physical region occupied by myocardium at time t is denoted as Ω c (t) = χ(X, t), and the region of non-myocardial cells at time t is denoted as Ω non (t) = Ω − Ω c . 
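The reduced equation referred to above does not survive the extraction. For reference, under the equal-anisotropy assumption σ_e = λσ_i the standard monodomain equation reads as below; this is the textbook form, which should agree with the paper's Eq. (11) up to the placement and sign conventions chosen for I_ion and I_s.

```latex
A_m\left( C_m \,\frac{\partial V_m}{\partial t} + I_{\mathrm{ion}} \right)
  \;=\; \nabla \cdot \left( \frac{\lambda}{1+\lambda}\,\sigma_i \,\nabla V_m \right) + I_s .
```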
The Lagrangian and Eulerian variables are transformed by the integral transformation of the delta function. Finally, the full governing equations of the monodomain model in the GS method are www.nature.com/scientificreports www.nature.com/scientificreports/ s s c where V m (x, t) is the Eulerian transmembrane potential, I ion (x, t) is the Eulerian ionic current, and I s (x, t) is the Eulerian stimulation current density. s are the transmembrane potential, ionic current and stimulation current density in Lagrangian form, respectively. y is a Lagrangian vector of ionic fluxes and their corresponding channel gating variables are described by the suitable ordinary differential equations Eq. (14). f represents the right hand side of the ordinary differential equations used to describe ion channels. g represents a nonlinear function that relates the ionic flux to the total ionic current. Eq. (14) and Eq. (15) are cardiac membrane models used to solve the current-voltage relationship. δ(x) is a two-dimensional delta function that transforms the transmembrane potential and ion current between the Eulerian and Lagrangian coordinates. In this study, we calculate ion currents by Eq. (13) and Eq. (14) through the GPB model, and then convert  I t X ( , ) ion into the Eulerian ion current in the ghost structure region. Finally, the Eulerian transmembrane potential V m (x, t) is obtained by solving Eq. (12). Spatial discretization. The regular ghost structure region Ω is discrete by a N 1 × N 2 Cartesian grid with spatial steps ∆ = . The gradient of V m is approximated at the center point of the grid again using the following difference scheme, www.nature.com/scientificreports www.nature.com/scientificreports/ The divergence of σ i ∇V m is defined also at the centre of the grid as The initial value of V m at each Eulerian point in the entire computational domain needs to be approximated based on the transmembrane potential at the Lagrangian point and the integral transformation of the delta function. In order to solve more accurately, the transmembrane potential at the Lagrangian point in Ω c and the delta function are required to update the transmembrane potential at the Eulerian point outside the structure Ω c at regular intervals. The mutual transformation between the Eulerian variable and the Lagrangian variable will be explained in the subsection "Lagrangian-Eulerian interaction". Time discretization. For the ordinary differential equations Eq. (14), the third-order TVD Runge-Kutta method 70 is used to solve the system: n m n n m n m n 1 ( 2) 1 1 (1) 1 (2) For the nonlinear partial differential equation Eq. (12), it can be rewritten as . In this paper, the third-order TVD Runge-Kutta method is also used to solve the nonlinear reaction-diffusion equation Eq. (21) In this paper, nodes of the mesh are regarded as Lagrangian points. The Lagrangian points must be finer than the Cartesian points to avoid leaks 71 . In the calculation process, the integral transformation form of delta function is used to realize the conversion between the Eulerian variable and www.nature.com/scientificreports www.nature.com/scientificreports/ Based on the above delta function, the approximate values of physical quantities at each Eulerian point can be obtained directly by using the values of Lagrangian points around the Eulerian point. 
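Before detailing that interpolation, note that the Runge-Kutta stages quoted above are garbled in this extraction. The third-order TVD (Shu-Osher) scheme cited as reference 70 advances dy/dt = f(y) as in the sketch below, where f stands for either the GPB gating equations (14) or the semi-discrete reaction-diffusion operator of Eq. (21).

```python
def tvd_rk3_step(y, dt, f):
    """One step of the third-order TVD (Shu-Osher) Runge-Kutta scheme."""
    y1 = y + dt * f(y)
    y2 = 0.75 * y + 0.25 * (y1 + dt * f(y1))
    return y / 3.0 + (2.0 / 3.0) * (y2 + dt * f(y2))

# quick check on dy/dt = -y over t in [0, 1]: result ~ exp(-1) = 0.3679
y = 1.0
for _ in range(100):
    y = tvd_rk3_step(y, 0.01, lambda w: -w)
print(y)
```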
In order to obtain the approximate value of physical quantity in Eulerian coordinate system more accurately, we use the Gaussian quadrature rule with quadrature points X Q e , where ∈ Ω X Q e e , and accociated weights ω = ... Q N ( 1, , ) Q e e . In the GS method, the value of each Gaussian integral point is obtained through the basis function of the finite element mesh, and then the approximate value of the physical quantity at the Eulerian point is obtained by using the integral transformation form of the delta function. Taking I ion (x, t) as an example, the current  I ion of the Gaussian integration point in the element is obtained by the value of the Lagrangian point through the element basis function of the finite element method, and the approximate value I ion of the  I ion at the Eulerian point is Similarly, we can use the integral transformation form of the corresponding delta function to obtain the approximate value of V m at each Gaussian integral point by using the transmembrane potential V m at Eulerian points, that is In order to update the ion current at the next time step through the cell membrane model, we need to obtain the transmembrane potential ∼ V m at the Lagrangian point or the grid nodes. The transmembrane potential ∼ V m at grid nodes is obtained by using the approximate value of Gaussian integral point from Eq. 27 and a L 2 projection method 48 . All simulations are performed on a windows workstation with Intel(R) Xeon(R) Gold 5115 (20 cores, 2.40 GHz, 64 GB memory), implemented in C++.
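The regularized delta kernel used for the Lagrangian-to-Eulerian transfer is not written out above. The sketch below assumes Peskin's standard four-point kernel and collapses the finite-element basis functions and Gaussian quadrature weights of (26) into a single weight per Lagrangian node, so it illustrates the spreading operation rather than reproducing the authors' C++ implementation.

```python
import numpy as np

def peskin_delta(r):
    """Peskin's four-point regularized delta (support |r| < 2 grid cells)."""
    r = np.abs(r)
    out = np.zeros_like(r)
    m1 = r < 1.0
    m2 = (r >= 1.0) & (r < 2.0)
    out[m1] = (3.0 - 2.0 * r[m1] + np.sqrt(1.0 + 4.0 * r[m1] - 4.0 * r[m1] ** 2)) / 8.0
    out[m2] = (5.0 - 2.0 * r[m2] - np.sqrt(-7.0 + 12.0 * r[m2] - 4.0 * r[m2] ** 2)) / 8.0
    return out

def spread_to_grid(X_lag, I_lag, h, shape):
    """Spread Lagrangian samples I_lag at positions X_lag (n x 2, physical units)
    onto an Eulerian grid of given shape and spacing h:
    I(x) ~ sum_q I_q * delta_h(x - X_q) * w_q, with w_q ~ h^2 as a toy weight."""
    I_grid = np.zeros(shape)
    xs = np.arange(shape[0]) * h
    ys = np.arange(shape[1]) * h
    for (Xq, Yq), Iq in zip(X_lag, I_lag):
        wx = peskin_delta((xs - Xq) / h) / h     # 1D kernels; delta_h is their product
        wy = peskin_delta((ys - Yq) / h) / h
        I_grid += Iq * np.outer(wx, wy) * h**2
    return I_grid
```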
6,640.2
2019-07-29T00:00:00.000
[ "Engineering" ]
A Database of Calabi-Yau Orientifolds and the Size of D3-Tadpoles

The classification of 4D reflexive polytopes by Kreuzer and Skarke allows for a systematic construction of Calabi-Yau hypersurfaces as fine, regular, star triangulations (FRSTs). Until now, the vastness of this geometric landscape remains largely unexplored. In this paper, we construct Calabi-Yau orientifolds from holomorphic reflection involutions of such hypersurfaces with Hodge numbers $h^{1,1}\leq 12$. In particular, we compute orientifold configurations for all favourable FRSTs for $h^{1,1}\leq 7$, while randomly sampling triangulations for each pair of Hodge numbers up to $h^{1,1}=12$. We find explicit string compactifications on these orientifolded Calabi-Yaus for which the D3-charge contribution coming from O$p$-planes grows linearly with the number of complex structure and K\"ahler moduli. We further consider non-local D7-tadpole cancellation through Whitney branes. We argue that this leads to a significant enhancement of the total D3-tadpole as compared to conventional $\mathrm{SO}(8)$ stacks with $(4+4)$ D7-branes on top of O7-planes. In particular, before turning on worldvolume fluxes, we find that the largest D3-tadpole in this class occurs for Calabi-Yau threefolds with $(h^{1,1}_{+},h^{1,2}_{-})=(11,491)$ with D3-brane charges $|Q_{\text{D3}}|=504$ for the local D7 case and $|Q_{\text{D3}}|=6,664$ for the non-local Whitney branes case, which appears to be large enough to cancel tadpoles and allow fluxes to stabilise all complex structure moduli. Our data is publicly available under http://github.com/AndreasSchachner/CY_Orientifold_database .

To be more concrete, we are working at the level of triangulations and not at the level of geometries. Hence, some triangulations may correspond to the same favourable Calabi-Yau geometry. In [3], the 28,463 triangulation-wise involutions reduced to 5,660 geometry-wise proper involutions, out of which 4,482 are obtained from favourable geometries. In contrast, the CICY orientifolds of [5] are counted as distinct geometries. In this sense, the stated number of ∼ 7.2 · 10^7 should be taken with a grain of salt.

Table 1.
|Q_D3|    Type    D7-tadpole cancellation    h^{1,1}    Reference
≤ 428     KS      non-local                  3          [14]
≤ 72      CICY    local                      ≤ 19       [5]
≤ 264     CICY    non-local                  ≤ 19       [5]
≤ 272     CICY    local                      4          [15]
≤ 60      KS      local                      ≤ 6        [3]
≤ 504     KS      local                      ≤ 12       our database
≤ 6664    KS      non-local                  ≤ 12       our database

Whitney branes, since they are not localised on top of the O7-planes, allow the possibility of substantially enhancing the maximum value of the D3-charge needed to cancel the D3-tadpoles. We argue that constructions with Whitney branes [11,12] significantly surpass estimates for the D3-charge from SO(8) stacks of D7-branes on top of O7-planes. Similar observations have been made in [5] for general orientifolds of CICYs. Our models beat previous records for the total D3-charges obtained in type IIB setups, as exemplified by Table 1. Ultimately, the goal is to combine our investigation with de Sitter constructions, which we will explore in an upcoming paper [13].

This paper is organised as follows. The next section is devoted to introductory material regarding the construction of CY orientifolds in terms of hypersurfaces of 4-dimensional reflexive polytopes. We describe the different types of toric divisors and their topological properties that are relevant for our subsequent discussions.
In section 3 we discuss the orientifold involution and determine the different brane configurations needed to cancel the tadpoles induced by the O3 and O7 orientifold planes. In particular we point out the difference between local D7-branes and nonlocal D7 or Whitney branes and how they contribute differently to the D3 tadpoles. Section 4 describes in detail our database including the corresponding Hodge numbers and D3-brane charges, focusing on the general dependence of the D3 charges on the Hodge numbers and illustrating the maximum number of D3 charges that are relevant for the tadpole problem. First, we present a full scan for orientifold models for h 1,1 ≤ 7. Then we perform a random sampling for geometries with 8 ≤ h 1,1 ≤ 12 and identify the largest values of D3 charges for both local and non-local D7-brane configurations. We describe the model with the largest D3-charge contribution in our database explicitly in section 5. We summarise our conclusions in section 6. In appendix A we provide concrete examples of Whitney branes analysing their factorisation property depending on the topology of the divisors. In a second appendix B, we present a simple example of a CY threefold with genus one fibration. From polytopes to Calabi-Yau hypersurfaces Here we collect some elementary definitions and formulae necessary for constructing CY hypersurfaces from 4-dimensional reflexive polytopes in the Kreuzer-Skarke (KS) database [1], see [16][17][18] for details of the construction. To construct CY threefolds, one begins with two reflexive polytopes ∆ and ∆ • based on two 4D lattices M ∼ = Z 4 and N ∼ = Z 4 with a pairing · , · so that ∆ ∈ M R = M ⊗ R and ∆ • ∈ N R = N ⊗ R satisfy ∆, ∆ • ≥ −1 . (2.1) We associate to the polytope ∆ • a fan Σ in the following way. Reflexivity of ∆ • implies that the origin of N is the unique interior lattice point of ∆ • . We denote all other lattice points of ∆ • by ν i . The latter correspond to primitive generators of the rays of the fan Σ. The cones of Σ are given by a triangulation of ∆ • , i.e., special subsets of the ν i with each containing the generators of a cone. We will focus on so-called fine, regular, star triangulations 4 (FRSTs), whose fan describes a simplicial toric 4-fold denoted P Σ . One can introduce weighted, homogeneous coordinates z i on P Σ . Within P Σ , the CY threefold X is found as the zero locus of a polynomial P = m c m p m , where p m are monomials in z i 's and c m are coefficients related to the complex structure moduli of X. The individual monomials p m appearing in P are encoded by ∆, also called the Newton polytope of the hypersurface. They are easily computed from (see e.g. Eq. (A.8) in [16]) (2.2) Although P Σ does not need to be smooth, every FRST leads to a smooth hypersurface X [19]. We focus exclusively on favourable geometries where h 1,1 (X) = dim(Pic(P Σ )) , (2.3) that is, the Kähler moduli on X descend from those of the ambient space P Σ . Computationally, it is generically expensive to compute all triangulations for a given ∆ • . For sufficiently simple polytopes, that is, those with few lattice points, all triangulations were obtained in [16] up to h 1,1 (X) = 6. Here, only a small subset of the triangulation data was required to define the geometry of X. Specifically, everything happening inside faces of co-dimension one can be ignored. In our scan, we check all favourable geometries for h 1,1 (X) ≤ 7 and provide partial results up to h 1,1 (X) = 12. 
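For illustration, the lattice points of the Newton polytope ∆ in (2.1)-(2.2) can be enumerated by brute force over a bounding box, keeping the points m with ⟨m, ν_i⟩ ≥ −1 for all lattice points ν_i of ∆•; in the standard construction each such m corresponds to the monomial Π_i z_i^{⟨m,ν_i⟩+1}. The sketch below uses the simplex of the quintic in P^4 purely as an example (it is not one of the polytopes scanned in the paper), and dedicated tools such as CYTools handle this far more efficiently:

```python
import itertools
import numpy as np

# Vertices of Delta^* for the quintic hypersurface in P^4 (an illustrative choice,
# not a polytope taken from the scan described in the paper).
dual_vertices = np.array([
    [ 1,  0,  0,  0],
    [ 0,  1,  0,  0],
    [ 0,  0,  1,  0],
    [ 0,  0,  0,  1],
    [-1, -1, -1, -1],
])

def newton_polytope_points(nu, box=5):
    """Lattice points m of Delta, i.e. <m, nu_i> >= -1 for all lattice points nu_i
    of Delta^*; each such m labels a monomial p_m of the CY defining equation."""
    pts = []
    for m in itertools.product(range(-box, box + 1), repeat=4):
        if np.all(nu @ np.array(m) >= -1):
            pts.append(np.array(m))
    return pts

points = newton_polytope_points(dual_vertices)
print(len(points))                        # 126 monomials for the quintic
# Exponent vector of one monomial, using the standard map  z_i -> <m, nu_i> + 1
print(dual_vertices @ points[0] + 1)      # e.g. [0 0 0 0 5], i.e. z_5^5
```

The 126 monomials found here match the familiar count for the quintic (101 complex structure moduli plus the 25 coordinate redefinitions of P^4).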
Toric divisors and their topologies Each weighted, homogeneous coordinate z i of P Σ corresponds to a point on the boundary of ∆ • . The lociD i = {z i = 0} are called prime toric divisors (see e.g. [18] for details). The subset of such divisors which intersect X transversely corresponds to points that lie in faces of ∆ • of dimension ≤ 2. Intersecting such a locus with the CY hypersurface equation, one gets a divisor D i ∈ H 1,1 (X, Z) which defines a 4-cycle in X dual to a 2-cycle ω i . Since we focus exclusively on favourable polytopes and geometries, all such prime toric divisors are irreducible on X. Hence, H 4 (X, Z) is generated by any basis constructed from {D i }, i = 1, . . . , h 1,1 (X) + 4. The Hodge numbers of divisors are collectively denoted as Prototypical examples include del Pezzo divisors dP n , n = 0, . . . , 8 (where dP 0 = P 2 ) and the Hirzebruch surface F 0 = P 1 × P 1 , for which h (1,1) (D dPn ) = n + 1 and h (1,1) (D F 0 ) = 2. These divisors play a special role since they can be shrunk to a point allowing for SM realisations on D3-branes placed at the tip of the singularity [15]. Rigid divisors with h (1,1) (D) > 9 are typically referred to as non-shrinkable. For later purposes, we distinguish other common types of divisors as follows, see also [3] To compute these Hodge numbers, we follow the steps outlined in [17,26], that we now review. 5 As said above, each toric divisor D i ∈ H 1,1 (X, Z) is associated with a lattice point ν i on ∆ • . Its Hodge numbers h 0,p can be obtained from the location of ν i inside ∆ • . In fact, one finds the following [19,29]: where * is the sum of all interior points of the face Θ, which is the dual of the face containing ν i . The above conditions can easily be checked using Sage [30]. The remaining Hodge numbers can then be inferred from the Euler characteristic and the arithmetic genus The RHS can be easily computed from the CY data, by using adjunction formula c 2 (X) = c 2 (D) − c 1 (D) 2 and c 1 (D) = −ι * D for a CY: Another way to computing divisor topologies uses the cohomCalg package [27,28] which is however limited when applied to models with h 1,1 (X) ≥ 6. In particular, the authors of [3] computed the Hodge numbers of divisors up to h 1,1 (X) = 6 in this way. The above can be solved for h 0,0 and h 1,1 as (2.12) Instead of computing Hodge numbers explicitly, it can also be useful to check the sufficient conditions for del Pezzo divisors using their intersection numbers. Indeed, a del Pezzo divisor must satisfy the following topological conditions We moreover look for divisors D s that satisfy the following diagonality condition [31] k sss k sij = k ssi k ssj ∀ i, j . (2.14) If this condition is satisfied, then the volume of the associated 4-cycle D s is a complete-square: where we sum over i, j but not over s. This condition is commonly used in the LVS [32,33] by ensuring that the volume form is of swiss cheese type. Furthermore, it allows to generate del Pezzo singularities by shrinking the divisor to a point along one direction of the Kähler moduli space which is heavily utilised in constructions of branes at singularities, see [15] for a recent discussion and further references. Orientifold configurations We focus on involutions of toric coordinates of the form σ k : z k → −z k for which h 1,1 − = 0 (if the corresponding geometry is favourable, see e.g. [3] for a discussion). For each involution, we obtain configurations of Op-planes given by fixed point loci of the associated involution σ k . 
Tadpole and anomaly cancellation is ensured by adding an appropriate D-brane setup. Orientifold data We consider involutions with O3/O7 orientifold planes. An O7-plane wraps a fixed surface D i in the CY three-fold, while an O3-plane is at an isolated fixed point of the involution. An important topological invariant that we will need later to compute the D3charge contributions is the Euler characteristic (2.9) of a (smooth) divisors D i . As said above, it is given by the integral D i c 2 (D i ) which is computed from the topo-logical data The knowledge of the topology of the fixed point set allows to compute other integers invariants of the CY orientifold. In particular the cohomology groups H p,q (X) split into even and odd subspaces of the (pull-back of the) orientifold involution. Their dimensions are called h p,1 + (X) and h p,1 − (X) respectively. To compute them, we use Lefschetz fixed point theorem which states that in terms of the even/odd Betti numbers b i ± (X). Here, we will have For CY threefolds, the expression (3.3) simplifies to Since we know h 1,1 ± (X) (in cases under study h 1,1 − (X) = 0 and h 1,1 + (X) = h 1,1 (X)), we can use this relation to obtain the Hodge numbers h 1,2 ± (X). We need to solve the equations: Below, we use this data to discard models where the computation of h 1,2 ± (X) lead to non-integer values, as this is a signal of possible unwanted singularities. To detect more subtle singularities which are not manifest in the orientifold data, we look at the underlying polytopes. Let us just reiterate again that we are interested in involutions of a single homogeneous toric coordinate z k → −z k which is associated with one of the boundary lattice points ν k of ∆ • not interior to facets (i.e., 3-faces). Recalling (2.2), the invariant CY equation for z k → −z k is obtained from the monomials where we define We argue that the properties of ∆ k are in one-to-one correspondence with the hypersurface obtained from tuning the CY equation to be invariant under z k → −z k . Removing (the non-invariant) monomials for the CY defining equation can force some singularities: either 1) the hypersurface is forced to touch singularities of the ambient space, or 2) the defining polynomial describes now a singular hypersurface (there are points where the differential of the equation and the equation itself vanish simultaneously). Since we want to work with smooth spaces, we need to discard models where the involution forces singularities. The desired invariant CY X can be obtained from triangulations of the polar dual ∆ • k . Since we are interested in collecting big numbers, we decide to keep in the analysis only invariant CY's corresponding to favorable ∆ • k and reflexive ∆ k . For these CY we can claim smoothness. We checked in several models that the excluded CY's were actually singular. 6 D7-branes In order to cancel the D7-tadpole induced by the O7-planes, we add D7-branes on the appropriate divisors. The D7-charge of an O7-plane wrapping the divisor The easiest way to cancel the D7-tadpole is then to put 4 D7-branes plus their 4 images on top of the O7-plane. The D7-brane configuration is given in this case by z 8 i = 0. The gauge group supported on such a stack is SO (8). The other extreme case is to cancel the D7-tadpole by a fully recombined D7brane in the homology class 8[D i ]. This is called Whitney brane, as it is forced to have a singular worldvolume of the form of the Whitney umbrella [11,12,34]: where z i ∈ O(D i ), η ∈ O(4D i ) and χ ∈ O(6D i ). 
The sections η and χ are invariant under the orientifold involution, while z i → −z i . This brane supports no continuous gauge group and has zero chiral intersection with (fluxless) D7-or E3-branes supported on an intersecting divisor [12]. For a generic toric divisor with high weights, the locus (3.9) is connected. However, there can be particular cases when the generic sections η, χ of the line bundles Then the equation of the configuration will be If this happens, we recover a stack of D7-branes on z j = 0 plus a Whitney brane of lower degree in the homology class 8[D i ] − 2m[D j ] (see e.g. [14]). A particular important example of a factorisation like (3.10) appears when D i is a rigid divisor. In this case η ∝ z 4 i and χ ∝ z 6 i and we are left with a configuration, whose locus is z 8 i = 0, i.e. we have four D7-branes plus their four images on top of the O7-plane, generating an SO(8) D7-brane stack. Also special non-rigid divisors can lead to a factorisation of the Whitney brane. In fact, whenever D i is a K3 surface the Whitney brane splits into a U(1) 4 configuration with four D7-branes plus their four images. In our analysis we found that this kind of factorization often happens for divisors with h 2,0 = 1. We provide two explicit examples in App. A. D-brane worldvolume flux Let us assume that the orientifold model contains stacks of E3/D7-branes wrapped on a divisors D. We can then turn on a gauge flux where F 2 is the field strength of the worldvolume U(1) gauge theory, B 2 is the NSNS 2-form potential and ι * : H 2 (X) → H 2 (D) is the pull-back map on D. Freed-Witten anomaly cancellation [35] requires the following quantization condition on F 2 : where c 1 (D) = −ι * D for X a CY. This implies that the following expression for F fulfills this condition: and with {D k } an integral basis of H 2 (X, Z). If D is wrapped by an O(1) ED3-instanton, then the orientifold invariance of the configuration requires F ED3 = 0 . Rank-2 E3 instantons with a non-trivial gauge bundle can also be allowed by a B 2 that does not fulfill (3.16) [36]. Let us come to the Whitney brane (3.9) in the homology class 8[D i ]. The Whitney brane can support an integral flux, that as we will see contribute to the D3charge. When this flux is present, the defining polynomial χ is forced to take the form [11]. The flux data is encoded in the choice of the line bundles and i.e. in the choice of the integral two-form F . One gets a zero flux when one of these line bundles is trivial. Notice that this can be achieved when D i + 2B is an even form (remember that B can take half-integral values), that may not happen. When one of the line bundles in (3.17) has no holomorphic sections, then either ρ or τ is forced to vanish. In this case, the Whitney brane locus factorizes as i.e. it splits into a pair of one D7-brane and its orientifold image, both in the homology class [4D i ]. Hence, in order for the Whitney brane to be non-factorised one requires that the line bundles (3.17) have holomorphic sections, i.e. Even when the condition (3.19) holds, one can set ρ = τ = 0 by a deformation of the Whitney brane. Correspondingly, the Whitney branes splits as in (3.18). The U(1) D7-brane has then a flux F = ι * (F − B 2 ), where F is the same two-form appearing in (3.17). When this happens, the D7-brane can have chiral intersections with some E3-instantons. 
This will be counted by the formula (3.20) A non-perturbative instanton contribution to the 4D superpotential requires the absence of chiral modes (for non-chiral modes, see footnote 21 in [15]) at the intersection of D7-branes and ED3-instantons. This generally limits the flux allowed on the D7-branes. A U(1) D7-brane with fluxes supports a generically non-zero FI-term: This term, if non-zero, requires a non-vanishing VEV for scalar modes at the intersection between D7 and its image in order to cancel the D-term potential. 7 If F satisfies (3.19), this corresponds to deforming the branes switching on non-zero ρ and τ ; this recombines the two branes into a Whitney brane. If the condition (3.19) is not fulfilled, then only ρ or τ can be non-zero and we generate a T-brane background, i.e. the two branes form a bound state whose locus is still (3.18) [37]. The D3-tadpole We now compute the induced D3-charge from the orientifold configuration. Generally, the D3-tadpole cancellation condition reads For O7/O3-planes, we collect whereas a U(1) D7-brane contributes as For later convenience, we refer to Q tot SO (8) and Q tot WD7 as the D3-charge contribution coming from rigid D7-branes and Whitney branes respectively. The first one is easy to compute: when all the four D7 branes have the same flux F, then the group is broken to SU(4) (the diagonal U(1) get a Stückelberg mass due to Green-Schwartz mechanism) and the contribution of the stack to the D3-tadpole is For the Whitney brane the situation is a bit different. The expression of its total D3-charge can be derived in a simple way [12]: the D3-charge does not change under recombination or splitting of branes; hence we can compute it in the phase where the Whitney brane splits into a U(1) brane and its image. Hence, for a Whitney brane in the class 8D i with F given in (3.19). The geometric contribution of the Whitney brane is different from the geometric contribution of the brane/image-brane system [11]. In fact, the D3-charge contributions from geometry and from the flux encoded into the line bundles (3.17) are [11,38] One can easily check that the sum of the two gives (3.27) and that Q i W D7,flux is identically zero when the line bundles (3.17) are trivial. If D i +2B 2 is an even integral form, one can actually take zero flux and make Q i W D7,flux vanish. 8 Generically this is not possible, but it is always possible to choose F such that Q i W D7,flux Q i W D7,geom . This will justify, in our analysis, to approximate the D3-charge of a Whitney brane by its geometric contribution. Before we continue, let us make a few estimates on the D3-charge contributions. Let us start from where we used (3.4) and (3.5) (with h 1,1 − (X) = 0) and where N O3 is the number of isolated fixed points in X. We conclude that there are two possibilities of increasing this value by investigating models with either many Kähler or instead many complex structure moduli. We are going to observe this scaling with respect to h 1,2 − quite frequently below for orientifolds with h 1,2 + = 0, see in particular Fig. 3. Now, assume we have O7-planes on divisors D i and that we cancel their D7charge by putting 4 D7-branes plus their 4 images in each D i (producing a bunch of SO(8) stacks). This is the choice that minimize the D3-charge contribution from O7/D7's. Let us now consider several CY's X and involutions and let us estimate what is the maximum that we can get for the D3-charge for this minimal configuration, where we cancel the D7-tadpole locally (i.e. 
with only SO(8) stacks). One may use (3.26) and write (in the absence of worldvolume fluxes) to arrive at [5,8,10,11] where in the last step we used Q tot O3 = − N O3 2 and the fact that In the KS database, this implies −Q D3 ≤ 504 for e.g. CYs with Hodge numbers (h 1,1 , h 1,2 ) = (11, 491) which we discuss further below. D3-tadpole in F-theory A perturbative type IIB orientifold compactification can always be described in Ftheory language. The F-theory compactification manifold is a CY fourfold that is an elliptic fibration over the base space B 3 = X/σ, that is the quotient of X by the orientifold involution. If the involution allows to cancel all the D7-tadpoles by Whitney branes, this corresponds to a smooth CY fourfold in F-theory. Splitting the Whitney branes in type IIB, producing a non-trivial gauge group G, corresponds to deforming the fourfold generating codimension-3 (abelian G) or codimension-2 (nonabelian G) singularities. If the fixed point locus includes a rigid divisor, then the D7-branes on that divisor support an SO(8) gauge group that cannot be deformed; this corresponds to a so called non-Higgsable cluster in the F-theory fourfold [39,40], i.e. in this case a non-deformable D 4 singularity. The D3-tadpole cancellation condition in F-theory takes the form: 9 is the Euler characteristic of the fourfold. When the fourfold is singular, this formula still applies, provided one uses the resolved fourfold [11,38,41]. However, the geometric contribution to the tadpole decreases as one makes a deformation from a smooth to a singular fourfold (with some gauge group and matter). This is consistent with what one observes in type IIB: splitting the Whitney brane, the D3 contribution decreases (in absolute value) [11]. The large D3-charges that are usually mentioned in literature as coming from Ftheory backgrounds, correspond typically to smooth fourfold (with no gauge group or matter). These large D3-charges can be reached in type IIB by canceling the D7-tadpole by means of Whitney branes. Orientifold database In this section, we generate a database of CY orientifolds models based on the general information summarised in Sect. 3. An essential tool in this context is the CYTools package [2] which allows us to construct FRSTs from polytopes at arbitrary h 1,1 . Beyond that, we implemented a basic algorithm to construct CY orientifolds from the polytope and triangulation data from reflection involutions. We test this implementation up to h 1,1 = 12. As an application of our database, we investigate the size of D3-charge contributions. An algorithm for finding orientifold configurations For each CY X and each choice of involution, we determine the fixed point set in the following way. 1. We first find the CY equation that is symmetric under the chosen involution, by determining the set of invariant monomials under σ k (keeping only those in (2.2) involving even powers of z k ). 2. We determine loci of points of the toric ambient fourfold P Σ that are fixed under σ k : in practice, we consider the action on the coordinates z j of σ k and σ k · ζ a , with ζ a the C * toric equivalences, and taking into account the SR ideal. 3. We check whether the invariant CY equation vanishes at a given locus. If no, a complex co-dimension n locus in P Σ determines the presence of an Om-plane with m = 3 + 2(3 − n). If yes, a co-dimension n locus corresponds to an Om-plane with m = 3 + 2(4 − n). 
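Step 3 of this procedure is just a dimension count. A minimal sketch of the classification rule (the function name and interface are ours, for illustration only; the database code itself is not reproduced here):

```python
def oplane_type(codim_in_ambient, cy_vanishes_on_locus):
    """Classify the O-plane supported on a fixed locus of complex co-dimension n
    inside the toric ambient fourfold P_Sigma, following step 3 of the scan:
      - if the invariant CY equation does not vanish identically on the locus,
        the O-plane has m = 3 + 2*(3 - n);
      - if it does vanish, m = 3 + 2*(4 - n).
    Returns m for an Om-plane, or None if the count gives no consistent O-plane."""
    n = codim_in_ambient
    m = 3 + 2 * ((4 - n) if cy_vanishes_on_locus else (3 - n))
    return m if m in (3, 5, 7, 9) else None

# Examples: a fixed divisor not contained in X gives an O7-plane,
# an isolated fixed point lying on X gives an O3-plane.
print(oplane_type(1, False))  # 7
print(oplane_type(4, True))   # 3
print(oplane_type(3, False))  # 3 (a fixed curve meeting X in points)
```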
We consider involutions that generate O3-and O7-planes, so in our scan there are no O5/O9-planes which can however arise for exchange involutions [3]. The number of O3-planes is determined from the intersection numbers either in the CY for co-dimension 3 or in the ambient fourfold for co-dimension 4 fixed point loci. The latter can be obtained from CYTools where we take special care of singularities in the ambient space. A similar algorithm to determine the O-plane configurations in the context of exchange involutions was introduced in [3]. In this sense, our work provides a complementary analysis for the geometries with h 1,1 ≤ 6, while providing additional The database we produce consists of two sets of data: 1. We compute all FRSTs of all favourable polytopes at h 1,1 ≤ 7. For each toric coordinate z k , we construct the orientifold configuration associated with the involution z k → −z k . This data is summarised in Tab. 2. 2. For each combination of Hodge numbers (h 1,1 , h 1,2 ) up to h 1,1 = 12, we generate up to 20 random FRSTs of ≤ 20 favourable polytopes. Again, we build orientifolds for involutions of each toric coordinate z k → −z k . The results are given in Tab. 3. The full data are collected in a GitHub repository which can be found here: https: //github.com/AndreasSchachner/CY_Orientifold_database. As we said, for each triangulation, we analyse each involution z k → −z k and determine the fixed point set. In Table 2 and in Table 3 we report the numbers of independent 12 involutions. Some of these involutions lead to singularities in the CY threefold. As explained at the end of Section 3.1, we can detect the singular models. We refer to models that do not present manifest pathologies as smooth involutions. We finally report the number of models that contain only O7-planes and those that contain at least two O3 planes that can collide by a complex structure deformation of the threefold. Models in both classes will be suitable for T-brane de Sitter uplift, while models in the last class are needed in order to implement de Sitter uplift by an anti-D3-brane at the tip of a highly warped throat realising the scenario outlined in [42][43][44]. As observed in [15], there is a trend that del Pezzo divisors dP n with 1 ≤ n ≤ 5 embedded into CY threefolds obtained from the KS database never satisfy the diagonality condition (2.14), cf. Tab. 4. Our analysis extends the conjecture of [15] to all FRSTs at h 1,1 = 6, 7. Hodge and Euler numbers of toric divisors In this section, we investigate the divisor data of CY threefolds with h 1,1 ≤ 6. We computed the Hodge numbers of prime toric divisors via the methods described in Sect. 2.2 which is largely consistent with the data presented in [3]. We compare the D3-charge contribution of SO(8) stacks (local D7-tadpole cancellation) with that of Whitney branes (non-local D7-tadpole cancellation). We argue that there is an enhancement of about a factor of 5 between local and non-local D7-tadpole cancellation. Recalling (3.2), it is clear that divisors with h (0,1) (D) = 0 lead to the largest Euler characteristic. That is, it seems to be profitable to have O7-planes and D7-branes wrapping divisors with Hodge numbers Clearly, maximising χ(D) is beneficial from the perspective of the tadpole (3.22). 
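Assuming (3.2) takes the standard form for the Euler characteristic of a smooth surface in terms of its Hodge numbers, χ(D) = 2 − 4h^{0,1}(D) + 2h^{0,2}(D) + h^{1,1}(D), the preference for divisors with h^{0,1}(D) = 0 and large h^{0,2}(D), h^{1,1}(D) is immediate. A quick numerical check (the last set of Hodge numbers is invented purely for illustration):

```python
def chi_divisor(h01, h02, h11):
    """Euler characteristic of a smooth surface D from its Hodge numbers,
    chi = b0 - b1 + b2 - b3 + b4 = 2 - 4*h01 + 2*h02 + h11."""
    return 2 - 4 * h01 + 2 * h02 + h11

# Rigid del Pezzo dP_n: h01 = h02 = 0, h11 = n + 1  ->  chi = n + 3
print(chi_divisor(0, 0, 9))     # dP_8: 11
# K3 surface: h01 = 0, h02 = 1, h11 = 20  ->  chi = 24
print(chi_divisor(0, 1, 20))    # 24
# A deformation divisor with h01 = 0 and large h02, h11 dominates the tadpole
print(chi_divisor(0, 50, 100))  # 202
```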
Using the results computed in our database for h 1,1 ≤ 6, we compute the Euler characteristic for every divisor finding that the maximal value is 13 A complete overview of the distribution of both Hodge numbers as well as Euler characteristics of (prime toric) divisors appearing in the KS database up to h 1,1 = 6 is shown in Fig. 1 and Fig. 2 respectively. In Fig. 2, we show the distribution of Euler numbers for different types of divisors. Clearly, non-rigid divisors result in the largest χ(D) with the maximum given by In Fig. 1, we present a correlation map for Hodge and Euler numbers of divisors. The correlations between χ(D) and the corresponding Hodge numbers is clear from (3.2). In our data, there are no significant correlations between any of the variables shown on the bottom right of Fig. 1 with the number of Kähler moduli h 1,1 (X) of the CY X nor with h 0,0 (D) which is why we omitted the later two. Interestingly, we observe that there is an anti-correlation between h 0,1 (D) with h 0,2 (D) and h 1,1 (D), while at the same time h 0,2 (D) and h 1,1 (D) are strongly correlated. This implies that there is an obvious trend where larger h 1,1 (D) implies large h 0,2 (D) plus small h 0,1 (D) and, hence, larger χ(D). We observe similar correlations of χ(D) and χ(4D) with h 1,2 (X) which will be confirmed further below in Fig. 3. D3-charge in the database and non-local D7-tadpole cancellation In this section, we will study how the D3-tadpole contribution from localised sources varies in the dataset we are taking into account. We begin by considering only the D3-charge coming from the O-planes. We present an overview of their D3-charge contribution in Fig. 3. We ignored models with positive Q tot Op . The colouring indicates the value of h 1,2 + where we clearly see the trend expected from (3.30): non-vanishing h 1,2 + decreases the absolute value of the D3-charge contribution. The models on the diagonal line have h 1,2 + = 0 and follow the expected scaling ∼ −h 1,2 − /3 as derived in (3.30). Let us introduce the D7-branes. We analyse the situation in the absence of gauge flux on the D7-branes. 15 For each model (derived from a choice of CY X and involution), we try to cancel the D7-tadpole generated by the O7-planes by a D7-brane configuration that maximizes their (absolute value of the) contribution to the D3-charge. For each O7-plane that we find we then work out the topology of the wrapped divisor. If we have O7-planes on rigid or Wilson divisors, we cancel the D7tadpole by a SO(8) stacks. For O7-planes on deformation divisors with h 0,2 (D) > 1, we cancel the D7-tadpole non-locally through Whitney branes, see App. A for details. Finally, whenever h 0,2 (D) = 1, we construct (3.9) explicitly to check for eventual factorisation; if no factorization is forced, we add a Whitney brane. For each h 1,1 (X), we pick the model (X and involution) whose localised sources contribute most to the total D3-charge. In Tab. 4, we report the absolute value of the total D3-charge from these localised sources for two cases: 1) the D7-tadpole is canceled by putting 4 + 4 D7-branes on top of all the O7-planes (local D7-tadpole cancellation) and 2) we put Whitney branes on all non-rigid O7-plane divisors (nonlocal D7-tadpole cancellation). Let us stress the difference between local and non-local D7-tadpole cancellation. 
If we were to simply add (4+4) D7-branes on top of each of the D7-branes to cancel the D7-tadpole locally, this would amount to 16 which leads to the conservative estimate local D7-tadpole cancellation: |Q D3 | ≤ 504 , (4.5) 15 Freed-Witten anomaly cancellation may force some flux to be non-zero; however one can always choose a flux that minimise its contribution to the D3-charge; in this situation our results are good approximations for the total D3-charge coming from localised sources. 16 We ignore the contribution from O3-planes here. For models with the minimal Q tot Op on the diagonal in Fig. 3, there are actually no O3-planes which justifies the bound given in (4.5). Complete scan at h 1,1 (X) ≤ 6 Random data at 7 ≤ h 1,1 (X) ≤ 12 as one can check in Tab. 4. This is precisely the upper bound obtained in (3.31) for (h 1,1 , h 1,2 ) = (11,491). In Fig. 4, we show that the D3-tadpole is significantly enhanced by considering more generic brane configurations, as we argued in Sect. 3.4. The results in Tab. 4 show that in this case the total D3-charge extraordinarily exceeds the bound (4.5). In particular, as we argue below in Sect. 5, using instead Whitney branes, the total D3-charge is increased by about a factor of 13, obtaining the following bound on localised sources D3-charge: change the total Q D3 , both decreasing it (it is the case of a supersymmetric flux, including the flux on the Whitney brane) and increasing it (it is the case of a flux generating a non-zero FI-terms inducing e.g. a T-brane background [45]). These fluxes typically do not change the order of magnitude of our estimations. However they must be taken into consideration in explicit models when computing D3-tadpole cancellation. Large D3-charge and genus-one fibrations An interesting observation concerns the behaviour of the D3-charge distribution at large h 1,2 . While one discovers no particular structure at small h 1,2 < 100, the regime at large h 1,2 > 100 exhibits, instead of a uniform distribution, two distinct dominant lines. We believe that this emergent structure in the distribution of D3-charges has not yet been observed in the literature. A hint for what is going on is obtained from previous investigations into the underlying fibration structure of toric CY threefolds at large h 1,2 , see [49][50][51][52] and references therein. It is in fact true that CY threefolds in the KS database at sufficiently large Hodge numbers (h 1,2 larger than 240) are associated with elliptic fibrations over complex base surfaces [49]. At the level of 4D reflexive polytopes ∆ • , it is quite straight forward to identify the corresponding fibrations. Namely, whenever ∆ • contains a 2D reflexive sub-polytope, the associated CY manifold enjoys a genus one fibration [53]. 18 This is indeed a quite common feature: out of the 473.8 million polytopes listed in [1], only 29,223 do not contain any such 2D reflexive polytope [52]. 19 There are only 16 distinct types of genus one fibrations F i which can be easily identified from the classification of 2D reflexive polytopes. 20 At least at large Hodge numbers, the KS database is dominated by polytopes exhibiting a description of a standard F 10 fibration [49] (the elliptic fiber is a hypersurface in P[2, 3, 1]) which therefore also plays a distinguished role in our analysis. Utilising the algorithm of [52], we computed the 2D reflexive sub-polytopes and the fibration type for each of the favourable 4D polytopes appearing in our analysis, checking that the presence of F 10 is dominant. 
We computed the D3-charge distribution for the different types of fibres. In Fig. 5 we report that the generic elliptic fibre 18 We stress that there are some subtleties occurring when relating the fibration of the polytopes to the actual toric variety, see [52] for a detailed discussion. 19 In our analysis, we encounter 2,857 (60) of these polytopes in the complete (random) data at h 1,1 ≤ 7 (7 ≤ h 1,1 ≤ 12). 20 A classification of the 16 distinct polytopes is provided in Appendix A of [51] which were previously studied in [54] and play a role in F-theory [50,[55][56][57]. [49]. Not surprisingly, it is responsible for the universal structure observed in Fig. 3 independently of h 1,1 (X). In the regime h 1,2 < 200, similar sub-dominant patterns are found also for elliptic F 6 and F 8 as well as non-elliptic F 4 (the fiber is an hypersurface in P 2 [2, 1, 1]) fibrations. All other fibrations as well as the polytopes without any fibration seem not to experience any enhancement in their D3-charge contribution (i.e. they are mostly constant as functions of h 1,2 ) nor are they showing any particularly interesting patterns. Let us try to explain what happens for the F 10 case. The CY equation takes the Weierstrass form, i.e., y 2 = x 3 + f (w)xz 4 + g(w)z 6 . (4.9) Here, w denotes collectively coordinates on the toric two-dimensional base B, whereas x, y, z are projective coordinates on P[2, 3, 1] with x and y being sections respectively ofK ⊗2 B andK ⊗2 B . For consistency of the equation, f and g must be sections respectively ofK ⊗4 B andK ⊗6 B At fixed w, the equation (4.9) describes a torus. The Z 2 involution of the torus (with four fixed points) is implemented in this algebraic setup by taking y → −y (or equivalently z → −z). The Weierstrass form is already invariant. Hence, if one takes (4.9) as the defining equation for the CY three-fold, one has the involution that inverts y. This toric coordinate is manifestly of high degree (and among the coordinates of this threefold, y is the highest degree one) and correspondingly the Euler characteristic of D y is large. This is the main reason why we find the largest D3-charges for these models. In studying the F 10 case, we realise another fact: one may add to (4.9) also a term proportional to x 2 z 2 and then consider the involution x → −x. x is also high degree and the D3-charge one would obtain from such an involution is still large, z 1 z 2 z 3 z 4 z 5 z 6 z 7 z 8 z 9 z 10 z 11 z 12 z 13 z 14 z 15 even if lower than the one obtained by y → −y. However, there is a pathology: the invariant CY equation would be that has a manifest (non crepantly resolvable) singularity at z = y = 0. Since xyz is the SR-ideal, the D7/O7's do not touch the singularity and their topological invariants do not feel the pathology. However, we excluded it from our analysis as ∆ k is not reflexive because the monomial x 3 is associated with a vertex in the full dual polytope ∆. If we had included such models, we would have obtained a second diagonal line in our plots of models with large D3-charge. 5 Example with (h 1,1 , h 1,2 ) = (11,491) To be more specific, let us describe in more detail the model with the potentially largest D3-tadpole reported in Tab 4. It turns out that this model is obtained from an involution of a CY threefold X with Euler characteristic χ(X) = −960 and Hodge numbers (h 1,1 , h 1,2 ) = (11, 491). The GLSM charges of X are collected in Tab. 
6; the SR ideal is given by I SR = {z 1 z 2 , z 3 z 6 , z 3 z 7 , z 3 z 8 , z 3 z 9 , z 3 z 10 , z 3 z 11 , z 3 z 12 , z 3 z 13 , z 3 z 14 , z 4 z 6 , z 4 z 7 , z 4 z 8 , z 4 z 9 , z 4 z 10 , z 4 z 11 , z 4 z 12 , z 4 z 14 , z 5 z 6 , z 5 z 7 , z 5 z 8 , z 5 z 9 , z 5 z 10 , z 5 z 12 , z 6 z 8 , z 6 z 10 , z 6 z 12 , z 6 z 13 , z 6 z 14 , z 6 z 15 , z 7 z 10 , z 7 z 12 , z 7 z 13 , z 7 z 14 , z 7 z 15 , z 8 z 12 , z 8 z 13 , z 8 z 14 , z 8 z 15 , z 9 z 12 , z 9 z 14 , z 9 z 15 , z 10 z 14 , z 10 z 15 , z 11 z 15 , z 12 z 15 , z 4 z 5 z 15 , z 5 z 13 z 14 , z 5 z 13 z 15 , z 7 z 9 z 11 , z 8 z 9 z 11 , z 9 z 10 z 11 , z 10 z 11 z 13 , z 11 z 12 z 13 , z 11 z 13 z 14 } . Finally, the Hodge numbers of the divisors can be computed to be: Related to the discussion above, one finds that this CY exhibits an F 10 fibration with coordinates z 4 , z 5 , z 15 = x, y, z over the Hirzebruch surface F 12 as can be seen from the last line in the GLSM charge matrix in Tab. 6. 21 Our analysis shows that the allowed values of the D3-charge from Op-planes are 8 ≤ |Q tot Op | ≤ 168. The maximally allowed D3-charge from O7-planes is actually obtained from (recall (3.30) and that all other D i>5 are dP 1 divisors) χ(D 5 ) + χ(D 15 ) + 4 · χ(dP 1 ) = 2(h 1,2 + h 1,1 + 2) = 1, 008 . rigid and then support an SO(8) stack. 22 The D7-tadpole from the D 5 divisors will instead be canceled by a Whitney brane. We choose a B-field that allow to have zero flux on each D7-brane: Since the divisors D 6,8,12,13,15 do not intersect each other, the pull-back of the B-field on the divisor D i is equal to ι * D i B 2 = D i 2 and then it cancels the non-integral flux that is necessary for Freed-Witten anomaly cancellation, leading to F i = 0. As regarding the Whitney brane, we need to check that there exists an integral 2-form F that cancels either 3 2 (3.17). This happens, because D 5 + B 2 is an even form, as it can be checked rom the GLSM weights in Table 6. Taking vanishing fluxes on each D7-brane, the D3-charge contribution is only geometrical. The SO (8) 8, 12, 13, 15 , (5.8) while the main contribution to the D3-charge comes from the Whitney brane, whose geometric contribution (3.28) is where The total D3-charge contribution from localised sources is then as reported in Table 4. To stabilise all the moduli via non-perturbative effects, it would be favourable to have instantons on the other rigid divisors. Since the B-field (5.7) does not allow to have vanishing fluxes F E3 on any of these divisors we cannot have O(1) instantons. On the other side, rank-2 instantons might be allowed [36] provided that one checks that no chiral modes arise at the intersection with the SO(8) stacks. This model is of course not suitable for anti-D3 uplift since there are no O3-planes, but in principle we could engineer a T-brane background that allows for de Sitter minima [45]. Conclusions In this paper, we generated a database of CY orientifolds from holomorphic reflection involutions of CY hypersurfaces. We determined the orientifold configurations for all favourable FRSTs for h 1,1 ≤ 7. We found more than 70 million involutions of which over 20 million correspond to smooth compactifications. Singular involutions were identified and their structure deserves further investigation. We also specified the number of cases with either O3 or O7 planes suitable for antibrane or T-brane uplifts. We plotted several relevant quantities such as the Euler number and Hodge numbers of the divisors and the value of the D3 brane charges. 
We observed some interesting patterns in the distribution of the models. In particular the values of the D3 charges show non-trivial structures, such as higher concentration of models in some particular directions, that would be interesting to understand from the more mathematical perspective. Our algorithm is in principle capable of computing orientifolds for any h 1,1 . We provided partial results for triangulations up to h 1,1 = 12. We found several classes of models with different behaviour in their D3-charge and O-plane configuration. Most importantly, we provided evidence for a large class of models for which the D3-charge from Op-planes grows ∼ −(h 1,1 + + h 1,2 − )/3, i.e., linearly with the number of invariant geometric moduli. This constitutes an upper bound on the absolute value of the total D3-charge from D7/O7's and O3's. We further showed that cancelling the D7-tadpole non-locally via Whitney branes as opposed to locally via SO(8) stacks on top of O7-planes increases the overall D3-charge by up to factors of 12. We presented an explicit orientifold with Hodge numbers (h 1,1 , h 1,2 ) = (11, 491) = (h 1,1 + , h 1,2 − ) which led to a total D3-charge of |Q D3 | = 6, 664. This value beats previous D3-charge records in type IIB by a large margin (recall Tab. 1). It provides the necessary space to turn on background fluxes which in turn are relevant for stabilising moduli and model building. Beyond that, our database contains a plethora of other models, 357, 730 to be precise, with |Q D3 | > 504 making it an excellent starting point for the construction of trustable string vacua. An explicit calculation of moduli stabilisation for these vacua is beyond the scope of this paper. An important result of this paper concerns the non-trivial D3-charge distribution as a function of h 1,2 . We provided evidence based on the existence of 2D reflexive sub-polytopes that this is mainly a result of special genus one fibrations of the associated CY threefolds, especially elliptic F 10 (hypersurface in P[2, 3, 1]) and non-elliptic F 4 (hypersurface in P[1, 1, 2]) fibrations. The patterns observed in Fig. 3 are directly linked to reflecting either coordinates along fibre or the base. Further, we put forward an argument for F 10 fibrations that involutions involving coordinates along the fibre generically maximise the bound on the D3-charge. It would be interesting to further explore the role of genus one fibrations in the context of N = 1 compactifications of type IIB to 4 dimensions. In the future, it is desirable to extend the database in the regime h 1,1 ≥ 12. Recent works [2,18,59] demonstrated that triangulations of polytopes with large h 1,1 can be constructed efficiently. However, exhaustive scans or random sampling might be impractical which is why a more targeted approach by employing optimisation methods would be favourable as previously applied in the search for string vacua [60][61][62][63][64][65][66][67]. In the same spirit, it would also be exciting to relate our database to the one of CICYs [5] and combine it with the one for divisor exchange involutions [3]. For instance, as compared to [3], we have not glued together the Kähler cones of equivalent triangulations. Similar to [5], a large fraction of the orientifolds contained in the database are singular which can in special cases like the conifold be resolved as discussed in [68] for the CICY landscape. Such resolutions might lead to new CY threefolds that are not contained in the KS database. 
We have three rigid divisors D 1 , D 2 , D 4 with D 1 a dP 8 and D 2 a dP 7 , one SD1 divisor giving an SO(8) stack. In contrast, we have generic polynomials for the non-rigid divisors D 5 , D 6 , D 7 and hence proper Whitney brane configurations. The more interesting scenario concerns the SD1 divisor D 3 = {z 3 = 0}. Looking at the GLSM charges in Tab. 7, the degrees for z 3 are given by (1, 0, 1) which implies that z 3 = 0 can be modified only through combinations of z 1 and z 4 with weights (0, 0, 1) and (1, 0, 0) respectively. This is because all other coordinates z i , i = 1, 3, 4, have degrees ( * , 1, * ). Thus, we may equivalently write which is the only possible deformation of D 3 and hence h 0,2 (D 3 ) = 1. The Whitney brane is a representative of the class 8[D 3 ] with degrees (8,0,8). A generic element of this class is of the form where P 8 is a homogeneous polynomial of degree 8 in two variables. Clearly, the equation P 8 (X, Y ) = 0 admits precisely 8 zeros which allows us to write it as where we also imposed that our representative is an invariant locus under the involution z 3 → −z 3 . This generic factorisation is valid for all invariant representatives of 8[D 3 ], hence also for the Whitney brane in this class. The equation (A.7) tells us that the Whitney brane corresponding to the divisor D 3 is forced to factorise into 4 pairs of brane/image-brane, that need not necessarily be parallel, i.e., they can in principle intersect. 23 Notice that the above argument would fail if there was an additional coordinate z 0 with degrees (2, 0, 1) for which e.g. the class 2[D 3 ] is represented by The additional monomial z 0 z 1 spoils the factorisation of the branes discussed above. We see no reason for why such situations should not be realised in the KS database. Indeed, the next section provides an explicit example with a divisor with h 0,2 = 1 that looks topologically like a K3 divisor, but whose Whitney brane does not factorise. A.2 Example with a divisor with h p,q = h p,q (K3) We consider the model (POLYID: 57, TRIANGN: 3 in [16]) with weight matrix in Tab. 8 and SR ideal We find that the Hodge numbers for the toric divisors are given by We have two rigid divisors D 1 , D 2 with D 2 a dP 7 , one Wilson divisor D 7 , one SD2 divisor D 4 and two additional non-rigid (deformation) divisors D 5 , D 6 . The last divisor D 3 looks topologically like a K3 surface. Below we argue why it is not actually the case. For the rigid divisors D 1 , D 2 and the Wilson divisor D 7 , we have SO (8) stacks. For the non-rigid divisors D 4 , D 5 , D 6 , we have generic polynomials and hence proper Whitney brane configurations. For the would-be K3 divisor D 3 , a closer inspection of the weight system in Tab. 8 shows that the equation z 3 = 0 can be deformed such that This is a non-homogeneous polynomial in the three coordinates z 3 , z 2 1 z 7 and z 1 z 4 z 7 . In particular, it does not factorise which suggests that we obtain a fully recombined D7-brane in the class 8[D 3 ]. 2. Let us now consider the involution z 4 → −z 4 : the fiber is inariant under it only when it degenerates to z 2 5 = z 2 6 (az 2 4 + bz 4 6 ) (B.10) Unfortunately this singularity is inherited by the CY. Ignoring such a singularity, one may conclude that there is an O7-plane wrapping D 4 , that does not touch the singularity because of the SR ideal.
12,441.8
2022-04-27T00:00:00.000
[ "Mathematics" ]
Investigating Changes in the Morphological Structure of High-Temperature, Calendar-Aged Li-Ion Cells A physics-based impedance response model of a graphite | LiNMC cell is developed to investigate the effect of high-temperature calendar aging on electrode microstructure. Experimental data indicate unique contributions to aging from each electrode. The model relaxes the constraints placed by the Bruggeman relation, instead using a free parameter to relate the porosity and tortuosity. The incorporation of a two-layered cathode particle and adjustment of the Bruggeman relation produce mid-and low-frequency responses that quantitatively agree with the experimental data. The significance of the model modifications toward measuring the effects of high-temperature aging is discussed. Electrochemical impedance spectroscopy (EIS) is a fast, in-situ, and non-destructive technique for characterizing aging in lithium-ion batteries (LiBs). The impedance is expressed as a ratio of the voltage response to a current perturbation and is typically obtained over a range of frequencies. Impedance data are represented in formats to provide insight into governing physical processes and allow for a separation of time constants for multiple physical processes based on the frequency of measurement. However, several processes with similar time constants exist in cells which complicate the interpretation of impedance spectra. Interpretation of impedance measurements are most commonly resolved by equivalent circuit analogs. [1][2][3][4] These analogs require assumption of the physics of the processes involved, potentially leading to multiple interpretations during data fitting. Resistors, capacitors, and artificial circuit elements are useful for evaluating trends in the impedance response of the system. Often, the circuit elements are sufficient in fitting the data. However, there remains difficulty in expressing the chosen circuit elements as measureable physical parameters for a quantitative assessment. Physics-based impedance models offer a more fundamental method to interpret experimental impedance data. These physicsbased electrode models are convenient in relating the measured impedance response to fundamental physical parameters and can be easily modified to capture non-ideal impedance responses. In some cases, the models are used for parametric studies, to distinguish the overlapping physical processes that influence the impedance response. In other cases, the models are utilized to fit experimental impedance spectra, to elucidate the causes of cell degradation. Two of the most frequently referenced models were developed by Doyle et al. and Meyers et al. 5,6 Doyle et al. used a porous electrode model to describe variation in kinetic and transport parameters for a lithium polymer chemistry. The individual responses of each electrode were found to be linked to the physical parameters chosen to describe lithium kinetics and mass transport. They concluded that solid-phase lithium diffusivities are measurable from the low-frequency response. However, the interpretation of the low-frequency portion of impedance spectra can be complicated by overlapping time constants from electrode processes. Because of the overlapping processes, the solid-phase diffusivity requires impedance data at low enough frequencies such that that process is the dominating impedance at the measured frequencies. Meyers et al. 
developed an entirely analytical porous electrode model that incorporated a particle size distribution and a surface film. The model ignored electrolyte-phase concentration gradients. An important conclusion from their work was that contributions from particle size distributions influence the curvature of the low-frequency impedance response. Both Doyle et al. and Meyers et al. assumed the existence of a steady-state solution, around which the system was perturbed. By allowing for a small perturbation in each variable, the solution exhibited linearity. The linear system of equations was transformed from the time domain to the frequency domain, leaving a set of ordinary differential equations, from which the impedance was readily calculated. This approach to obtaining the impedance response has been replicated by many authors. An extension of these two works has allowed several groups to focus on investigating the low-frequency impedance response by developing physics-based models. Devan et al. presented the analytical solution of the impedance response of a single porous electrode with linear kinetics and electrolyte-phase concentration gradients present. 7 The analytical expression indicated that electrolyte-phase concentration gradients manifest at low frequencies. More recently, Gambhire et al. developed a model that incorporated particles that underwent phase transitions. 8 They found that phase transitions caused changes to the low-frequency impedance response, creating a finite length Warburg impedance element at low frequencies. This unique feature was determined useful in estimating the battery state of charge. Chen et al. generated an artificial porous electrode with various active material morphologies and constructed microstructures. 9 The generated microstructures were then defined in terms of their physical properties and parameterized as impedance model inputs. They found that the coupled effect of particle shape and electrode microstructure caused additional curvature in the impedance response at mid and low frequencies. Most qualitative analyses have found multiple processes influence the mid-and low-frequency impedance response of the cell, which supports the need for a physics-based fitting approach. Several investigators have attempted to validate physical models using experimental impedance spectra, 4,10 with some microstructural analysis of either positive or negative electrode to support the proposed model. 11,12 For example, Dees et al. investigated the postcycling impedance response of an LiNi 0.80 Co 0.15 Al 0.05 O 2 (LiNCA) positive electrode. 13 Microscopic imaging of cycled secondary particles indicated the presence of a core-shell particle structure. The shell was primarily of nickel that formed during cycling. The features of the simulated impedance responses better agreed with the experimental impedance measurements after incorporating a core-shell structure for the positive electrode particles. Abraham et al. developed a model to obtain physical parameters for a LiNCA cathode. 14 They fit lithium diffusion coefficients in the solid phase and kinetic exchange-current densities at states of charge between 3 and 4.7 V vs. Li/Li + . The trends in the fitted diffusion coefficients were qualitatively supported by changes in electrode microstructure structure observed by in-situ X-ray diffraction measurements. To obtain improved experimental data fits, they introduced a tri-modal particle size distribution. 
The tri-model distribution improved low-frequency agreement of the simulated response with experiment. Zavalis et al. investigated both calendar aging and cycling in LiFePO 4 | mesocarbon microbead (MCMB) pouch cells at 22 • C. 15 Cycling was found to cause a significant loss in capacity and increase in impedance, whereas calendar aging effects were less extensive. Scanning electron microscopy (SEM) micrographs of cycled electrodes indicated significant structural changes in the LiFePO 4 electrode causing a lessening of the electrode porosity. Parameter estimates obtained by fitting the impedance responses supported a changing porosity hypothesis, consistent with SEM observations. In this work, we attempt to validate a physics-based model based on the work of Fuller et al. for a graphite|LiNi 0.33 Mn 0.33 Co 0.34 O 2 (LiNMC) cell. 16 Our objective is to gain an understanding of the effect of structural changes in the electrodes due to high-temperature calendar aging on the mid-and low-frequency impedance response. The model includes multiple particles in each electrode, a two-layered positive electrode particle, and uses an additional free parameter to describe electrolyte transport. The model is validated using experimental EIS data and post-mortem micrographs. A discussion on the impact of calendar aging on the impedance spectra is presented. Experimental Calendar aging experiments were conducted using commercial electrodes. The negative electrode used was graphite. The positive electrode used was LiNMC. The electrolyte is 1:1 (wt) EC:DEC with 1 M LiPF 6 salt. The separator material was a Celgard 2325 PE/PP membrane. Electrodes of 1.6 cm 2 in area were punched out and used to assemble coin cells. Half-cell and full-cell coin cells were assembled from the graphite and LiNMC punched electrodes. Lithium for the half-cells was obtained from MTI. The lithium metal electrodes were gently brushed before coin cell assembly to remove any unwanted debris. First, an open-circuit voltage obtained at 25 • C temperature for the cell chemistry was measured from the fresh cells. The open-circuitvoltage curve vs. state of lithiation (SOL) of the cell was obtained by applying a C/25 discharge current for 1 hour starting at 4.2 V (100% state of charge). The cell potential was allowed to relax to the equilibrium potential for two hours after the constant current discharge. The cell potential was then recorded. The graphite open-circuit potential vs. SOL was assumed from the literature. The LiNMC electrode open-circuit potential vs. SOL curve was obtained from the difference of the two curves. The negative to positive electrode capacity ratio was estimated to be approximately 1.15. Second, five coin cells were calendar aged. The coin cells were housed in an environmental chamber at 75 • C. The cell voltage was held at the open-circuit voltage obtained at 25 • C corresponding to 90% SOC for 3200 hours. After aging, the cells were then discharged to the voltage corresponding to 50% SOC at a C/25 discharge current. The cell potential was held for 3 hours at 50% SOC. Then, the cells were removed from the environmental chamber and soaked at room temperature (ca. 25 • C) overnight. Following soaking, the impedance response of the cells was measured. EIS was performed using a MetroOhm Autolab Potentiostat. A 5 mV RMS perturbation was applied around the open-circuit potential over a frequency range of 1 MHz to 5 mHz, for 50 total frequencies logarithmically spaced. 
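For reference, the measurement grid described above is simply a set of 50 logarithmically spaced frequencies; a trivial sketch (not the instrument software) that reproduces it:

```python
import numpy as np

# 50 logarithmically spaced frequencies from 1 MHz down to 5 mHz, as in the EIS protocol
frequencies = np.logspace(np.log10(1e6), np.log10(5e-3), 50)   # Hz
omega = 2 * np.pi * frequencies                                 # rad/s, used in Z(j*omega)
print(frequencies[0], frequencies[-1])                          # 1e6 ... 5e-3
```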
Impedance spectra of fresh cells were obtained at the start of the experiment, at 0 hours of calendar aging. Impedance spectra were measured periodically, with a final measurement obtained at 3200 hours of calendar aging. Following the impedance measurements, the coin cells were disassembled in the glove box and the electrodes removed. The electrodes were taken for transmission electron microscopy (TEM). Micrographs of the LiNMC cathode samples were obtained with a Tecnai G2 F30 (FEI, Netherlands) operated at an accelerating voltage of 300 kV. Prior to imaging, the electrodes were submerged in dimethyl carbonate (DMC) in the glove box. The electrodes were then sonicated to separate NMC particles from the current collector surface. NMC powders, suspended in DMC, were drop cast onto a TEM grid. The samples were exposed to air for a short period of time for transfer to the sample chamber. Figure 1 is a generic illustration of a full-cell sandwich. The porous electrodes consist of secondary active insertion-material particles, binder, and additives to improve electronic conductivity. An electronically inert separator is contained between the two porous electrodes. The secondary particles are composed of smaller primary particles in each electrode. These particles are spherical and can be enveloped in a surface film. Current collectors close the ends of the cell. Model Formulation Solid-phase variables are represented by subscript 1, while electrolyte-phase variables are given subscript 2. Negative electrode variables are denoted by the subscript - and positive electrode variables by +. The position in the cell is given by the independent variable x. The negative electrode is of thickness L − , the separator of thickness L sep , and the positive electrode of thickness L + . The total cell thickness is given as L cell . The current collectors (CC) are represented as ±CC. The radius of the spherical particles in the positive and negative electrodes is given by r p,± . In the next sections, the model is described. First, the governing equations for the cell are given in the time domain. The equations are then transformed into the frequency domain, for which the impedance response is calculated. Time domain representation.-Intercalation particle equations.-Charge transport across a surface film, double-layer charging, lithium charge-transfer kinetics, and solid-phase diffusion of lithium are considered in the single-particle equations. At the film-electrolyte interface, the current density leaving the electrolyte, i tot , passes into the film, i tot,± = (k film,± / L film,± ) (ϕ film,± − ϕ 2,± ). [1] In this formulation, k film is the film conductivity, L film is the film thickness, ϕ film is the potential at the solid-film interface, and ϕ 2 is the potential at the film-electrolyte interface measured in the electrolyte phase. The current density that passes through the film resistor reaches the surface of the particle. At this interface, the current can pass by double-layer charging or by an electron-transfer reaction, i tot,± = i dl,± + i int,± , [2] where i dl is the current density due to charging the double layer, and i int is the intercalation current density. The current associated with double-layer charging is given as i dl,± = C dl,± d(ϕ 1,± − ϕ film,± )/dt, [3] where C dl is the double-layer capacitance of the particle and ϕ 1 is the solid potential, which is set to zero at the graphite current collector.
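To make the interfacial balance above concrete, the following sketch evaluates Equations 1-3 in the time domain. It is a minimal illustration: the numerical values are placeholders, and the explicit forms of the double-layer and total current terms are written to match the frequency-domain Equations 16-17 given below.

```python
def film_current(k_film, L_film, phi_film, phi_2):
    """Equation 1: current density across the surface film,
    i_tot = (k_film / L_film) * (phi_film - phi_2)."""
    return (k_film / L_film) * (phi_film - phi_2)

def interfacial_currents(C_dl, d_phi_dt, i_int):
    """Split of the interfacial current (Equations 2-3): i_tot = i_dl + i_int,
    with the double-layer term i_dl = C_dl * d(phi_1 - phi_film)/dt
    (cf. the frequency-domain forms, Equations 16-17)."""
    i_dl = C_dl * d_phi_dt
    return i_dl + i_int, i_dl

# Placeholder values, for illustration only.
i_tot = film_current(k_film=1.0e-6, L_film=3.0e-9, phi_film=0.010, phi_2=0.0)   # A/m^2
i_tot_split, i_dl = interfacial_currents(C_dl=0.2, d_phi_dt=1.0e-3, i_int=0.5)  # A/m^2
```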
The faradaic reaction rate is governed by the potential drop across the particle-film interface, In the expression, i 0 is the exchange-current density, α a and α c are charge-transfer coefficients, F is Faraday's constant, R is 8.314 J-mol −1 -K −1 , T is temperature, and U is the equilibrium potential. c ± | r =R± is the concentration of lithium at the surface of the intercalation particle. The equilibrium potential is a function of the state of lithiation, x, of the respective electrodes The positive electrode open-circuit potential function is measured in this work. The particles into which lithium intercalates are described as spherical in shape, with a constant diffusivity, D. The governing equation for the spherical particles is given as c is the concentration of lithium in the spherical particles, and r is the radial coordinate. Porous electrode equations.-Porous electrode transport is described charge conservation equations. Concentration gradients in the electrolyte phase are ignored in this formulation. Conservation of charge for each porous electrode is given as where i 2 is the current density in the electrolyte phase, and a a is the specific interfacial surface area of the electrode assuming a spherical particle packing. Similarly, in the solid phase The electrolyte-phase current density is given by Ohm's Law, k 2 is the conductivity of lithium in the electrolyte phase. Electronic current flows in the solid phase and is given by Ohm's Law, where σ 1 is the electronic conductivity in the solid phase and ε 1 is the fraction of solid particle. The total current density, I , across the porous electrode is conserved between the solid and electrolyte phases Frequency domain representation.-The impedance response is obtained by solving the governing system of equations in the time domain for the full cell. The model equations are transformed from the time domain to the frequency domain using the technique described by Meyers et al. 6 In this formulation, system variables, X, are rewritten as the summation of steady-state variables and perturbed variables X =X + Re X e jωt . [13] whereX is the steady-state response,X is a complex phasor, containing both magnitude and phase angle information, j is the imaginary unit, and ω is the angular frequency. Allowing only for small amplitude perturbations, implying a linear response, the function for which a variable is evaluated is expressed as where the first derivative is evaluated around the condition of steady state. Intercalation particle impedance equations.-Transformation of the particle Equations 1-4, into the frequency domain gives [15] i tot,± =ĩ dl,± +ĩ int,± , [16] i dl,± = jωC dl,± φ 1,± −φ film,± , [17] i int, Diffusion into graphite electrode particles.-The negative electrode particles are treated as single-layered particles. Equation 7 is rewritten for the graphite particles to give For solid diffusion of spherical particles, the flux across the pore-wall interface is given asĩ At the center of the spherical particles, the no flux boundary condition applies The analytical solution of the solid phase concentration of lithium in the spherical particles is obtained by solving Equations 19-21 leads toc wherec − is evaluated at the surface of the particle for the graphite electrode and γ = ω [23] Figure 2. Two-layer LiNMC particle illustration. The two-layer particle structure is the result of manganese dissolution. The core radius is R c . The total particle radius is R + . 
The diffusivities through the core and shell layer are D +c and D +s respectively. Diffusion into LiNMC particles.- Figure 2 is an illustration of the two-layer structure of the spherical LiNMC electrode particles. The core is represented as layer '+c' and shell layer '+s'. The core-shell structure is constructed from manganese dissolution from the solid particle into the electrolyte. Therefore, diffusion of lithium into a core-shell structure is applicable for particles that have degraded due to aging effects. The shell is on the order of nanometers in thickness. To note, the open-circuit voltage functions in both the core and shell layer are assumed identical, despite the differences in composition of the two phases. Without this assumption, it is impossible to obtain an estimate of thickness and diffusivity of this layer. Equation [29] Equations 24-25 are solved using the boundary conditions expressed in Equations 26-29 to give the analytical solution for the concentration profile through the shell of the particlẽ . [30] D+s , and K c = jω D+c . The dimensionless impedance, Z one , is defined as . [31] In the limiting case that R c = R + and D +c = D +s , Equation 31 reduces to Equation 23. Combining the linearized Butler-Volmer equation for spherical particles, 18, and the dimensionless impedance Equation 23 gives The spherical-particle impedance is given as where R ct,± = [35] Rearranging Equation 35 leads tõ [36] Addition of Equation 15 and Equation 36 gives the combined interfacial current density [37] Finally, the overall particle impedance is given as [38] Overall cell impedance equations.-The impedance of the porous electrode is obtained by combining the particle impedance with the equations that govern transport in the porous electrode. Equations 8-12 can be rewritten in the frequency domains as [40] To obtain an analytical solution for the impedance response across the porous electrode, a differential equation expressing the potential differences across the electrodes and separator is required. First, an equation for the potential difference at any position in the electrodes is obtained. Subtracting Equation 41 from Equation 42 and then substituting Equations 39, 40, 43 gives i tot,± is obtained from Equation 37. For each electrode, boundary conditions are required to solve Equation 44. At the current collectors of the cell, the current density flows through the solid matrix, At separator-electrode interfaces, the current flows in the electrolyte phase,ĩ 1,− x=L− = 0,ĩ 1,+ x=L−+Lsep = 0. [46] For the negative electrode, solving Equation 44 using the boundary conditions of Equations 45, 46 gives [48] Evaluation of Equation 47 at the negative electrode current collector defines the potential difference between the solid and electrolyte phases,φ 1,− | x=0 −φ 2,− | x=0 . Second, the electrolyte potential is obtained by solving the differential equation obtained combining Equations 39, 41, 43. This equation can then be evaluated at the current collector and electrodeseparator interfaces and subtracted from Equation 47. The potential difference across the electrode is theñ [49] Rearranging Equation 49 gives the impedance of the negative electrode, Z p,− , as, [50] Similarly, the impedance of the positive electrode, Z p,+ , is [51] where The impedance of the separator obtained by the integration of Equations 39, 41. 
The result is given as [52] The contribution to the impedance response from the current collectors, Z cc , is given as [53] where R cc is the resistance of the current collectors and C cc is the capacitance of the current collectors. The impedance response of the cell, Z cell , is then expressed as the sum of Equations 50-53. [54] All capacitors in the frequency domain are redefined as constant phase elements to better represent the capacitive behavior observed in real battery systems. That is, each capacitive admittance jωC is replaced by Q(jω)^n CPE , [55] where n CPE is the constant phase element exponent and Q is analogous to the capacitance. Multiple particle sizes.-Two spherical particle sizes are used in each electrode. The particles are micron sized and submicron sized, represented by the subscripts large and small. The total solid volume fraction, ε 1,tot± , is the sum of the small and large particle volume fractions, ε 1,tot± = ε small,± + ε large,± . [56] The total overall particle impedance, Z over,tot± , is a combination of the impedances of the large and small particles, [57] where the total surface area, a a,tot , is the sum of the large and small particle surface areas. The total interfacial surface area per unit volume, a a,tot± , is the sum of the contributions of the small and large particles, a a,tot± = ε small,± a small,± + ε large,± a large,± . [58] Parameter fitting.-The model equations are solved in MATLAB. Parameter estimation is done by minimizing the difference between the experimental and simulated impedance spectra using the fminsearch algorithm in MATLAB. The total error between the two measurements is given as [59] where Err is the summed difference between the experimental and simulated impedance responses. The counter i is used to denote the frequency of measurement. The subscripts sim, exp, Re, and Im represent the simulated impedance response, the experimental impedance response, the real component of the impedance, and the imaginary component of the impedance, respectively. Results and Discussion There are several methods that can be used to determine electrode-electrolyte parameters and perform model validation. The approach taken in this work was to use fresh and degraded half-cell and full-cell EIS measurements for model validation. As a supporting measurement, TEM micrographs were used to estimate the positive electrode surface film thickness, shell thickness, and particle radius. In Table I, model parameters for the graphite|LiNMC cells are given. The parameter values are assumed, estimated, measured, or taken from the literature. The exact composition of the commercial graphite and LiNMC materials is unknown. Therefore, select parameters are not measurable quantities and require assumption. Positive electrode particle parameters.-Experimentally obtained TEM micrographs are shown for fresh and degraded cathode particles in Figure 3a and Figure 3b, respectively. 20 The crystalline structure of the particles is evident from both micrographs. A surface film of a few nanometers is visible in Figure 3b. A film on the particle surface is consistent with electrolyte decomposition. The process is likely accelerated by cation dissolution from the host structure, a known degradation mode of manganese-containing electrode materials. 19 A consequence of film formation on the cathode particles is a reduction in electrode porosity and an increase in electrolyte mass-transport resistance. Similarly, growth of the solid-electrolyte interphase on graphite leads to an increase in anode electrolyte mass-transport resistance.
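Returning to the parameter-fitting procedure described above: the sketch below reproduces the same idea in Python using a Nelder-Mead simplex search (the algorithm behind MATLAB's fminsearch). The impedance function is a deliberately simplified stand-in (ohmic resistance, a charge-transfer/CPE arc, and a bounded diffusion tail) rather than the paper's full porous-electrode model, and the squared-residual form of the objective is an assumed reading of Equation 59.

```python
import numpy as np
from scipy.optimize import minimize

omega = 2.0 * np.pi * np.logspace(6, -2.3, 50)            # rad/s, ~1 MHz to ~5 mHz

def model_impedance(p, omega):
    """Simplified stand-in for the cell model: R_ohm + [CPE double layer in
    parallel with (R_ct + reflective finite Warburg)]. Illustration of the
    fitting machinery only, not the full porous-electrode model."""
    R_ohm, R_ct, Q_dl, n_cpe, R_d, tau_d = p
    z_cpe = 1.0 / (Q_dl * (1j * omega) ** n_cpe)
    s = np.sqrt(1j * omega * tau_d)
    z_diff = R_d / (s * np.tanh(s))                        # blocking-boundary diffusion
    return R_ohm + 1.0 / (1.0 / z_cpe + 1.0 / (R_ct + z_diff))

def err(p, omega, z_exp):
    """Assumed least-squares reading of Equation 59: summed squared real and
    imaginary residuals over the measured frequencies."""
    z = model_impedance(p, omega)
    return np.sum((z.real - z_exp.real) ** 2 + (z.imag - z_exp.imag) ** 2)

# Synthetic "experimental" spectrum for demonstration only.
z_exp = model_impedance([0.02, 0.05, 0.5, 0.90, 0.08, 50.0], omega)
fit = minimize(err, x0=[0.01, 0.03, 0.3, 0.85, 0.05, 30.0], args=(omega, z_exp),
               method="Nelder-Mead", options={"maxiter": 20000, "fatol": 1e-12})
print(fit.x)
```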
Dees et al. observed the formation of a surface oxide layer on the outer edges of LiNi 0.80 Co 0.15 Al 0.05 O 2 particle by high-resolution TEM. 21 The thickness of the surface layer in their work was found to be on the order of five nanometers. There appears to be a difference between the bulk of the particle and the outer edge of the particle of Figure 3, although it is difficult to identify a sharp interface. Under the assumption that a multi-layer particle exists, a small fraction of the positive electrode particle would then have a different composition than the core of the particle. To address this possibility, the physical model that is used to investigate the impedance response assumes a core-shell structure for the positive electrode particles. The LiNMC kinetic response is found at higher frequencies than the graphite response. The graphite electrode has a wider kinetic arc width than the LiNMC electrode over the same frequency range. The degraded LiNMC response contains an additional arc at approximately 5.6 Hz. Figure 4, fresh and degraded half-cell impedance responses at 50% SOC are shown. Generally, the impedance responses obtained for each cell contain similar features. The spectra in Figure 4 have responses that include high frequency peaks with some inductance (ca. >100 kHz). The inductive processes pull the impedance responses below the x-axis and tend to be an artifact of the experimental peripherals. For clarity, this high-frequency arc is cutoff at approximately 100 kHz. As the frequency lowers (<100 kHz), a capacitive peak is observed in the spectra. Solid electrolyte interphase (SEI), otherwise referred to as surface film, has been speculated to be a contributing source to the capacitive response observed at high frequencies. 1,22,23 Although the possibility cannot be eliminated, the cells have not been exposed to high-temperature aging and a significant SEI contribution is not expected. More likely, the bending in the imaginary component of the impedance response is a result of the sum of the charge-transfer reaction at the lithium counter electrode and double-layer charging of the current collectors. The bending in the full-cell response is less visible as compared to the half cells but still present. The full cells do not contain lithium counter electrodes. Therefore, the high-frequency capacitive loop in the full-cell response is only a result of the contact resistance double-layer charging across the electrode current collectors, electrolyte interface. Both the graphite and LiNMC kinetic responses are visible in the mid-frequency range (10 Hz-1 kHz). The kinetic arcs are the result of the lithium redox reaction occurring at the surfaces of each electrode. Both responses have similar characteristic frequencies, which can cause the appearance of one large, overlapping peak in the full cell spectrum. A comparison of Figure 4a and Figure 4b in the midfrequency range illustrates that the graphite and LiNMC electrode responses have unique characteristic frequencies. The LiNMC kinetic response is visible at slightly higher frequencies and is smaller in arc width than the graphite electrode kinetic response. Kinetic, mass transport, and physical parameters.-In In the lower frequency range of the response (<10 Hz), electrolytephase and solid-phase transport dominate the spectra. Inflection points between 0.1 Hz-3 Hz imply the separation of the kinetic and mass-transport processes in the system. 
Additionally, the degraded Li|LiNMC includes a new arc at approximately 5.6 Hz. The new arc is partially produced in the model by defining an additional masstransport regime across multilayer, positive electrode particles. In Figure 5, impedance spectra of fresh cells at 50% SOC are given with simulation results overlaid on the experimental measurements. The impedances of the cells are repeatable with an identical feature set and similar magnitudes. A clear separation of characteristic frequencies for the kinetic and mass-transport processes is visible for the cell between approximately 10 Hz and 1 Hz. The kinetic arc is depressed compared to an ideal RC process and includes a high frequency shoulder (ca. 10 kHz). The mass transport tail has an inflection point (ca. 10 mHz), which suggests two partially overlapping processes. The model includes processes that match each feature of the impedance spectra and can produce a very close fit. Using this physics-based impedance model, each feature of the impedance spectra can be interpreted. The high frequency response capacitive loop is visible due to the current-collector double-layer charging and contact resistance. In Equation 53, R cc and Q cc are lumped values that account for the resistance and modified capacitance of the current collectors. The magnitudes of the extra resistor and modified capacitor are constrained so that their characteristic frequencies are greater than 10 kHz. The arc is slightly flattened in some cases. The magnitude of n CPE adjusts the flatness of the arc that is formed from this process. The mid-frequency arc is a result of the charge-transfer reactions at both electrodes. The sum of both electrode responses is captured in the single arc. The magnitude of the exchange-current densities, double-layer capacitances, and Bruggeman exponent alter the height and width of the kinetic arc. The exchange current densities and capacitances are fitted based on the observed order of imaginary impedance responses measured of the half-cells in Figure 4. A common value assigned to the Bruggeman exponent is 1.5. In this work, the Brugge-man exponent is a fitted parameter with magnitude greater or equal to 1.5. Setting the exponent as a fitted parameter improves the fit significantly. Increasing the magnitude of the Bruggeman exponent flattens and widens the kinetic arcs of both electrodes. Effectively, the current perturbation traverses across the thickness of the electrode as frequency decreases. Along the current path, the perturbation signal encounters more and more electrolyte resistance and double-layer capacitance. The additional electrolyte resistance and capacitance is analogous to a transmission line, causing the appearance of 45 • sloped imaginary vs. real in the high-frequency portion of the response. The electrolyte conductivity, the porosity, and Bruggeman exponent are the parameters that determine the magnitude of the electrolyte resistance. A larger Bruggeman exponent means that more electrolyte resistance is encountered, leading to more flattening of the kinetic arc. The low frequency response is the sum of processes that occur on long time scales. These processes include, but are not limited to, contributions from concentration impedances in the electrolyte and solidphase mass-transport processes. In this work, we ignore electrolyte concentration gradients in our mathematical representation. Therefore, the model presented in this work only reproduces solid-phase mass-transport impedances. 
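A short numerical illustration of the Bruggeman scaling discussed above: the effective electrolyte conductivity across the porous electrode is reduced as kappa_eff = kappa * eps^beta, so a larger fitted exponent translates directly into more distributed electrolyte resistance and a flatter, wider kinetic arc. The numbers below are illustrative only, not the fitted values of this work.

```python
def effective_conductivity(kappa_bulk, porosity, brugg):
    """Bruggeman-type correction: kappa_eff = kappa_bulk * porosity**brugg.
    Larger exponents imply a more tortuous path and a lower effective
    conductivity, i.e. more electrolyte resistance across the electrode."""
    return kappa_bulk * porosity ** brugg

kappa = 1.0          # bulk electrolyte conductivity, S/m (illustrative)
eps_2 = 0.3          # electrolyte volume fraction (illustrative)
for brugg in (1.5, 3.0, 5.0):
    print(brugg, effective_conductivity(kappa, eps_2, brugg))
# 1.5 -> ~0.16 S/m, 3.0 -> ~0.027 S/m, 5.0 -> ~0.0024 S/m
```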
Particle size distributions are known to occur in lithium-ion battery porous electrodes. 19 The fits shown use two particle sizes. The small particles have short diffusional tails and primarily behave as capacitors at low frequencies. The sum of the responses of these capacitive small particles and the large particles increases the slope of the low-frequency response and improves agreement. In Figure 5c, simulated fresh graphite and LiNMC individual electrode responses for cell 5 are given. The ratio of the two electrode impedance responses is comparable to the half-cell measurements in Figure 4a. The characteristic frequency of the LiNMC electrode is greater than that of the graphite electrode. Additionally, the LiNMC arc width is less than the graphite arc width, which is consistent with the measurements. From the fresh-cell simulations, we conclude that the model accurately reproduces the measured characteristic frequencies of the kinetic responses. In Figure 6, impedance spectra of degraded cells at 50% SOC are given with simulation results overlaid on the experimental measurements. The simulated impedance response reproduces the prominent features of the measured response. As with the fresh-cell responses, the current-collector charging process is used to fit the high-frequency arc that is formed. There is a slight increase in the high-frequency real impedance of the data, but it is hardly noticeable when comparing Figure 6b to Figure 5b. The increase is consistent with film formation in the negative and positive electrodes, which are treated as ohmic resistors. However, the small increase indicates that the contribution of the film to the high-frequency impedance is not significant. The kinetic arc is now wider than in Figure 5b. This arc appears at a slightly lower frequency in the aged vs. fresh full-cell response, which is consistent with a higher kinetic resistance. As in the case of the fresh-cell response, the Bruggeman relation is critical to fitting the data in the aged-cell response. An additional time constant appears after degradation in the mid-frequency region of the response (1-15 Hz), similar to the half-cell positive electrode response. The core-shell spherical particle model captures part of this portion of the response. The shell portion of the response is located close to 1 Hz and creates an open-boundary finite Warburg element, due to the fixed boundary conditions that are applied. The core portion of the response is a reflective-boundary finite Warburg element. The remaining portion of the third arc is a sum of the contributions of the second particle size in each electrode. At the low-frequency end of the spectrum (<1 Hz), the sum of the impedances from the solid-phase diffusion processes in both electrodes dominates the response. The shape of the aged-cell response is similar to the fresh-cell data provided in Figure 5c. The lengths of the tails are slightly different. The difference can be a result of a slight change in the SOL of each electrode, which would alter the magnitude of the open-circuit potential of each electrode. Such a change is possible due to a self-discharge current developing in the cell during the calendar aging period. Another possible cause of the lengthening of the low-frequency tail is increased concentration impedance in the electrolyte due to the degradation. Any changes in the length of the low-frequency diffusional impedance in this model are due to the inclusion of the core-shell cathode particle and the ratio of large to small particles in the electrodes.
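The shell and core contributions described above behave like the two standard finite-length Warburg elements. A minimal sketch of their textbook forms is given below, with tau = L^2/D; the paper's Equations 22-31 are the spherical, core-shell analogues, so these expressions illustrate only the limiting behaviour (a transmissive element bending toward the real axis, a reflective element turning capacitive), not the exact model.

```python
import numpy as np

def warburg_transmissive(omega, R_d, tau):
    """Finite-length Warburg with a transmissive (fixed-concentration) far
    boundary: Z = R_d * tanh(sqrt(j*omega*tau)) / sqrt(j*omega*tau).
    Bends toward the real axis at low frequency (finite dc resistance)."""
    s = np.sqrt(1j * omega * tau)
    return R_d * np.tanh(s) / s

def warburg_reflective(omega, R_d, tau):
    """Finite-length Warburg with a reflective (blocking) far boundary:
    Z = R_d * coth(sqrt(j*omega*tau)) / sqrt(j*omega*tau).
    Turns capacitive (vertical on a Nyquist plot) at low frequency."""
    s = np.sqrt(1j * omega * tau)
    return R_d / (s * np.tanh(s))

omega = 2.0 * np.pi * np.logspace(2, -4, 200)                # rad/s
z_shell = warburg_transmissive(omega, R_d=0.05, tau=0.2)     # thin shell, short tau
z_core = warburg_reflective(omega, R_d=0.10, tau=200.0)      # core, long tau
z_series = z_shell + z_core                                  # shell and core in series
```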
In Figure 6c, simulated degraded graphite and LiNMC individual electrode responses for cell 5 are given. Similar to the fresh electrode responses, the ratio of the two electrode impedance responses is comparable to Figure 4b. The characteristic frequency for kinetic LiNMC electrode is greater than the graphite electrode. The LiNMC arc width is less than the graphite arc width, which is also consistent with the half-cell measurements. Additionally, the third new arc that appears in the LiNMC degraded half-cell spectra is reproduced in the simulation with the inclusion of the core-shell particle diffusion representation. Identification of degradation modes.-There are several degradation modes that manifest themselves in the impedance spectra. Estimates of the contribution of each of these modes is obtained following fitting of the impedance spectra. Each of the two particles in each of the two electrodes (4 particles in total) is allowed to exhibit a unique capacitance and kinetic exchange-current density. Table II shows the average of the fitted parameters for the five fresh and degraded cells. Electrode kinetics and Bruggeman exponent.-The characteristic frequencies of the kinetic arcs are defined by the large and small particle double-layer CPEs and exchange-current densities. The arc widths are determined by the magnitude of the exchange-current densities and Bruggeman exponents. The exchange-current densities decrease as the electrode decomposes due to the prolonged high-temperature exposure. Aging increases the Bruggeman exponent values for both electrodes. The increase in the exponent is consistent with film formation in both the negative and positive electrodes due to electrolyte decomposition. The films in both electrodes fill the pores of the active material, decreasing the effective conductivity of the electrolyte across the electrode. The positive electrode film is measured only a few nanometers in thickness but is able to clog the electrolytecontaining pores. A potential consequence of pore clogging is loss of intercalation surface area and cell capacity. The magnitude of the Bruggeman exponents for the negative and positive electrodes are particularly large compared to the most frequently assumed value of 1.5. The large values of the exponents result from the model reproducing the flattened kinetic arcs for both electrodes and ignoring of concentration gradients in the electrodes and separator. Effectively, the kinetic arc width increases with increasing values of the Bruggeman exponent. The increase in arc width is directly attributable to an exaggerated porous electrode effect, which is contained within the high-frequency portion of the kinetic arc. With the larger magnitude Bruggeman exponents, the perturbation signal encounters more transport resistance due to a higher resistive path. The response is a 45 • straight line at high frequencies. The length of the straight line increases with increasing magnitude of Bruggeman exponent. The line begins to curve over after the perturbation signal reaches the back of the electrode, at which point the electron transfer process dominates the response. The combination of the electron transfer process and porous electrode effect produces a slightly flattened arc that can be used to better represent the experimental impedance responses. To note, the negative electrode exponent is significantly larger than that of the positive electrode. 
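The arc flattening attributed above to the porous-electrode (transmission-line) effect can be illustrated with the standard porous-electrode impedance expression of the form given by Meyers et al., in which a local interfacial impedance is distributed across an electrode with finite effective electrolyte and solid conductivities. The sketch below uses placeholder values; lowering kappa_eff (i.e., raising the Bruggeman exponent) lengthens the 45-degree high-frequency branch and flattens the arc, as described in the text.

```python
import numpy as np

def porous_electrode_impedance(omega, L, a, kappa_eff, sigma_eff, z_local):
    """Standard transmission-line result for a porous electrode (cf. Meyers
    et al.): local interfacial impedance z_local (ohm*m^2) distributed over an
    electrode of thickness L with interfacial area a per unit volume and
    effective electrolyte/solid conductivities kappa_eff and sigma_eff."""
    nu = L * np.sqrt((a / z_local) * (1.0 / kappa_eff + 1.0 / sigma_eff))
    pref = L / (kappa_eff + sigma_eff)
    ratio = sigma_eff / kappa_eff + kappa_eff / sigma_eff
    return pref * (1.0 + (2.0 + ratio * np.cosh(nu)) / (nu * np.sinh(nu)))

# Illustrative local interfacial impedance: charge transfer in parallel with
# an ideal double layer (placeholder values only).
omega = 2.0 * np.pi * np.logspace(4, -1, 100)
R_ct, C_dl = 0.05, 0.2                                     # ohm*m^2, F/m^2
z_local = 1.0 / (1.0 / R_ct + 1j * omega * C_dl)
z_p = porous_electrode_impedance(omega, L=80e-6, a=3 * 0.5 / 5e-6,
                                 kappa_eff=0.05, sigma_eff=10.0, z_local=z_local)
# Re-running with a smaller kappa_eff stretches the high-frequency 45-degree
# branch and widens/flattens the kinetic arc.
```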
Small particle fraction.-The small negative and positive electrode particles are set to approximately 70 nm and 60 nm on average, respectively. The surface area of each electrode is the sum of the surface areas of the large and small particles. The particles are assumed spherical with idealized packing density. The small particle surface areas decrease in each electrode after degradation, consistent with loss of active surface area from pore blocking or current collector delamination. Shell thickness and diffusivity.-In Figure 7, the diffusivity of lithium through the shell of the positive electrode particles is shown for each cell. The shell thickness is set at 3 nm, which is assigned based on the TEM images provided. The diffusivity of lithium through the shell is calculated to be approximately two orders of magnitude less than that of the core. The shell arc width and characteristic frequency are two independent features that constrain these two parameters. The diffusivity is consistent with slow diffusion through the shell of the particle, followed by faster diffusion through the core of the particle. The core-shell structure is consistent with structural degradation of the solid particles. Dissolution of metal cations from the host structure is assumed to be the predominant cause of such a structure. A cation dissolution mechanism would create a shell with different physical characteristics than the core of the particle. In Figure 8, the role of the shell thickness and the negative electrode small-particle fraction is explored. The parameter set used has been chosen to exaggerate the interesting portions of the impedance response. In Figure 8a, the shell thickness of the positive electrode particles is doubled from 3 nm to 6 nm. Doubling the thickness doubles the width of the low-frequency arc. The thicker-shell response is found at a slightly lower frequency. As mentioned previously, the core-shell structure of the positive electrode particles reproduces a finite Warburg element in series with a closed Warburg element. The finite Warburg element has an initial slope of one on the higher-frequency end of the response and eventually curves toward the real axis. Using the finite Warburg for the shell structure is an effective method to introduce another arc in the spectra. In Figure 8b, the negative electrode impedance response is simulated for two values of ε small,− , illustrating the effect of the small-particle fraction on the kinetic arc width and the low-frequency mass-transport slope. The second condition, ε small,− = 0.15, is half of the initial condition. The decrease in the fraction of small particles alters the mid- and low-frequency portions of the impedance spectrum. The mid-frequency arc width increases because the surface area for intercalation decreases. At low frequencies, the slope of the Warburg tail decreases. This behavior occurs because the small particles behave primarily as capacitors at low frequencies. Capacitors are vertical lines on a Nyquist plot. Hence, less curvature is observed in the low-frequency response for the negative electrode with more small particles (ε small,− = 0.30, evaluated at 1.6 mHz). In this work, two particle sizes are included in each electrode, which represent the average of all of the intercalation particles in the electrode. An alternative method to model the low-frequency response is to include a complete distribution of particle sizes.
Again, the smaller particles in such a distribution would have shorter finite diffusional lengths than the larger particles. Conclusions Degradation modes of graphite|LiNMC cells are investigated for 3200 hours of 75 °C calendar aging using EIS. Physical parameters are estimated through model-based fitting and are used to identify the potential causes of the changes in the impedance response. The model is validated using experimental impedance response measurements, both half- and full-cell data. TEM micrographs are used to obtain estimates of positive-electrode surface characteristics. Half-cell impedance testing of fresh electrodes indicates that the cathode kinetic response is at higher frequencies than the anode response. Micrographs of the positive electrode indicate significant surface structural changes on the particles after aging. The structural changes are difficult to distinguish but appear to be a combination of a surface film and a shell on the positive electrode particles. High-temperature calendar aging causes an increase in the measured full-cell impedance. Despite film formation in both electrodes, the high-frequency intercept does not change appreciably after aging. The small shift that is observed is consistent with thicker and less conductive surface films, potentially in both electrodes. The graphite and LiNMC kinetic rate constants decrease following aging. The kinetic arc widths are also dependent on the Bruggeman exponents of both electrodes. The exponents, which describe transport in each electrode, increase after aging. The change in the exponents is consistent with the observed film formation in the electrodes. The magnitude of the Bruggeman exponent is greater in the graphite electrode than in the LiNMC electrode. The magnitudes of the exponents are greater than typically reported (approximately 3 to 8 here). The positive electrode impedance response includes a new low-frequency arc following aging. The new arc is reproduced using a core-shell model for the spherical LiNMC electrode particles, which is consistent with the TEM observations. The core-shell structure is attributable to a manganese dissolution process that is known to plague NMC electrodes. The shell has a lower lithium diffusivity than the core of the particle. This result indicates that manganese dissolution can potentially be diagnosed by impedance spectroscopy. The addition of a small particle size improves the agreement between simulation and experiment, primarily in the low-frequency portion of the response.
9,440.2
2018-01-01T00:00:00.000
[ "Materials Science" ]
Finite resource performance of small satellite-based quantum key distribution missions In satellite-based quantum key distribution (QKD), the number of secret bits that can be generated in a single satellite pass over the ground station is severely restricted by the pass duration and the free-space optical channel loss. High channel loss may decrease the signal-to-noise ratio due to background noise, reduce the number of generated raw key bits, and increase the quantum bit error rate (QBER), all of which have detrimental effects on the output secret key length. Under finite-size security analysis, higher QBER increases the minimum raw key length necessary for non-zero secret key length extraction due to less efficient reconciliation and post-processing overheads. We show that recent developments in finite key analysis allow three different small-satellite-based QKD projects CQT-Sat, UK-QUARC-ROKS, and QEYSSat to produce secret keys even under very high loss conditions, improving on estimates based on previous finite key bounds. This suggests that satellites in low Earth orbit can satisfy finite-size security requirements, but remains challenging for satellites further from Earth. We analyse the performance of each mission to provide an informed route toward improving the performance of small-satellite QKD missions. We highlight the short and long-term perspectives on the challenges and potential future developments in small-satellite-based QKD and quantum networks. In particular, we discuss some of the experimental and theoretical bottlenecks, and improvements necessary to achieve QKD and wider quantum networking capabilities in daylight and at different altitudes. I. INTRODUCTION The emergence of terrestrial quantum networks in large metropolitan areas demonstrates an increasing maturity of quantum technologies.A networked infrastructure enables increased capabilities for distributed applications in delegated quantum computing [1,2], quantum communications [3,4], and quantum sensing [5].However, extending these applications over global scales is currently not possible owing to exponential losses in optical fibres.Space-based segments provide a practical route to overcome this and realise global quantum networking [6][7][8].Satellite-based Quantum Key Distribution (SatQKD) has become a precursor to long-range applications of general quantum communication [9,10].Although a general-purpose quantum network [11] will require substantial advancements in quantum memories and routing techniques, a satellite-based QKD system adds to the progress of global-scale quantum networks by driving the maturation of space-based long-distance quantum links. There has been growing interest in satellite-based quantum key distribution.The recent milestone achievements by the Micius satellite [12] which demonstrated space-to-ground QKD and entanglement distribution have energized this interest.Micius, being a relatively large satellite, leaves open the question of using smaller satellites to perform satellite-based QKD-there have been feasibility studies for small-satellite-based QKD and CubeSat-based pathfinder missions [13] for QKD applications. 
The recent surge in efforts emphasizes the importance of understanding specific limitations to the performance of different SatQKD systems.For low-Earth orbit (LEO) satellites, a particular challenge is the limited time window to establish and maintain a quantum channel with an optical ground station (OGS).This limitation disproportionately constrains the volume of secure keys that can be generated due to a pronounced impact of statistical fluctuations in estimated parameters [14,15]. In this work, we give a scientific perspective on the progress of small satellite-based quantum key distribution under resource constraints.More specifically, we analyze three different mission configurations: the Singapore Centre for Quantum Technologies' CQT-Sat, the UK Quantum Research CubeSat/Responsive Operations for Key Services (QUARC/ROKS) satellite, and the Canadian Quantum Encryption and Science Satellite (QEYSSat) on which the authors are actively participating.In addition, these three missions are representative of near-term small satellite-based QKD missions. The quantum channel configuration for each mission is illustrated in Figure 1.With the exception of a downlink entanglement-based channel, these configurations cover uplink and downlink with entanglement-based and CQT-Sat -Downlink Entangled source on satellite QUARC/ROKS -Downlink Decoy state source at satellite QEYSSat -Uplink/Downlink Entangled/decoy source at OGS FIG. 1.Quantum channel configuration for three different SatQKD missions.Each mission implements a different combination of QKD protocols and quantum channel configurations between an OGS and an orbiting satellite.The Singaporean CQT-Sat mission (left) implements the entanglement-based BBM92 protocol (blue arrow) in a downlink configuration.For this mission, one of the photon pairs is measured on board and the other is transmitted to the OGS.The UK QUARC/ROKS mission (middle) implements the weak coherent pulse (WCP) decoy-state BB84 protocol (red arrow) in a downlink configuration.The Canadian QEYSSat mission (right), implements both the decoy BB84 and BBM92 protocols (purple arrow) in an uplink configuration and intends to also incorporate a decoy BB84 downlink. prepare-and-measure based QKD.These configurations give representative quantum channels to support the capability of a range of distributed applications.Depending on the ground station location and the specific LEO orbit, a satellite may have a limited number of passes over the OGS for which QKD key generation is possible-for example, current technology requires that passes are conducted during nighttime.Therefore, it is important to understand the conditions that allow a SatQKD system to produce secret keys successfully from a single pass over the OGS.More specifically, for any given satellite overpass, how many secret key bits can be generated?We answer this for the three mission configurations by revisiting the supporting theory and modelling of key generation [16][17][18][19][20].It is shown that all three missions demonstrate enhanced key generation with the latest advancements in finite key analysis.We conclude by looking at the prospects for satellites at higher altitudes where the longer access time for a ground receiver does not overcome the increased diffraction loss. 
Based on the performance analysis of each of these missions, we provide a broad and future-looking perspective for global quantum communications with a specific outlook on the outstanding challenges for SatQKD and long-term perspectives.In particular, we explore how improved finite-key analyses can improve SatQKD performances and more widely how advances in hardware can support greater capabilities for networked quantum technologies.This perspective provides a view of the medium to long-term challenges and milestones that present building blocks for enabling the quantum internet. II. SATELLITE-TO-GROUND ENTANGLEMENT-BASED QKD USING BBM92 CQT-Sat is a concept for 12U nano-satellite capable of performing space-to-ground entanglement-based QKD.Its precursor SpooQy-1, demonstrated [13] successful launch and operation of a miniaturized polarizationentangled photon pair source in LEO.The subsequent instruments will build upon this to perform spaceto-ground entanglement distribution and demonstrate entanglement-based BBM92 QKD. During a satellite's overpass over the ground station, the link loss for the downlink quantum channel will depend on the relative distance between the satellite and the OGS.Using a variable attenuator, a tabletop setup can emulate a time-varying satellite-to-ground link loss (a similar experiment was conducted previously in the context of QEYSSat [21]).This enables us an estimation of the achievable raw key length and overall QBER for various satellite passes.Using these parameters we perform finite key analysis and show that CQT-Sat can successfully generate shared secret keys between the satellite and OGS when the maximum elevation is as low as 33 • . A. System configuration The satellite quantum source generates polarizationentangled photon pairs by superposing orthogonally polarized photons created from spontaneous parametric down-conversion using two pump decay paths [22].Detailed design of a functional model of the source and associated design trade-offs can be found in [23].The source generates pairs of polarization-entangled photons where each pair consists of a 785 nm wavelength signal photon and an 837 nm wavelength idler photon.For the purpose of QKD, each of the idler photons is measured aboard the satellite in either the computational or the diagonal basis with probability 1/2.The signal photon is sent to the satellite's optical terminal using an optical interface. A subsystem inside the optical source also generates a synchronization beacon.Both the beacon and the signal photons are transmitted to the OGS through the satellite's optical terminal.Optical terminals on both the satellite and in the ground station help establish a space-to-ground freespace optical link.The terminals consist of optical telescopes and fine-pointing mechanisms for transmitting and collecting the signal photons, and synchronization and tracking beacons.Table I presents the parameters of the quantum source and the optical link. B. 
Emulating space-to-ground QKD using a tabletop setup To emulate a space-to-ground QKD link we built the entanglement source and the detection apparatus representative of both the satellite and ground systems.The system parameters for this setup are listed in Table II.We consider a Sun-synchronous low Earth orbit with 500 km altitude above sea level.This orbit choice provides us with daily passes over the CQT ground station at a pre-specified time of the day [24].We compute a time series of the satellite's angular elevation with respect to the OGS and the loss at that elevation for a pass using a simulation model that we have developed [23,25].Regarding the simulation details, we compute the satellite range with respect to the OGS using the orbital simulation model and use parameters from vation of 30 • to 90 • .For example, in Figure 2 we show a pass with 88 • maximal elevation and associated loss that the optical link experiences. Using a variable attenuator we introduce different losses, and record detection timestamps for both signal and idler photons.Due to physical limitations, we only use a finite number of attenuator settings and stitch the experimental data together to emulate the predicted loss of the optical downlink.The blue and orange lines in Figure 2 illustrate the predicted and experimentallyemulated loss respectively at various segments of the satellite pass. This technique enables an investigation of satellite overpasses with different maximum elevations and to generate the associated detection timestamps both onboard the satellite and in the OGS.These timestamp sets are processed through the rest of the QKD protocol stack including finite key analysis to compute the secure key length achievable from each pass.A recent demonstration of a QKD system with a similar emulated satellite overpass was capable of establishing a 4.58-megabit secure key between two nodes [26].Depending on geographical location and satellite orbit, a ground receiver might observe 2 to 6 satellite passes each day.An ideal satellite pass would transit directly over the ground station with maximal elevation of 90 • (zenith).At zenith, the satellite is closest to the OGS, and in clear weather this pass would exhibit the lowest transmission loss and longest link time.However, such a pass is less likely than more "glancing" passes.For a given detector dark count rate, higher losses would result in a poorer signal-to-noise ratio and increase the QBER.Moreover, the pass duration and the number of photons successfully received from the satellite also decrease.Figure 3 shows how the secret key length changes with different satellite passes.Here we use the finite key analysis from [20] taking security parameter 10 −10 where error correction efficiency is 1.18. The analysis shows that below an elevation of 30 • no secret key is generated.This is acceptable for CQT-Sat which was designed to avoid operation at low elevation.The ground receiver in this case is sited at sea level in a tropical, urban environment and the optical channel below 30 • suffers more loss and light pollution due to the thicker atmospheric column. Maximum elevation angle (degrees) Secret key size (kbits) FIG. 3. Secret key lengths achievable by CQT-Sat for passes of given maximal elevation. 
The blue curve gives results for the default simulation setting where the minimum loss at zenith is 30.5 dB.Other curves show the behaviour for optical links with additional 1 to 4 dB losses.All lines without markers are based on laboratory-based experimental data constructed to emulate pass-loss profiles.The results corresponding to the lines with markers contain loss profiles that are simulated numerically.The red marker on the blue line corresponds to the simulation depicted in Figure 2, where the satellite pass has maximum elevation 88 • . III. SATELLITE-TO-GROUND QKD USING DECOY-STATE BB84 The UK Quantum Research CubeSat (QUARC) project provides a design and architectural foundation for the Responsive Operations for Key Services (ROKS) mission in the National Space Innovation Programme (NSIP) [27,28].ROKS uses a continuation of the same 6U CubeSat platform as QUARC and will first implement decoy-state BB84 protocol in a downlink configuration for QKD service provision using a weak coherent pulse (WCP) source (Figure 1).The satellite quantum modelling and analysis (SatQuMA) open-source software has been developed to estimate expected key generation performances for such satellite QKD missions [29].SatQuMA models the efficient BB84 WCP two-decoy (three intensity) protocol and can optimise over the entire protocol parameter space and transmission segment time.It also incorporates recent results in finite-block composable secure key length estimation [20,30,31].SatQuMA can be applied to model the expected key generation performance for ROKS for a general satellite pass geometry in a Sun-synchronous orbit (SSO) at altitude h. A. System configuration We use published empirical Micius mission measurements of the total optical loss of the SatQKD channel [32] to construct a representative total system link efficiency as a function of elevation angle during a satellite pass.To account for local horizon constraints around the OGS, we restrict quantum transmissions to elevations above 10 • . The link efficiency (loss) is highly dependent on the system parameters, OGS conditions, and orbits.The nominal system parameters are summarized Table III, where the minimum total system loss at zenith is computed to be 34 dB.One can scale the minimum system loss at zenith to allow the comparison of differently performing SatQKD systems.Changes to the minimum system loss at zenith would then account for differences in the transmit and receive aperture sizes, pointing accuracy, atmospheric absorption, turbulence, receiver internal losses, and detector efficiencies.For the current simulations, we consider a nominal baseline value of 34 dB.SatQKD missions with differing performance can be modelled by linearly scaling the link efficiency vs elevation curve to account for different constant efficiency factors, such as a change in OGS receiver area. 
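The elevation dependence underlying these link-efficiency curves is driven largely by the slant range; a minimal sketch for a circular orbit is given below. Only the 1/range^2 geometric scaling relative to zenith is included, and the 500 km altitude is the CQT-Sat-like case from the previous section; atmospheric absorption, turbulence, and pointing, which also worsen at low elevation in the mission models above, are not represented.

```python
import numpy as np

R_E = 6371e3          # Earth radius, m
h = 500e3             # orbit altitude, m (CQT-Sat-like LEO; illustrative)

def slant_range(elev_deg, altitude=h):
    """Ground-station-to-satellite slant range for a circular orbit,
    as a function of elevation angle."""
    e = np.radians(elev_deg)
    return np.sqrt((R_E + altitude) ** 2 - (R_E * np.cos(e)) ** 2) - R_E * np.sin(e)

def extra_diffraction_loss_dB(elev_deg, altitude=h):
    """Additional free-space (diffraction) loss relative to zenith, assuming
    the received fraction scales as 1/range^2. Atmospheric and pointing
    contributions are deliberately excluded."""
    return 20.0 * np.log10(slant_range(elev_deg, altitude) / altitude)

for elev in (90, 60, 30, 10):
    print(elev, round(slant_range(elev) / 1e3), "km",
          round(extra_diffraction_loss_dB(elev), 1), "dB")
```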
To evaluate the sensitivity of the achievable secret key length to different errors, we categorise different contributions associated with sources and detectors in two key parameters.First, errors from dark counts and background light are combined together into a single extraneous count probability p ec , here assumed to be constant and independent of elevation.In practice, it will de- pend strongly on the environment of the OGS and the light from celestial bodies.Second, all other error terms, such as misalignment, source quality, and imperfect detection, are combined into an intrinsic quantum bit error rate QBER I independent of channel loss/elevation.This allows an efficient method to determine the sensitivity of the secret key length to different categories of errors, which helps identify targeted improvements for future SatQKD missions. B. Optimised finite key length We model the efficient BB84 protocol, adopting the convention of key generation using signals encoded in the X basis and parameter estimation using signals in the Z basis, chosen with biased probabilities.For a two-decoystate WCP BB84 protocol, one of three intensities µ j for j ∈ {1, 2, 3} are transmitted with probabilities p j .An expression for the final finite key length, ℓ, for this protocol is given in Ref. [16].The key is extracted from data for the whole pass as a single block without partitioning, the security proof of Ref. [16] makes no assumptions about the underlying statistics.This avoids having to combine small data blocks with similar statistics from different passes-thus, it is both quicker and avoids the need to track and store a combinatorially large number of link segments until each has attained a sufficiently large block size for asymptotic key extraction.The limited data sizes from restricted pass times results in key length corrections to account for finite statistics of the link.To improve the analysis, we use the tight multiplicative Chernoff bound [19] and improve the estimate of error correction leakage λ EC ≤ log |M|, where M characterises the set of error syndromes for reconciliation [17] (see Ref. [30] for more details). For a defined SatQKD system, we optimise the finite key length ℓ by optimising over the protocol parameter space that includes the source intensities (with µ 3 = 0) and their probabilities, and the basis encoding probability p X .We also optimise the portion of the pass data used for key generation. The SatQuMA code is used to generate simulated measurement data.The QBER and phase errors for the key bits are estimated using only the data from the complimentary basis.This is a classic sampling without replacement problem, which is usually solved in QKD using an approximation for the hypergeometric distribution [33].Recently, however, an improved sampling bound has been proposed [20].This can be used to estimate the QBER and phase error.The formalism from Ref. [20] is used together with data generated from SatQuMA to determine the secret key length for different satellite passes, which we characterize through the maximum elevation angle.The secret key is plotted in Figure 4 as a function of the maximum elevation angle of a pass. IV. 
SATQKD USING BBM92 IN UPLINK OR DOWNLINK CONFIGURATIONS The Quantum Encryption and Science Satellite (QEYSSat) mission [34] is a Canadian initiative to develop and launch a microsatellite-hosted quantum receiver instrument into low-Earth orbit.The primary objective of the mission is to demonstrate QKD via quantum uplink from sources located at two or more ground stations.To support this, the QEYSSat instrument will possess a large front-end telescope for light collection, polarization discriminating optics, and single-photon avalanche diodes [35].Support for a WCP downlink protocol is also being developed.As of writing, QEYSSat is in the late design/early construction phase and on schedule to launch in 2025. A. System configuration With QEYSSat's nominal configuration being an uplink, and the quantum sources located on the ground, lower secure key rates are expected when compared to a downlink with equivalent parameters.This is due to the steering effect of atmospheric turbulence on the beam at the beginning of its propagation in contrast to atmospheric steering at the end of propagation for a downlink channel.However, a satellite receiver affords considerably greater source flexibility.For this reason, two source types are baselined: WCP with decoy states in an unbiased BB84 protocol, and entangled photons (with one photon of each pair kept at the ground) in a BBM92 protocol.It is expected that other quantum source types-e.g., quantum dots (see, e.g., Ref. [36])-will also be employed during the experimental phase of the QEYSSat mission. Commencement of the QEYSSat mission in 2018 was preceded by several theoretical and experimental investigations into the mission's feasibility, both as a whole [37] and with a focus on critical subsystems including pointing [38,39] and photon measurement [40,41].Of these, one early work [42] numerically modelled the quantum optical link to establish the loss and fidelity of polarizedphoton transmission under the assumptions of the expected orbital configuration and (generally conservative) atmospheric conditions.Multiple scenarios were considered, consisting of notional WCP or entangled-photon sources in both uplink and downlink configurations.Although some details of the in-development QEYSSat apparatus and conditions have been refined since, the values remain generally very similar. In this section, we present the secret key generation performance of QEYSSat while executing the entanglement-based BBM92 protocol in both satellite-toground (downlink) and ground-to-satellite (uplink) quantum communication configuration modes.Although the QEYSSat instrument has ultimately been designed without entanglement downlink, we include its analysis here for two reasons: (1) for comparison with the uplink configuration that is planned, and (2) as an extension of the prior feasibility study performed in Ref. [42].We expect these updated results may influence future designs. B. Key length analysis We calculate the secure key rate for the QEYSSat Mission using updated secure key length analysis [20], which has improved performance with smaller raw key block size.Performance with smaller block size is important because it has implications on QKD feasibility under high-loss conditions and during low maximum elevation passes.This improved key length analysis enables higher key rates than prior analysis [42]. 
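The finite-key bounds used across these analyses share a common structure: events credited to secure (vacuum and single-photon) contributions, privacy amplification through a phase-error estimate, and subtraction of the error-correction leakage plus composable-security penalty terms. The sketch below writes this structure in the form commonly quoted for efficient decoy-state BB84 (cf. Ref. [16]); the entanglement-based BBM92 bounds of Ref. [20] used for QEYSSat differ in detail, and the constants and example numbers shown are from that commonly quoted form rather than from the mission software.

```python
import numpy as np

def h2(x):
    """Binary entropy function."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

def finite_key_length(s_x0, s_x1, phi_x, lambda_ec,
                      eps_sec=1e-10, eps_cor=1e-15):
    """Secret key length with the structure of the decoy-state BB84 bound of
    Ref. [16]: vacuum events + privacy-amplified single-photon events, minus
    error-correction leakage and composable-security penalty terms. The
    constants (21, 2) follow the commonly quoted form of that bound."""
    l = (s_x0 + s_x1 * (1.0 - h2(phi_x)) - lambda_ec
         - 6.0 * np.log2(21.0 / eps_sec) - np.log2(2.0 / eps_cor))
    return max(0, int(np.floor(l)))

# Illustrative numbers only (not mission predictions): error-correction
# leakage approximated as f_EC * n_X * h2(QBER), with f_EC = 1.18 as in the
# analyses above; eps_cor here is an assumed value.
n_x, qber = 200_000, 0.03
lam_ec = 1.18 * n_x * h2(qber)
print(finite_key_length(s_x0=2_000, s_x1=120_000, phi_x=0.045, lambda_ec=lam_ec))
```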
In this analysis, we set the error correction efficiency to 1.18 and the security parameter to 10 −10 , which is consistent with the values taken for the CQT-Sat and the QUARC/ROKS missions.Table IV summarizes the parameters that describe the quantum source and the optical link.The assumed satellite orbit (Sun-synchronous noon/midnight at 600 km altitude) was simulated for one year's duration of nighttime passes over a notional ground station located 20 km outside of Ottawa, Canada.Optical link conditions for each pass were modeled at ten-second intervals.The background light was determined from the Defence Meteorological Satellite Program's Operational Linescan System measurements [42][43][44] and combined with an assumed half-moon at 45 • (contributing via Earth reflection using its mean albedo) along with Earth's thermal (blackbody) radiation, taking into account the geometry of the optical field of view which changes over the pass of the satellite, and 1-nmbandwidth spectral filtering.Detector dark counts of an additional 20 cps were also included in the total noise detected. Here, we consider a transmitter and receiver diameter of 50 cm and 30 cm respectively as the baseline configuration.Optical losses were calculated from the contributions of numerically modeled diffraction given a central obstruction (secondary mirror), an assumed mean pointing error of 2 microrad, atmospheric attenuation modeled by MODTRAN 5 [45] for a "rural" profile with 5 km visibility, and Hufnagel-Valley model of atmospheric turbulence at sea-level.Photonic states were simulated in a 7-dimensional Fock-space (0 to 6 photons).Intrinsic reduction in quantum visibility was included via an operation equivalent to a small rotation. Detector count rate statistics were calculated using assumed EPS pair production rate of 100 Mpairs/s via spontaneous parametric down-conversion (SPDC) pumped at ϵ = 0.22 (corresponding to a mean 0.1 pairs per pulse-see Ref. [42]) with 98% intrinsic entanglement visibility, and a 0.5 ns coincidence window.Such a source is challenging, but possible with current techniques on ground (for uplink), and can be reasonably foreseen as achievable with expected advances for space platforms (for downlink).Intervals, where the simulated measurement visibility was below 85%, were filtered out (see Ref. [46]).In this analysis, we aggregate the remaining statistics at each pass and sorted these by the maximum elevation achieved by the satellite with respect to the ground station for that pass. In Figure 5(a) and (b) we show the secret key generated for passes with different maximum elevation in the downlink and uplink configurations respectively for entanglement-based BBM92.We use the finite key analysis from Ref. [20], with security parameter 10 −10 and error correction efficiency 1.18 to compute the secure key lengths.Note that in comparison with the analysis done in Ref. [42], the secret key size is considerably greaterwe expect this is largely a consequence of the faster source rate and assumed enhancements to intrinsic QBER and pointing accuracy, coupled with the highly nonlinear effect of finite-size statistical analysis. V. 
V. MISSION COMPARISONS

As the three SatQKD missions discussed here have different design specifications for the ground stations, satellites, and protocols implemented, a direct quantitative performance comparison of the missions is difficult. Despite this, we provide a qualitative discussion of the respective strengths and weaknesses of each mission. First, the uplink configuration employed by QEYSSat has the advantage that it relieves the source design from the strict size, weight, and power (SWaP) constraints imposed by a satellite. Moreover, it potentially allows QEYSSat to swap the entanglement source for a prepare-and-measure source at any point of the mission to perform a different SatQKD protocol, and could benefit from abstract beam pointing. However, it has the disadvantage that the optical link suffers larger environmental turbulence during the initial part of the optical path, generating higher beam wandering compared to a downlink configuration.

The QUARC/ROKS mission uses a prepare-and-measure decoy-state BB84 protocol in which the source emits a periodic signal. This allows higher repetition rates to achieve larger raw key rates to counter the loss experienced in the satellite-to-ground optical link. In addition to a higher performance requirement for the detection and time-stamping circuits, the repetition rate is also constrained by the speed of quantum random number generation for the choice of quantum signal transmission. A similar attempt to counter the link loss by increasing the brightness of an entanglement source quickly hits a bottleneck: because the SPDC-based source does not produce periodic signals, the finite resolution of time-stamping devices and the limits imposed by detector jitter mean that the source ends up producing too many multi-photon events per time slot, increasing the QBER to an unacceptable value. However, entanglement-based QKD implementations act as precursors to entanglement-sharing links, which are essential for future development toward a general-purpose quantum internet.

VI. OUTLOOK FOR GLOBAL QUANTUM COMMUNICATIONS

We compared three upcoming SatQKD missions in the preceding sections and showed that an individual small satellite can satisfy finite-key requirements for SatQKD. For each of the three missions considered, we showed that this leads to non-zero finite keys generated from a single overpass. While an individual satellite in an appropriately chosen orbit can cover the Earth's surface each day, increased performance of the network will probably require constellations of these satellites [25]. Aside from putting more satellites into space, it is important to consider how the performance of each individual satellite could be enhanced. We note from the preceding sections that LEO satellites operate at the edge of performance in terms of SatQKD that satisfies finite-key security. In this section, we report on specific challenges that, if overcome, can provide improved SatQKD performance over a wider range of operations. We also provide a long-term perspective on the demonstration of key milestones towards global quantum communications and applications beyond QKD.
A. Outstanding challenges for SatQKD

Progress in finite-key security analyses presents an immediate and fundamental challenge to improving the achievable key rates. An improved finite-key analysis handles parameter estimation and post-processing tasks more efficiently. This would enable larger finite keys and successful distillation of non-zero finite keys at higher operating losses (e.g., lower elevation passes or ground locations with worse atmospheres) without any hardware changes. Beyond improvements in finite-key analysis, there are specific hardware challenges whose resolution would provide improved performance.

A conventional research programme would revolve around improved transmitters and detectors. We propose that building a system that can operate effectively in daylight would be a major step. Practically every SatQKD mission for the foreseeable future will operate during nighttime to avoid excess background light from the Sun. In order to operate during daylight, the spectral window of the transmitted light has to be sufficiently narrow for effective filtering, while the transmission system has to be built to avoid reflecting sunlight directly into the receiver. We note that adaptive optics (AO) for an optical ground receiver [47-49], to efficiently couple light into a single-mode fiber for direct transmission of the QKD signal to end-users located away from the ground receiver, will also become increasingly important. This has the added advantage that an AO system will act as a spatial filter, reducing the amount of stray light entering the quantum channel. To be useful, the AO system will need to operate with high coupling efficiency so that the overall system throughput is not compromised. A ground AO system can also improve uplink configurations [50]. Research into transmission and detection systems that can penetrate cloud and fog would also be highly desirable.

Aside from the transmitter and detector aspects, we note that a major transmission loss contribution is the diffraction of the beam from the transmitter. This loss could be mitigated using several different approaches. The first is to operate the spacecraft in Very Low Earth Orbit (VLEO). This orbit has a nominal altitude below the International Space Station (approximately 400 km) and is often not considered because satellites experience significant drag and re-enter the Earth's atmosphere within a year. However, with space propulsion systems being developed for station keeping [51,52], this approach may become feasible and would afford significantly lower losses owing to shorter optical links. In designing a VLEO system, factors such as micro-buffeting from the residual atmosphere, degradation from atomic oxygen, and the shorter overhead time of the satellite remain open challenges. The second is the use of enhanced transmit/receive apertures. The use of larger apertures has been the primary route to minimise link loss, with a doubling in aperture size providing 6 dB of improvement [30]. However, aperture sizes are restricted for small satellites. Recent efforts rely on deployable and active optics [53,54].
Finally, diffraction losses can also be mitigated by developing larger and more capable satellites at very high altitudes. The advantage is that such satellites can be equipped with large transmit apertures while increasing the ground coverage area as well as improving access time for a ground receiver. The drawback is a dramatic increase in diffraction loss that must be compensated by enlarging the transmit/receive apertures and improving pointing accuracy.

We have modelled the performance of SatQKD systems for varying orbital altitudes by imposing similar capabilities on the satellites for LEO, Medium Earth Orbit (MEO) [55], and Geostationary Orbit (GEO) [56]. Under a downlink configuration using the receiver (ground-based) and sender (onboard-satellite) telescope and beam parameters shown in Table V, we study the space-to-ground optical link loss (Figure 6) for higher altitude orbits. We see that the optical link loss increases rapidly with increasing altitude, as expected from the beam expansion. One approach to counter the increased losses at higher orbits and ensure successful operation of SatQKD is to use ultra-bright sources capable of operating at GHz-bandwidth repetition rates [57]. SatQKD missions typically operate with a source rate on the order of 10⁸ Hz. Increasing the source rate to the GHz range and beyond requires low timing jitter on the sub-ns scale. This is possible with superconducting nanowire single-photon detectors (SNSPDs) [58], at the expense of greater SWaP due to the requirement of cryogenic cooling. An alternative approach to compensating the increased loss at higher orbits is to increase parameters such as the sending or receiving telescope apertures and pointing accuracy. In Figure 7 we show a trade-off heat map in which the satellite's transmitter telescope aperture and pointing accuracy are varied to show how they affect the transmission for a satellite in LEO, MEO, and GEO. These trade-off calculations show that for orbits higher than LEO it is not sufficient to change only the satellite's parameters to achieve the transmission gain necessary for successful SatQKD. For higher altitudes, one would also need to improve other parameters, such as the ground telescope's aperture, pointing accuracy, and detector performance.
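The altitude scaling discussed above can be illustrated with a toy diffraction model. The sketch below combines a Gaussian-beam divergence, a pointing error added in quadrature, and the receiver collection fraction over the slant range; the aperture, wavelength, and pointing values are illustrative stand-ins rather than the Table V parameters, and atmospheric attenuation, turbulence, and central-obstruction losses are ignored, so the numbers underestimate the full link budget.

```python
import math

def link_loss_db(altitude_m, d_tx=0.5, d_rx=1.0, wavelength=785e-9,
                 pointing_rms=2e-6, elevation_deg=90.0):
    """Rough downlink diffraction + pointing loss (dB) at a given elevation.
    Gaussian far-field model; atmosphere and obstruction are ignored."""
    r_e = 6371e3
    el = math.radians(elevation_deg)
    # slant range for a spherical Earth (reduces to the altitude at zenith)
    slant = math.sqrt((r_e * math.sin(el)) ** 2 + 2 * r_e * altitude_m
                      + altitude_m ** 2) - r_e * math.sin(el)
    w0 = d_tx / 2.0                              # transmit beam waist (approx.)
    theta_diff = wavelength / (math.pi * w0)     # Gaussian divergence half-angle
    theta_eff = math.hypot(theta_diff, pointing_rms)
    w_rx = theta_eff * slant                     # beam radius at the receiver
    eta = 1.0 - math.exp(-2.0 * (d_rx / 2.0) ** 2 / w_rx ** 2)
    return -10.0 * math.log10(eta)

for h in (500e3, 7e6, 36e6):                     # LEO, representative MEO, ~GEO
    print(f"{h/1e3:>8.0f} km : {link_loss_db(h):5.1f} dB at zenith")
```

Even in this idealised model the loss grows by tens of dB between LEO and GEO, which is why aperture, pointing, and source-rate improvements must be combined for higher orbits.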
Due to increased losses at higher altitude orbits, obtaining a secure key from a single pass over a ground station may not be possible in these cases. It is possible to accumulate the raw key bits over multiple passes, increasing the block size enough to reduce finite-block-size effects and achieve a secure key. In Figure 8, we show the block sizes and associated QBERs for the raw key bits accumulated over one year for entanglement-based SatQKD operated at various orbital altitudes for a single link. The drawback of key aggregation is that large amounts of data have to be stored onboard the satellite for a long time, which might introduce vulnerabilities due to storage security. In Figure 8, we discard passes that yield a QBER higher than 11%, which generates a key with minimal information known to an eavesdropper after post-processing [59]. However, given the limited number of bits acquired in each pass, it might not be feasible to determine the QBER reliably by exchanging a subset of these bits between the satellite and the OGS.

Missions that choose to implement larger operating apertures to counter the larger losses from higher-altitude orbits should also consider the increased costs associated with the optics and mass of the satellite. Specifically, the estimated cost of larger telescopes scales approximately as T^1.7, with T the aperture [60], largely due to bulk optics and increased mass, making them considerably more expensive than smaller telescopes. Further, space-based telescopes are estimated to be 30 times more expensive than ground telescopes.

B. Long-term perspectives

In this section, we provide a perspective on medium- to long-term challenges and milestones that present a blueprint for enabling additional capabilities for the quantum internet. These milestones relate to the implementation of different QKD protocols to extend the use cases of quantum communications, and to the development of improved hardware that could enable the demonstration of distributed quantum technologies beyond communications.

To extend the range of quantum communications, the horizon of efforts in developing SatQKD systems is likely to involve the improvement of instrument components such as the sources, detectors, classical communication systems, and optical systems. This would directly improve key rates and would enable the implementation of a number of additional QKD protocols.
First, the development of quantum memories will provide synchronization of probabilistic events to enable the implementation of memory-assisted (MA) QKD protocols. Recent theoretical studies have shown that MA-QKD protocols can yield higher key rates over global distances and provide improved robustness against atmospheric weather and multiple-excitation effects [4,6,61]. The advantage is that MA-QKD is less demanding on the performance of quantum memories than probabilistic quantum repeaters. Demonstration of MA-QKD protocols would herald major progress towards improved rate-versus-distance performance of SatQKD. Beyond communications, satellite-based quantum memories can enable distributed quantum sensing and imaging [62-64]. However, any distributed applications making use of quantum memories will require active tracking and compensation of the Doppler shifts that arise from the rapid relative motion of satellites, which provides a further challenge. For example, the typical speed of a LEO satellite is 7,800 m s⁻¹, which corresponds to a maximum fractional Doppler shift (generally elevation dependent) of β = v/c = 2.6 × 10⁻⁵. Compensating for this shift is important to enable the signal to couple efficiently into the narrow linewidth of the quantum memory. Compensation for Doppler shifts may also require inter-conversion between flying and static quantum systems.

Second, continuous-variable (CV) protocols operate with conventional telecommunication devices and homodyne measurements that can be implemented at room temperature. This improves integration with existing ground-based networks and circumvents the need for the bulky systems required to cryogenically cool single-photon detectors, which may be necessary to support the discrete-variable (DV) protocols, including BB84 and BBM92, that have been our focus. Despite these advantages, CV-QKD provides limited key rates at the high losses and large distances that are typical in SatQKD, which explains its limited use in proposed SatQKD. There are studies that explore the feasibility of CV-QKD over large distances [65,66]. In addition, CV-QKD does not have the same maturity of proof techniques in the finite-key regime. This leaves open questions about the security of finite keys and how keys can be distilled more efficiently from overpass data. Initial studies on this provide an optimistic outlook [66], but further work is required to establish the same maturity and rigour that DV-QKD protocols have. Particular challenges include determining the optimal approach to partitioning overpass data and finding analytic finite-key methods that do not rely on numerical approximations.

Third, if the polarization degree of freedom of a photon is used to encode a qubit in a SatQKD link (as is common), it becomes crucial that the sender and the receiver keep their reference frames well aligned, as misalignment results in quantum bit errors in the output [67]. The relative motion between the satellite and the ground station may introduce additional challenges in acquiring and maintaining a common reference frame during quantum communication. Several reference-frame-independent quantum communication protocols [68-73] have been proposed to account for this.
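Two of the channel effects mentioned above, the Doppler shift from orbital motion and the bit errors caused by polarization reference-frame misalignment, can be quantified with back-of-the-envelope formulas. The sketch below uses the simple models β = v/c and QBER = sin²(θ); the 5° misalignment is a hypothetical example, not a mission specification.

```python
import math

C = 299_792_458.0          # speed of light, m/s

def fractional_doppler(v_rel=7800.0):
    """Maximum fractional Doppler shift for a LEO satellite (beta = v/c)."""
    return v_rel / C

def misalignment_qber(theta_deg):
    """QBER contributed by a relative rotation of the polarization
    reference frames (simple model: e = sin^2(theta))."""
    return math.sin(math.radians(theta_deg)) ** 2

print(f"beta          = {fractional_doppler():.2e}")      # ~2.6e-5
print(f"QBER at 5 deg = {misalignment_qber(5.0) * 100:.2f}%")
```

Because the misalignment error adds directly to the QBER budget, even a few degrees of uncompensated rotation eats appreciably into the 11% threshold discussed earlier.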
Finally, time-bin encoding offers another way of encoding key bits in DV-QKD [74]. In principle, time-bin encoding can allow more than one bit to be encoded on each photon [75,76]. The motivation for this approach is that increasing data rates would normally require an increase in the source repetition rate, but this eventually reaches a practical limit due to detector dead time. Higher-dimensional encodings could circumvent this issue by allowing multiple bits to be encoded on each photon. Currently, such systems have only been demonstrated in laboratory settings [75]. Considerable work remains to develop setups that could be deployed in the challenging conditions of a satellite. Further, many of the QKD systems investigated to date use fiber, not free space. Only recently have there been investigations into the effects of errors due to free-space transmission in such setups [77].

Phase-randomised weak coherent pulses have become the most well-studied and widely implemented information carriers in satellite-based missions due to their maturity and ease of implementation [9]. However, recent progress in quantum source development provides access to alternative QKD sources. For example, true single-photon sources (SPSs) based on nitrogen-vacancy centers [78] and quantum dots [79] are being developed and may become suitable for small SatQKD applications in the near future. The use of SPSs would provide enhanced security given their inherent immunity to photon-number-splitting attacks, and would also provide advantages in general-purpose quantum communications such as quantum repeaters [80], optical quantum memory [81], and on-demand entanglement generation [82].

For applications beyond QKD, small satellites could benefit from using multiple, independently steerable telescopes to distribute entanglement to multiple OGSs. This will help minimise latency in distributed quantum technologies when multiple ground stations are in view. Although steering multiple telescopes on small satellites increases the mechanical complexity and mass of the instrument, possible disturbance to the alignment of optical systems could be mitigated with twin tethered nanosatellites. This naturally raises the possibility of formation flying of small satellite clusters that can extend the range of applications. This would be particularly important for distributed quantum technologies.

A LEO satellite is inherently limited in the geographical area it can cover at any one time. A satellite-based global quantum network will therefore need a constellation of satellites [25] that may involve small-satellite-based quantum communication between satellites [83] and other high-altitude flying platforms [84]. Along with the DV SatQKD systems described in this work, there are studies [85] investigating the feasibility of continuous-variable satellite-based QKD systems.

VII. CONCLUSION

There is growing interest in deploying satellites to enable a global QKD network. To ensure this goal remains feasible and to guide experimental and engineering efforts, it is crucial to understand how SatQKD can yield efficient secret key generation under finite transmission times and high-loss regimes. Previous works have shown that secret key generation with SatQKD is possible using finite-key analyses. Recent advancements in the treatment of finite-key effects have improved the efficiency of key extraction, which greatly decreases the requirements on the minimum raw key length necessary for key extraction.
We use these latest finite-key bounds in the performance analyses of three different LEO-based satellite mission concepts: the 12U CQT-Sat mission implementing an entanglement-based BBM92 downlink, the 6U QUARC project implementing WCP decoy-state BB84 in a downlink configuration, and the QEYSSat mission implementing both the decoy-state BB84 and BBM92 protocols in an uplink configuration, in addition to a decoy-state BB84 downlink. All three SatQKD missions achieve good secure key yields on the order of kilobits from a single pass over a ground receiver, even for the missions based on resource-constrained and aperture-limited CubeSats. This provides reassurance that planned SatQKD missions are on course to achieve important milestones that can lead to an effective global QKD network.

The long-term vision of a satellite-based global quantum network remains a principal motivation behind SatQKD. Therefore, developing the infrastructure for a global QKD network sets the stage for future theoretical, experimental, and engineering milestones. We list these milestones together with outstanding challenges in the field and discuss potential routes to overcome them. Prominent challenges discussed include the daylight operation of SatQKD, the cooperation of multiple OGSs with a constellation of satellites to improve the reliability of general applications beyond QKD services, and implementing SatQKD from different altitudes to enable longer-range communications and inter-satellite links. We extend our discussion by modelling the performance of SatQKD systems at varying orbital altitudes and quantify system design trade-offs to offset the increased link losses at higher altitudes. While our calculations demonstrate that all three LEO SatQKD missions considered here have the ability to yield secure finite keys, it is clear that implementing SatQKD from higher altitudes requires overcoming numerous hardware challenges while simultaneously improving security analyses.

For applications beyond QKD, the most demanding technological challenge is to implement general-purpose quantum communications with applications in distributed quantum technologies, such as quantum computing, error correction, and quantum sensing. This will require a constellation of satellites, each synchronised and equipped with entanglement sources and quantum memories, to dynamically create multi-link connections between any two points on Earth. Our discussion of the short- and long-term perspectives of satellite-based quantum communications helps build a blueprint for enabling the global quantum internet.

FIG. 6. Link loss as a function of orbital altitude using parameters from Table V. The green shaded region indicates the altitude range for VLEO orbits, gray for LEO orbits, red for MEO orbits, and orange for HEO, and the solid black line marks a GEO orbit. The red dashed line corresponds to a representative MEO altitude of 7 × 10³ km that we consider later. Link loss at zenith rapidly increases with increasing orbital altitude.

FIG. 7. Trade-off between the telescope aperture onboard the satellite and the pointing error with respect to the link gain at zenith. (a) The trade-off for a representative low Earth orbit (LEO) at an altitude of 500 km. (b) The trade-off for a representative medium Earth orbit (MEO) at an altitude of 7,000 km. (c) The trade-off for a geostationary orbit (GEO).
FIG. 8. Raw key length accumulated over a year as a function of orbital altitude. The gray dashed vertical line indicates the GEO altitude, where the satellite remains stationary at zenith, leading to a higher raw key and lower QBER compared to other nearby orbits in which the satellite has varying angular elevation with respect to the OGS during an overpass. To model this, we assume a satellite telescope with a 0.5 m aperture, an OGS telescope with a 1.8 m aperture, and an entangled-pair production rate of 100 Mcps.

TABLE I. Space-to-ground optical link parameters for CQT-Sat.

TABLE II. Link parameters used to compute the link loss at every point of the satellite pass. A satellite may pass over a ground station with a different maximum elevation; we simulate overpasses with varying maximum elevation.

TABLE III. Reference system parameters. We take published information on the Micius satellite and OGS system as representing an empirically derived set point for our finite-key analysis. The total loss at zenith can be linearly scaled to model other systems with smaller OGSs or differing source rates.
9,731.4
2022-04-26T00:00:00.000
[ "Physics" ]
Coal Seam Roof: Lithology and Influence on the Enrichment of Coalbed Methane How to cite item He, Y., Wang, X., Sun, H., Xing, Z., Chong, S., Xu, D., & Feng, F. (2019). Coal Seam Roof: Lithology and Influence on the Enrichment of Coalbed Methane. Earth Sciences Research Journal, 23(4), 359-364. DOI: https://doi. org/10.15446/esrj.v23n4.84394 To identify the lithology of coal seam roof and explore the influence of these roofs on the enrichment of coalbed methane, low-frequency rock petrophysics experiments, seismic analyses and gas-bearing trend analyses were performed. The results show that the sound wave propagation speed in rock at seismic frequencies was lower than that at ultrasound frequencies. Additionally, the P-wave velocities of gritstone, fine sandstone, argillaceous siltstone and mudstone were 1,651 m/s, 2,840 m/s, 3,191 m/s and 4,214 m/s, respectively. The surface properties of the coal seam roofs were extracted through 3D seismic wave impedance inversion. The theoretical P-wave impedance was calculated after the tested P-wave velocity was determined. By matching the theoretical P-wave impedance of the four types of rocks with that of the coal seam roofs, we identified the lithology of the roofs. By analyzing known borehole data, we found that the identified lithology was consistent with that revealed by the data. By comparing and analyzing the coal seam roof lithology and the gas-bearing trends in the study area, we discovered that the coal seam roof lithology was related to the enrichment of coalbed methane. In the study area, areas with high gas contents mainly coincided with roof zones composed of mudstone and argillaceous siltstone, and those with low gas contents were mainly associated with fine sandstone roof areas. Thus, highly compact areas of coal seam roof are favorable for the formation and preservation of coalbed methane. ABSTRACT Coal Seam Roof: Lithology and Influence on the Enrichment of Coalbed Methane Introduction Coalbed methane exists in coal rock, and coalbed methane enrichment remains a difficult research topic. In addition to the coal seam, which influences methane enrichment (Groshon, Pashin, & McIntyre, 2009;Kędzior, 2009;Hemza, Sivek, & Jirásek, 2009;Gao et al., 2012;Moore, 2012), the coal seam roof also affects the enrichment of coalbed methane. Thus, the role of the roof has attracted increased research attention (Chen, Cui, Liu, & Lang., 2006;Zhou, 2013;Song et al., 2013). The properties of coal seam roofs are difficult to determine based on sparse borehole data; however, 3D seismic data collected at low frequencies (5-100 Hz) can greatly improve the density of control points and identification accuracy. The theories regarding the propagation velocity of sound waves in rock at seismic frequencies have not been supported by experimental data, although some have described the velocity and attenuation of sound waves in fluid-saturated rock at seismic frequencies (Ba, 2010;White, 1975;Biot, 1956). These theoretical models are mainly based on experimental data at ultrasonic frequencies (MHz). Tutuncu, Gregory, Sharma, and Podio (1998) measured the P-wave and S-wave velocities of a saline-saturated sandstone at frequencies of 10 Hz-1 MHz, and the results showed that both the P-wave and S-wave velocities displayed upward-increasing trends. Batzle, Han, and Hofmann (2006) tested and compared the propagation velocity of sound waves at seismic and ultrasonic frequencies using a low-frequency stress-strain method and an ultrasonic test. 
Wang, Zhao, Harris, and Quan (2012) measured the bulk modulus of different types of rocks at 600 Hz using resonance spectroscopy and compared the results with those at ultrasonic frequencies. Using a low-frequency stress-strain method, Wei, Wang, Zhao, Tang, and Deng (2015a; 2015b) discussed the factors that influence the propagation velocity of sound waves in sandstone and studied the propagation velocity of sound waves and their dispersion characteristics under different conditions. In this paper, the propagation velocity of sound waves in four types of rocks was measured and studied using a low-frequency stress-strain method. By matching the tested velocity with 3D seismic data, we identified the coal seam roof lithology to explore the influence of the roofs on the enrichment of coalbed methane. We hope that this study will improve studies of the propagation velocity of sound waves at low frequencies and the enrichment trends of coalbed methane (Hungerford, 2013; Li & Wu, 2016; Liu, 2018).

Low-frequency rock petrophysics experiments

The samples used in the low-frequency rock petrophysics experiments included four representative types of rocks: gritstone, fine sandstone, argillaceous siltstone, and mudstone. The rocks were collected from the top of the #3 coal seam of the Permian Shanxi Formation at the Sihe Coal Mine in the southern Qinshui Basin, China. The gritstone was coarse-grained quartz sandstone with no primary pore development, some micro-fissures, and a porosity of less than 3%. The fine sandstone was fine quartz sandstone with primary pore development and a porosity of approximately 20%. The argillaceous siltstone was composed of fine particles and miscellaneous bases with poor separation, no primary pore development, and a porosity of almost 0%. The mudstone was collected from the immediate coal seam roof, was composed of argillaceous rocks, and had a porosity of 0%.

In this experiment, the physical system used to test the rock samples at low frequencies (Fig. 1) was developed by the Colorado School of Mines, and a low-frequency stress-strain method was used as the experimental method. The tested rock samples were cylindrical, with a diameter of 38 mm and a length of 50-80 mm. Each sample had cylindrical aluminum block bases and P- and S-wave probes at both ends. Additionally, longitudinal and transverse strain gauges were fixed on the aluminum block bases and rock samples, respectively. The physical testing system can produce acoustic waves with frequencies of 3-2,000 Hz, and the vertical and transverse strains can be measured with the strain gauges fixed on the aluminum block and rock samples. The Young's modulus of the aluminum block was 69.8 GPa. Each rock sample and its aluminum block bases comprised a complete unit, so their stresses were the same during the test. Notably, the stress on the aluminum block (σ_Al) was equal to that on the rock sample (σ_s), giving Formula 1:

σ_Al = σ_s. (1)

The stress on the aluminum block follows from its measured longitudinal strain (ε_Al) and known Young's modulus:

σ_Al = E_Al · ε_Al. (2)

The Young's modulus of the rock sample (E_s) can then be deduced from Formulas 1 and 2 as E_s = E_Al · ε_Al / ε_s, where ε_s is the longitudinal strain of the rock sample. The Poisson ratio of a rock sample (ν) can be obtained from the ratio of its transverse strain to its longitudinal strain. Because the Young's modulus and Poisson ratio of a rock sample were thus given, the other parameters of the rock sample at different frequencies and conditions could be obtained from the formulas listed in Table 1.

Results of the low-frequency experiments

The propagation velocity of sound waves in the four types of rocks was measured at 16 frequencies from 3-100 Hz.
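The stress-strain relations just described can be turned directly into the rock's elastic parameters and wave velocities. The sketch below implements the standard isotropic-elasticity conversions (the kind of relations collected in Table 1, which is not reproduced in the text); the mudstone-like E, ν, and density values in the example are hypothetical.

```python
import math

E_AL = 69.8e9   # Young's modulus of the aluminium standard, Pa (from the text)

def youngs_modulus(strain_al, strain_rock):
    """Stress is shared (sigma_Al = sigma_s), so E_s = E_Al * eps_Al / eps_s."""
    return E_AL * strain_al / strain_rock

def poisson_ratio(strain_transverse, strain_axial):
    """nu = |eps_transverse| / |eps_axial| from the two strain gauges."""
    return strain_transverse / strain_axial

def velocities(E, nu, rho):
    """Standard isotropic relations: Vs = sqrt(G/rho), Vp from E, nu, rho."""
    G = E / (2.0 * (1.0 + nu))
    vp = math.sqrt(E * (1.0 - nu) / (rho * (1.0 + nu) * (1.0 - 2.0 * nu)))
    vs = math.sqrt(G / rho)
    return vp, vs

# hypothetical mudstone-like numbers, for illustration only
vp, vs = velocities(E=40e9, nu=0.20, rho=2500.0)
print(f"Vp = {vp:.0f} m/s, Vs = {vs:.0f} m/s")
```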
The P-wave and S-wave velocities of the rocks at the 16 frequencies are shown in Fig. 2. Fig. 2a shows the relationship between the P-wave velocity and frequency. The P-wave velocity of the gritstone varies little at different frequencies, with an average of 1,651 m/s. The P-wave velocities of the fine sandstone, argillaceous siltstone, and mudstone also exhibited little variability, with average values of 2,840 m/s, 3,191 m/s, and 4,214 m/s, respectively. Fig. 2b shows the relationship between the S-wave velocity and frequency; the trends are similar to those for the P-wave velocity, but with smaller variations and average values of 1,233 m/s, 1,492 m/s, 1,778 m/s, and 3,514 m/s for the four materials. By comparing the propagation velocity of sound waves in the four types of rocks, we found that both the P-wave and S-wave velocities increased successively (from gritstone to mudstone) at any given frequency.

Wave impedance properties of the coal seam roof

Wave impedance inversion is important for identifying rock properties and is a common method used to distinguish sandstone from mudstone based on the sonic velocity and density log curves. Figure 3 shows the P-wave impedance surface properties of the #3 coal seam roof in the study area. The roof was taken as 10 m above the bottom boundary of the #3 coal seam, and the P-wave impedance varies between the roof and the boundary. According to the characteristics of the P-wave impedance of the roof, the impedance was divided into three classes: yellow-red P-wave impedance (6,400 m/s·g/cc to 7,800 m/s·g/cc), blue P-wave impedance (7,800 m/s·g/cc to 9,400 m/s·g/cc), and purple P-wave impedance (> 9,400 m/s·g/cc). The P-wave impedance, ranging from approximately 6,400 m/s·g/cc to 10,000 m/s·g/cc, differed greatly across the area and displayed obvious partitioning and zoning.

Gas content of the study area

The gas properties in the study area were analyzed with a Kriging interpolation method based on the gas contents of the boreholes in the #3 coal seam (Fig. 4). The overall gas content in the area was high but varied considerably.

Table 1 (excerpt): transverse wave velocity V_s = √(G/ρ).

3D seismic data and gas content of the study area

The 3D seismic data had a CMP mesh density of 5 m × 10 m, with 16 stacked layers. First, the data underwent a series of processes, including static correction, amplitude preservation, denoising, deconvolution, and velocity analysis. Then, post-stack wave impedance inversion was performed using the processed data. In terms of the stratum of the coal seam roof, the depth of the coal seam roof can be determined by observing the strong reflection interface formed by the coal seam and the surrounding rock. From the post-stack wave impedance inversion data, the P-wave impedance surface properties of the coal seam roof strata were extracted. To identify the coal seam roof lithology, the properties of the coal seam roof were correlated and matched with the product of the propagation velocity of the sound waves and the density of the rock measured in the low-frequency experiments. Using the gas contents measured in boreholes, the content difference and distribution were analyzed. The study area was 3.57 km long and 0.87 km wide, with an area of approximately 3.10 km². The gas contents were collected from 20 wells, and measurements were obtained from cores via a direct analytic method for the #3 coal seam. When analyzing the gas content, a content distribution map was drawn using the Kriging interpolation method according to the gas contents of the boreholes.

Figure 2. Relationships between the propagation velocity of sound waves in the four types of rocks and frequency.
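For the gas-content mapping step described above, an ordinary Kriging interpolation of borehole values onto a grid covering the 3.57 km × 0.87 km study area might look like the following sketch. The borehole coordinates and gas contents are invented for illustration, and the use of the pykrige package is an assumption; the authors do not name their software.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# hypothetical borehole coordinates (km) and gas contents (m^3/t)
x = np.array([0.2, 0.9, 1.6, 2.4, 3.1])
y = np.array([0.15, 0.60, 0.30, 0.70, 0.40])
gas = np.array([21.5, 27.8, 19.4, 28.6, 23.0])

ok = OrdinaryKriging(x, y, gas, variogram_model="spherical")
grid_x = np.linspace(0.0, 3.57, 72)            # study area ~3.57 km long
grid_y = np.linspace(0.0, 0.87, 18)            # ~0.87 km wide
z, var = ok.execute("grid", grid_x, grid_y)    # interpolated gas-content map
print(z.shape)                                 # (len(grid_y), len(grid_x))
```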
The study area (3.10 km²) exhibited a gas content ranging from approximately 19 m³/t to nearly 29 m³/t, a difference of nearly 10 m³/t. Three sub-areas with gas contents greater than 27 m³/t were observed and are colored red in Figure 5. Four sub-areas with gas contents lower than 20 m³/t were observed and are colored yellow in Figure 5.

Discussion

Wave impedance inversion can be used to analyze the properties of coal seam roof strata. Wave impedance is the product of the propagation velocity of a sound wave in rock and the density of the rock, and it can be used to identify lithology. However, sandstone and mudstone cannot always be distinguished simply by wave impedance, which can be influenced by many factors. Moreover, the logging velocity used in the wave impedance inversion was obtained at high frequencies. After well logging calibration was performed for the velocity and density log curves and the 3D seismic data, wave impedance inversion was performed for data interpolation. Low-frequency experiments were performed to obtain the propagation velocity of sound waves in the four types of rock, and the velocity was used to calculate the theoretical wave impedance. The results did not match the wave impedance values obtained by 3D seismic inversion at higher frequencies. For example, Murphy (1984) measured the propagation velocity of sound waves in sandstone at different frequencies using the resonance bar technique, and the results showed that the propagation velocity of sound waves in sandstone at 500 kHz was 10-25% higher than that at 5 kHz. Tutuncu et al. (1998) measured the propagation velocity of sound waves in saline-saturated sandstone, and the P-wave velocity and S-wave velocity increased by 33% and 20%, respectively, as the frequency increased from 10 Hz to 1 MHz. Batzle et al. (2006) measured the P-wave velocity of a high-porosity, high-permeability sandstone (porosity of 35% and permeability of 8.7 × 10⁻¹² m²) using a stress-strain method and an ultrasonic measurement technique; the P-wave velocity of the 90% saturated sandstone at 0.8 MHz was 32% higher than that at 5-10 Hz (Batzle et al., 2006). These experiments confirmed that the propagation velocity of sound waves at seismic frequencies is lower than that at ultrasonic frequencies.

According to the collected borehole data, the coal seam roof was mainly composed of gritstone, medium sandstone, fine sandstone, sandy mudstone, siltstone, and mudstone. In this study, the propagation velocity of sound waves was measured at both low and ultrasonic (1 MHz) frequencies. The P-wave velocity at low frequencies was lower than that at ultrasonic frequencies, but the P-wave velocities of rocks with different lithologies differed considerably. To better match the theoretical wave impedance calculated from the experimental results, the P-wave velocity was increased by 20% to compensate for the difference in velocity due to the difference in frequency. The specific data are shown in Table 2 (*V_p-LF is the V_p value under the low-frequency condition, and V_p-1MHz is the V_p value under the 1 MHz condition). The difference in wave impedance between lithologies was essential for matching the theoretical wave impedance with the wave impedance from 3D seismic inversion. The P-wave velocity of the gritstone was low, possibly due to the existence of micro-fissures.
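The matching procedure described here, raising the low-frequency Vp by 20% and multiplying by density to obtain a theoretical impedance that is then binned into the three colour classes, can be sketched as follows. The densities are back-calculated from the impedances reported in the next paragraph and should be read as illustrative, not as measured values.

```python
# Low-frequency Vp (m/s) from the experiments; densities (g/cc) are assumed
# here for illustration -- the paper reports only the resulting impedances.
ROCKS = {
    "gritstone":              (1651, 2.30),
    "fine sandstone":         (2840, 1.95),
    "argillaceous siltstone": (3191, 2.41),
    "mudstone":               (4214, 2.50),
}

def theoretical_impedance(vp_lowfreq, density, freq_correction=1.20):
    """Impedance = (Vp raised by ~20% to mimic seismic-band dispersion) x density."""
    return vp_lowfreq * freq_correction * density

def impedance_class(z):
    """Colour classes used for the #3 coal-seam roof map (m/s.g/cc)."""
    if z > 9400:
        return "purple (mudstone-like)"
    if z > 7800:
        return "blue (argillaceous siltstone-like)"
    if z >= 6400:
        return "yellow-red (fine sandstone-like)"
    return "below mapped range"

for name, (vp, rho) in ROCKS.items():
    z = theoretical_impedance(vp, rho)
    print(f"{name:<23s} {z:7.0f}  ->  {impedance_class(z)}")
```

With these assumed densities the gritstone falls below the mapped impedance range, while the fine sandstone, argillaceous siltstone, and mudstone land in the yellow-red, blue, and purple classes, respectively, mirroring the matching reported below.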
The P-wave impedance of the gritstone, with a weighted value of 4,557 m/s·g/cc, did not match any of the impedance classes obtained by 3D seismic inversion. The P-wave impedance of the fine sandstone was 6,646 m/s·g/cc, which fell within the yellow-red class (6,400 m/s·g/cc to 7,800 m/s·g/cc) when matched against the wave impedance obtained by 3D seismic inversion. The P-wave impedance of the argillaceous siltstone was 9,228 m/s·g/cc, which is within the blue class (7,800 m/s·g/cc to 9,400 m/s·g/cc). Additionally, the P-wave impedance of the mudstone exceeded 10,000 m/s·g/cc, which is within the purple class (> 9,400 m/s·g/cc).

The lithology of the identified coal seam roof was tested against the known borehole data. The depth of the coal seam roof was taken as 10 m above the bottom boundary of the coal seam, and 2,000 m/s was selected as the average propagation velocity of the sound wave. Fig. 5 shows the coal seam of the selected test well and the lithology of its top and bottom. The identified coal seam roof lithology was consistent with that revealed by the borehole data. In terms of lithological identification, sandstone could not be distinguished from fine sandstone, and sandy mudstone could not be distinguished from mudstone; however, the fine sandstone, siltstone, and mudstone could be distinguished from one another.

In addition to the coal seam itself, the coal seam roof also influences the enrichment of coalbed methane. During the diagenetic evolution of coal, a ton of coal, from lignite to anthracite, can produce 268-393 m³ of gas (Zhang & Li, 1988), but only a fraction of this coalbed methane is eventually preserved. The coal seam roof directly contacts the coal seam and undergoes diagenetic evolution together with it, and the compactness of the coal seam roof has a direct influence on the residual coalbed methane content (Li et al., 2003; Liu, Li, Yang, Liu, & Yan, 2012). By comparing and analyzing the coal seam roof lithology and gas content, we found that the coal seam roof lithology is related to the enrichment of coalbed methane. The coal seam roof in the three areas with high gas contents was mainly mudstone and argillaceous siltstone, and the coal seam roof in the four areas with low gas contents was mainly fine sandstone. The mudstone and argillaceous siltstone, which are highly compact, were more conducive to the preservation of coalbed methane, whereas the poorly compact fine sandstone was not.

Conclusion

In this paper, the propagation velocity of sound waves in four types of rock was measured at 3-100 Hz using a low-frequency stress-strain method. The P-wave and S-wave velocities of the gritstone, fine sandstone, argillaceous siltstone, and mudstone increased successively at the same frequency. The propagation velocity of sound waves in the four types of rock at seismic frequencies was lower than that at higher frequencies. Based on the P-wave velocities obtained in the tests, the theoretical P-wave impedance of the four types of rocks was calculated, and the values were then matched with the wave impedance values from seismic inversion for the coal seam roof. The roof was mainly composed of mudstone, argillaceous siltstone, and fine sandstone, and the boreholes verified this lithology. The lithology of the roof is related to the enrichment of coalbed methane: the roof in areas with high gas contents was mainly mudstone and argillaceous siltstone, and that in areas with low gas contents was mainly fine sandstone.
Compact areas of the coal seam roof were more conducive to the formation and preservation of coalbed methane.
3,970.4
2019-10-01T00:00:00.000
[ "Environmental Science", "Geology" ]
Improved drug targeting to liver tumor by sorafenib-loaded folate-decorated bovine serum albumin nanoparticles Abstract Background: To prepare sorafenib-loaded folate-decorated bovine serum nanoparticles (FA-SRF-BSANPs) and investigate their effect on the tumor targeting. Methods: The nanoparticles were characterized and evaluated by in vivo and in vitro experiments. Results: SRF-loaded BSA nanoparticles (SRF-BSANPs) was first prepared and modified with folic acid by chemical coupling to obtain FA-SRF-BSANPs. The average particle size, zeta potential, entrapment efficiency, and drug loading of the optimized FA-SRF-BSANPs were 158.00 nm, −16.27 mV, 77.25%, and 7.73%, respectively. The stability test showed that FA-SRF-BSANPs remained stable for more than 1 month at room temperature. The TEM analysis showed that the surface of FA-SRF-BSANPs was nearly spherical. XRD analysis showed that the drug existed in. the nanoparticles in an amorphous state. FA-SRF-BSANPs can promote the intracellular uptake of hepatoma cells (SMMC-7721) with the strongest inhibitory effect compared with SRF-BSANPs and sorafenib solution. Furthermore, the tumor targeting of FA-SRF-BSANPs (Ctumor/Cblood, 0.666 ± 0.053) was significantly higher than those of SRF-BSANPs (Ctumor/Cblood, 0.560 ± 0.083) and sorafenib-solution (Ctumor/Cblood, 0.410 ± 0.038) in nude mice with liver cancer. Conclusion: FA-modified albumin nanoparticles are good carriers for delivering SRF to the tumor tissue, which can improve the therapeutic effect and reduce the side effects of the drug. Introduction Sorafenib (SRF, Figure 1) is a multi-target kinase inhibitor, which is a first-line drug for the treatment of advanced liver cancer. It has obvious antitumor activity (Minami et al. 2008;Kudo 2012). Its mechanism of action is to inhibit the proliferation of tumor cells by inhibiting the receptor tyrosine kinase KIT and FLT-3 and the Raf/MEK/ERK serine/threonine kinase pathway (Strumberg et al. 2007;Miller et al. 2009;Abdel-Rahman & Elsayed 2013;Shimada et al. 2014). It inhibits neoplastic angiopoiesis by inhibiting the upstream VEGFR and PDGFR and the downstream Raf/MEK/ERK pathway. However, its low oral bioavailability (Liu et al. 2016) and side reactions (Strumberg 2012) greatly limit its clinical application. Therefore, developing a new and effective targeting drug delivery system is urgent to increase the drug concentration in the tumor and to reduce the effect of the drug on normal tissue cells, thereby improving the antitumor effect of the drug. Folate (FA) can enter cells through a receptor-mediated endocytosis pathway. The folic acid receptor (FR) is expressed in the lungs, glands, kidney, choroid plexus, and placenta at low level in physiological state, and it is highly expressed in tumor tissue with 100-300 times higher than that in normal tissues (Weitman et al. 1992;Ross et al. 1994;Sudimack & Lee 2000). The endocytosis mediated by FR can be used to absorb the FA, FA conjugate, and FA antagonists effectively. Therefore, the FR is a good target for the target delivery system for many solid tumors (Lu & Low 2002). As a first-line drug for treatment of advanced liver tumors, very few studies on active targeting preparations for SRF have been reported. In previous reports, Li et al. (2015) developed a nanosized SRF/FA/PEG-PLGA-NP with both anticancer and magnetic resonance properties, which can improve the antitumor activity in vitro, but this conclusion was only verified at the cellular level. Zhang et al. 
(2013) prepared FA-functionalized polymeric micelles loaded with SPIONs and SRF, which can increase the concentration of SRF in HepG2 cells, but no results in vivo studies were presented. In the current study, SRF-loaded FA-decorated bovine serum nanoparticles (FA-SRF-BSANPs) were prepared through loading with bovine serum albumin (BSA) as the drug carrier and were modified with FA by chemical coupling. FA-SRF-BSANPs were characterized, and the active targeting functions of it were evaluated by in vivo and in vitro experiments, including cytotoxicity analysis, cell uptake test, liver tumor targeting evaluation, and so on. Materials and methods Materials SRF was kindly provided by the Gz Eastbang Pharmaceutical Technology Co., Ltd. (Guangdong, China). FA, N-hydroxysuccinimide (NHS), and 1-ethyl-3-(3-dimethyllamino propyl) carbodiimide hydrochloride were purchased from Chengdu Aikeda Chemical Reagent Co., Ltd. (Chengdu, China). Megestrol acetate (internal standard, IS) was purchased from the TiXiAi Chemical Company (Shanghai, China). All animals in this experiment were from the Department of Animal Science, Nanchang University. Preparation of SRF-BSANPs BSA (50.0 mg) was weighted and dissolved in 10.0 mL of normal saline to form 0.5% (w/v) carrier solution. The pH of the carrier solution was adjusted to 9.0 using 0.2 mol/L NaOH solution. An appropriate amount of SRF was dissolved in ethanol (12.0 mg/mL) as the oil phase. At room temperature, 417.0 mL of the oil phase was added dropwise to carrier solution and stirring for 1.0 h at 500 rpm/min, then 0.2 mL of 0.5% glutaraldehyde was added to the system and continuously stirred for 3 h. In the end, SRF-BSANPs were obtained. Preparation of FA-SRF-BSANPs The FA-SRF-BSANPs were prepared on the basis of SRF-BSANPs by the carboxyl group of FA amidatint with the active amine on the surface of albumin. The specific preparation process was as follows: (1) Preparation of folic acid active ester (FA-NHS) according to previous literature (Zhang et al. 2014) with slight modification. Briefly, 0.50 g of FA was dissolved in 15.0 mL of dimethyl sulfoxide (DMSO) containing 0.45 g of NHS and 0.25 g of EDCÁHCl. The mixture was allowed for stirring for 24 h at room temperature and dark environment. Insoluble impurities were removed through filtration. The filtrate was mixed with two times volume of acetone and ether mixture (V:V ¼ 30:70) to precipitate FA-NHS Dubey et al. 2015) and filtrated to obtain the yellow precipitate. After washing three times with ether and vacuum drying, light-yellow FA-NHS was obtained. (2) The FA-NHS was dissolved in 1.0 mL of sodium carbonate/sodium bicarbonate buffer solution (pH 10.0). Newly prepared SRF-BSANPs (10.0 mL) was obtained and adjusted to pH 10.0 using 0.2 mol/L NaOH solution. FA-NHS solution was added dropwise to SRF-BSANPs, and FA-SRF-BSANPs were obtained after stirring for a period of time at room temperature and dark environment. The flowchart for the preparation of nanoparticles is shown in Supplementary Figure S1. The FA-SRF-BSANPs were placed in a dialysis bag (MWCO 3500 Da) and dialyzed for 60 h with phosphate buffered solution (PBS, pH 7.4). The dialysate was replaced every 3 h to remove excessive FA-NHS. The purified FA-SRF-BSANPs suspension was freeze dried to obtain FA-SRF-BSANPs powder. Evaluation of FA content in FA-SRF-BSANPs The successful coupling of FA and albumin is the key to this experiment. 
It was characterized by Fourier Transform infrared spectroscopy (FTIR) and proton nuclear magnetic resonance ( 1 H NMR). FTIR: FA powder, SRF-BSANPs freeze-dried powder, and FA-SRF-BSANPs freeze-dried powder was mixed with KBr powder and made into thin slices. The thin slices were then analyzed using FTIR-8400 infrared spectrophotometer (SHIMADZU, Japan). 1 H NMR: Appropriate amount of FA, BSA, and FA-SRF-BSANPs freeze-dried powder were dissolved in deuterated DMSO (containing 0.03% TMS). After centrifugation at 13,000 rpm for 5 min, the supernatant was transferred into a 5 mm NMR tube. The samples were analyzed using NMR analyzer Brucker A-Vance-600 (Brucker, Switzerland). FA working curve for quantification: the amount of FA was accurately weighed and dissolved in deuterium DMSO, and then the FA solution with concentrations of 1.0 2.0, 4.0, 6.0, and 8.0 mg/mL was tested for 1 H NMR. After adjusting the baseline and phase, the peaks of TMS and FA (a multiple peak between 2 ppm and 2.1 ppm) were integrated. The ratio of the area of FA to that of TMS was recorded as A, FA concentration C was used as the x-axis, and A was the y-axis to develop the regression equation. Three samples of FA-SRF-BSANPs freeze-dried powder prepared from the same batch were dissolved in deuterated DMSO and analyzed by 1 H NMR. According to the above integration method and the working curve, the FA amount in FA-SRF-BSANPs was calculated. Particle size, zeta potential, encapsulation efficiency, and drug loading The particle size and zeta potential of FA-SRF-BSANPs were detected using Malvern Mastersizer (PSA NANO2590, Malvern Instruments, Malvern, UK). Entrapment efficiency and drug loading were determined by high speed centrifugation method (Dreis et al. 2007). The concentration of free SRF in the supernatant was determined by HPLC method (Wang et al. 2018). Surface morphology After dilution to a suitable concentration, nanoparticles were dropped onto copper grids and negatively stained with 2.0% phosphotungstic acid. The morphology of SRF-BSANPs and FA-SRF-BSANPs were observed under a transmission electron microscope (TEM, JEM-2100, JEOL, Tokyo, Japan). In vitro cytotoxicity assay Human normal hepatocyte LO2 cell line and liver cancer cell line SMMC-7721 were purchased from Beijing Solarbio Science & Technology Co., Ltd. and they were incubated in DMEM medium (including 10% fetal bovine serum, 100 U/mL penicillin, 100 U/mL streptomycin) in cell culture incubator (37 C, 5% CO 2 ). LO2 and SMMC-7721 cells were inoculated with the density of (5 Â 10 3 /well) in 96 well plates and incubated for 24 h. The cell culture media without drug were used as the control group, and the SRF solution, SRF-BSANPs, and FA-SRF-BSANPs were used as the experimental group. After the cells were adhered, the old medium was removed, and 0.2 mL of medium containing drug was added to each well (three SRF preparations were diluted to 60.0, 40.0, and 20.0 mg/mL with the medium, respectively.) and incubated for 24 h. Then 15.0 mL MTT solution (5 mg/mL) was added to each well in the dark. The medium was removed after 4 h, and the DMSO was added to dissolve formazan, followed by measurement of the absorbance at 490 nm (A) with DNM-9602A microplate reader (Beijing PERLONG medical company) to calculate the inhibition ratio. 
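The text does not spell out how the inhibition ratio is computed from the 490 nm absorbances; the sketch below uses the usual MTT definition relative to the untreated control. The blank correction and the triplicate absorbance values are hypothetical details added for illustration.

```python
import numpy as np

def inhibition_ratio(a_treated, a_control, a_blank=0.0):
    """Usual MTT definition: 1 - (A_treated - A_blank)/(A_control - A_blank),
    expressed as a percentage."""
    a_treated = np.asarray(a_treated, dtype=float)
    return (1.0 - (a_treated - a_blank) / (a_control - a_blank)) * 100.0

# hypothetical triplicate absorbances at 490 nm for one drug concentration
print(inhibition_ratio([0.42, 0.45, 0.44], a_control=0.88, a_blank=0.05))
```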
Hepatoma carcinoma cell uptake experiment

In order to investigate the uptake of albumin nanoparticles and FA-modified albumin nanoparticles by hepatoma cells, fluorescein isothiocyanate (FITC), instead of SRF, was used to prepare FITC-BSANPs and FA-FITC-BSANPs. The preparation process of FITC-BSANPs and FA-FITC-BSANPs was the same as that of SRF-BSANPs and FA-SRF-BSANPs. SMMC-7721 cells in the logarithmic growth period were digested with trypsin, inoculated into six-well plates (1 × 10⁵/well), and cultured in the incubator. After the cells had adhered, the old medium was removed, and 3.0 mL of medium containing nanoparticles (FITC-BSANPs or FA-FITC-BSANPs, 20.0 mg/mL) was added to the wells and incubated for 2 h. The cells were washed three times with PBS and were observed and photographed under an inverted fluorescence microscope (OLYMPUS IX71, Beijing OLYMPUS Sales Service Co., Ltd.). In addition, another 3.0 mL of medium containing nanoparticles (FITC-BSANPs and FA-FITC-BSANPs diluted to 20.0, 10.0, and 5.0 µg/mL with the medium) was added to the wells and incubated for 2 h. The cells were washed three times with PBS and digested with trypsin. After centrifugation at 1,000 rpm for 5.0 min, the obtained cell pellet was lysed in 4.0 mL of methanol with ultrasound and centrifuged again. The fluorescence intensity in the supernatant was measured using a fluorescence spectrophotometer (λex = 494.0 nm, λem = 518.0 nm; PerkinElmer LS55, Perkin Elmer Enterprise Management Co., Ltd.).

Liver targeting of FA-SRF-BSANPs in healthy rats

Animal experimentation in this study was approved by the Animal Care Committee of Nanchang University. Forty-five healthy adult female SD rats were randomly divided into three groups. Each rat was given a single oral dose (7.5 mg/kg) of SRF suspension, SRF-BSANPs, or FA-SRF-BSANPs. Three rats in each group were sacrificed at 2.0, 6.0, 10.0, 24.0, and 58.0 h, and the blood and liver tissues were collected. The serum and liver tissues were stored at −40 °C. The drug targeting index (DTI) and drug selectivity index (DSI) were used as indicators to quantitatively evaluate the distribution characteristics of the targeted preparations in vivo. DTI was used to compare the tendency of different preparations to accumulate in an organ. DSI was used to compare the distribution of targeted preparations between target organs and non-target organs at a certain time. The formulas for both were as follows:

DTI = (drug content of the liver tissue at time T after administration of the targeted preparation) / (drug content of the liver tissue at time T after administration of the non-targeted preparation)

DSI = (drug content of the liver tissue at time T) / (drug content of the blood at time T)

Tumor targeting of FA-SRF-BSANPs in nude mice

Four-week-old male athymic nude mice were obtained from the Department of Animal Science of Nanchang University and adapted to the environment for a week. A total of 0.1 mL (1 × 10⁷ cells) of SMMC-7721 cells in the logarithmic growth phase was subcutaneously injected into the right hind limb of each nude mouse, and the size of the tumor was observed. When the tumor grew to approximately 1 cm³, the nude mice were randomly divided into three groups, five mice in each group, and given a single intraperitoneal injection of 7.5 mg/kg SRF-suspension, SRF-BSANPs, or FA-SRF-BSANPs. At 1.0 h after administration, the nude mice were killed, and blood, liver, and tumor tissues were collected.
A small amount of tumor tissue was fixed with 4% paraformaldehyde for HE staining. Serum, liver, and tumor tissue were kept at −40 °C for further SRF quantification. The methods for biological sample treatment and quantification of the SRF concentration followed our previous study (Wang et al. 2018).

Preparation of nanoparticles

In this study, albumin nanoparticles were prepared by an improved self-assembly method. This method is simple and convenient, without special equipment requirements. In order to obtain high-quality nanoparticles, five factors, including the concentration of the carrier solution, pH value, amount of the oil phase solution, amount of glutaraldehyde, and curing time of the crosslinking, were investigated in this experiment. The optimum conditions were described. In addition, in order to obtain nanoparticles with a high FA coupling amount during the preparation of FA-SRF-BSANPs, we also optimized the amount of FA-NHS and the reaction time. The results showed that FA coupling was highest when the dosage of FA-NHS was 5.0 mg and the reaction time was 8 h.

Evaluation of FA content in FA-SRF-BSANPs

The infrared spectra of FA, SRF-BSANPs, and FA-SRF-BSANPs are shown in Supplementary Figure S2(a). In the infrared spectrum of FA-SRF-BSANPs, the characteristic peak of FA belonging to (-OH) at 932 cm⁻¹ disappeared, and the amino characteristic peak of BSA at 612 cm⁻¹ on the lysine was not obvious, suggesting that a reaction occurred between the amino group and the carboxyl group. In addition, three characteristic peaks at 1664 cm⁻¹ (C=O), 1548 cm⁻¹ (N-H), and 1036 cm⁻¹ (C-N) represented the new amide bonds formed by the coupling of FA and BSA in the infrared spectrum of FA-SRF-BSANPs. According to the above data, FA was successfully coupled with BSA. The ¹H NMR spectra of FA, SRF-BSANPs, and FA-SRF-BSANPs are shown in Supplementary Figure S2(b). We can see that FA and FA-SRF-BSANPs had multiple peaks at 1.85-1.95, 2.00-2.10, and 2.25-2.35 ppm, while SRF-BSANPs had no multiple peaks at these three sites, indicating that FA and BSA were successfully coupled. In addition, the characteristic peak (11.40 ppm) of the active H on FA disappeared in the ¹H NMR pattern of FA-SRF-BSANPs, which also demonstrated that FA reacted with BSA. According to the standard curve equation of FA (Supplementary Figure S3, A = 0.5317C + 0.149, R² = 0.9989), the FA coupling capacity was calculated to be 62.46 ± 1.98 µg (FA)/mg (BSA).

Particle size, zeta potential, entrapment efficiency, drug loading, and stability of the nanoparticles

The morphological structures of SRF-BSANPs and FA-SRF-BSANPs were observed under a transmission electron microscope (Supplementary Figure S4(b,d)). According to the images, the nanoparticles were spherical with smooth surfaces. The particle size, zeta potential, entrapment efficiency, and drug loading of SRF-BSANPs and FA-SRF-BSANPs are shown in Table 1. The average particle size of SRF-BSANPs was 129.42 ± 0.61 nm, and the size of the albumin nanoparticles increased slightly (158.00 ± 2.43 nm) after the coupling of FA, which was consistent with the literature (Dubey et al. 2015). The particle size distributions of SRF-BSANPs and FA-SRF-BSANPs are shown in Supplementary Figure S4(a,c) and were relatively uniform. The physical stability of the prepared FA-SRF-BSANPs was evaluated in a brown bottle over 2 months at 25 ± 2 °C. FA-SRF-BSANPs were sampled on the 1st, 15th, 30th, 45th, and 60th day of storage.
The stability of the nanoparticles was evaluated by the appearance, EE, and particle size. The results are shown in Supplementary Table T1. During 30 days of storage, the appearance, EE, and particle size of FA-SRF-BSANPs did not change substantially, indicating that the FA-SRF-BSANPs are stable at room temperature for 1 month. XRD analysis The results of XRD analyses of FA, BSA, SRF, the physical mixture, and FA-SRF-BSANPs are shown in Supplementary Figure S5. SRF had sharp characteristic peaks at 2θ = 13.3°, 17.7°, and 22.9°, indicating that SRF exhibited a crystalline structure, and the three characteristic peaks can be observed in the XRD pattern of the physical mixture. However, the three characteristic peaks completely disappeared in the XRD pattern of the freeze-dried FA-SRF-BSANPs powder, which indicated that SRF was amorphously enclosed in the nanoparticles. In vitro cytotoxicity assay of nanoparticles The results of the cytotoxicity test are shown in Figure 2(a,b). As shown in Figure 2(a), the toxicity of SRF-solution to LO2 cells was slightly stronger than that of SRF-BSANPs and FA-SRF-BSANPs at the same concentrations, but no statistical difference was observed. Interestingly, when the SRF concentration was 40.0 mg/mL, the inhibition rates of SRF-solution, SRF-BSANPs, and FA-SRF-BSANPs for LO2 cells (49.93%, 47.59%, and 48.18%, respectively) were significantly higher than those at 20.0 mg/mL (inhibition rates: 19.96%, 15.63%, and 15.01%, respectively), but when the SRF concentration was increased to 60.0 mg/mL, the cell inhibition rates (51.42%, 48.47%, and 49.47%, respectively) did not increase significantly. This might be because the optimal concentration was between 40.0 and 60.0 mg/mL. Figure 2(b) shows that FA-SRF-BSANPs exerted the highest SMMC-7721 cell inhibition rate at all three concentration levels, compared with SRF-solution and SRF-BSANPs. The FA-modified SRF-BSANPs had significant targeting ability to hepatoma cells, which can enhance the anti-cancer effect of SRF in vivo. Figure 2(c,d) shows that the fluorescence intensity of the FA-FITC-BSANPs group was obviously stronger than that of the FITC-BSANPs group. The fluorescence intensity of the FA-FITC-BSANP group was 2.84, 3.63, and 6.43 times that of the FITC-BSANP group at concentrations of 20.0, 10.0, and 5.0 μg/mL, respectively. The uptake of FA-FITC-BSANPs by SMMC-7721 cells was greater than that of FITC-BSANPs, further demonstrating that the FA-modified albumin nanoparticles had good targeting to hepatoma cells. Investigation of liver targeting of FA-SRF-BSANPs in healthy rats The values of DTI and DSI after single oral administration of SRF-BSANPs, FA-SRF-BSANPs, and SRF-suspension are shown in Table 2. The mean values of DTI in the SRF-BSANPs group and FA-SRF-BSANPs group were 26.85 ± 7.62 and 24.21 ± 7.94, respectively, which showed that the two nanoparticles exhibited good liver targeting compared with SRF-suspension. Table 2 also shows that both SRF-BSANPs and FA-SRF-BSANPs had higher DSI values at all time points after oral administration than the SRF-suspension group. The average values of DSI in the SRF-BSANPs group (6.14 ± 0.69) and FA-SRF-BSANPs group (6.93 ± 0.43) were 2.79 and 3.15 times that of the SRF-suspension group (2.20 ± 0.48), respectively. SRF-BSANPs and FA-SRF-BSANPs exhibited good targeting to rat liver compared with SRF-suspension, but no difference was observed between SRF-BSANPs and FA-SRF-BSANPs.
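The DTI and DSI defined in the Methods are simple ratios of measured drug contents at a common time point. As a minimal illustration (not the authors' code; the concentration values below are hypothetical), they can be computed as follows:

```python
def drug_targeting_index(liver_targeted, liver_nontargeted):
    """DTI: liver drug content after the targeted preparation divided by the
    liver drug content after the non-targeted preparation at the same time point."""
    return liver_targeted / liver_nontargeted

def drug_selectivity_index(liver, blood):
    """DSI: liver drug content divided by blood drug content at the same time point."""
    return liver / blood

# Hypothetical SRF contents at one sampling time (arbitrary units)
print(drug_targeting_index(liver_targeted=12.4, liver_nontargeted=0.5))  # DTI
print(drug_selectivity_index(liver=12.4, blood=1.9))                     # DSI
```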
Investigation of tumor targeting of FA-SRF-BSANPs in nude mice In the current study, the nude mouse model of liver tumor was successfully established (Figure 3). The nude mice were randomly divided into three groups. The mice were given SRF-solution, SRF-BSANPs, or FA-SRF-BSANPs in a single IP administration. After 1 h, the nude mice were killed and samples of blood, liver, and tumor were collected. The content of SRF in each sample was measured. In order to characterize the targeting of the SRF preparations more intuitively, the ratio of the SRF concentration in the nude mouse liver or tumor sample to the SRF concentration in the blood sample (C_liver/C_blood or C_tumor/C_blood) was analyzed, and the results are shown in Figure 4. The C_liver/C_blood of the FA-SRF-BSANPs group or SRF-BSANPs group was significantly greater than that of the SRF-solution group, but no statistical difference was observed between the FA-SRF-BSANPs group and the SRF-BSANPs group, indicating that FA-SRF-BSANPs and SRF-BSANPs had a certain liver tissue targeting. However, the FA did not increase the drug concentration in normal liver tissue, which was consistent with the results in the non-tumor-bearing healthy rats. More importantly, the C_tumor/C_blood values of the SRF-BSANPs group (0.560 ± 0.083) and FA-SRF-BSANPs group (0.666 ± 0.053) were significantly higher than those of the SRF-solution group (0.410 ± 0.038). The difference between the SRF-BSANPs group and the FA-SRF-BSANPs group was also significant (p < .05), indicating that FA-SRF-BSANPs and SRF-BSANPs had obvious targeting of tumor tissue, with FA-SRF-BSANPs showing the stronger targeting effect and significantly increasing the concentration of SRF in tumor tissue. Self-assembled polymer systems at the nanometer scale have attracted much attention, especially in biomedical fields (Zhao et al. 2014; Li et al. 2016). Drugs are dispersed, encapsulated, and adsorbed on polymer particles. They are released through cystic wall leaching, permeation, and diffusion, and can also be released by the dissolution of the matrix itself. Such systems have the advantages of targeting, controlled release, increased absorption and bioavailability of drugs, increased solubility of insoluble drugs, improved drug stability, and reduced adverse reactions (Wang et al. 2015). Since the FDA approved the clinical use of the albumin-bound paclitaxel nanoparticle suspension injection in 2005, albumin nanoparticles have received considerable clinical attention as targeting drug carriers (Zhao et al. 2015). Albumin is an endogenous substance and has the advantages of being biodegradable, nontoxic, harmless, non-immunogenic, easy to purify, and soluble in water. Drugs can be physically adsorbed or covalently bound to the surface of albumin nanoparticles (Hawkins et al. 2008; Kim et al. 2011; Altintas et al. 2013). It has been recognized as an ideal carrier material for preparing nanoparticles. In addition, the active groups on the surface of the albumin nanoparticles provide effective sites for chemical modification, and various targeting ligands can be coupled to them (Steinhauser et al. 2008; Ulbrich et al. 2009; Kouchakzadeh et al. 2012; Kouchakzadeh et al. 2013), thereby making the nanoparticles multifunctional. FA coupling and FA antagonists may be effectively used in targeted delivery systems for many tumor cells (Sudimack & Lee 2000; Pan & Lee 2004).
Aiming to enhance the antitumor effect of SRF, we prepared the FA-SRF-BSANPs by amidation of FA and albumin nanoparticles under alkaline conditions. A series of optimization and characterization studies of the nanoparticles were carried out. Approximately spherical nanoparticles with a particle size of 158.00 ± 2.43 nm and a zeta potential of −16.27 ± 0.97 mV were obtained. Particles with zeta potential values ranging from −30 mV to +30 mV do not settle quickly and should be stored in powder form or suspended in saline solution or water and shaken well before administration (Basu et al. 2012). Under the transmission electron microscope, FA-SRF-BSANPs were observed to be spherical and uniform in size. XRD results showed that SRF was encapsulated in the nanoparticles in amorphous form. The encapsulation efficiency and drug loading of the nanoparticles were high, and they remained physically stable for more than one month at room temperature. To the best of our knowledge, this is the first report on active-targeting nanoparticles with improved drug targeting to liver tumors. LO2 cells are normal human hepatocytes, and FA receptors are expressed at low levels on their surface. Therefore, no significant difference was observed in the inhibitory effect of SRF-BSANPs and FA-SRF-BSANPs on LO2 cells. SMMC-7721 cells are human hepatoma cells, and FA receptors are highly expressed on their surface. The FA molecules on the surface of FA-SRF-BSANPs specifically bound to the FA receptor on the surface of SMMC-7721 cells, thereby delivering drugs to HCC cells. In addition, to understand the difference in cellular uptake of the nanoparticles by hepatoma carcinoma cells, the uptake of FITC-BSANPs and FA-FITC-BSANPs by SMMC-7721 cells was studied in this experiment. FA-FITC-BSANPs showed much higher fluorescence intensity in SMMC-7721 cells in comparison with FITC-BSANPs. Therefore, compared with SRF-BSANPs, higher cellular uptake and a stronger inhibitory effect of FA-SRF-BSANPs on HCC cells were observed, demonstrating that FA-modified albumin nanoparticles had good targeting to hepatoma cells. The liver and tumor targeting of SRF-BSANPs and FA-SRF-BSANPs was investigated in this study. DSI and DTI values were chosen as indicators to quantitatively evaluate the distribution characteristics of the targeted preparations in vivo. A DSI value greater than 1.0 indicates that the content of drug in liver tissue is higher than that in blood at a given time; the greater the DSI value, the better the selectivity of the preparation for liver tissue. The results showed that the DSI values of the two nanoparticles were higher than those of the suspension group at all time points after oral administration, indicating that the nanoparticles exhibited good liver targeting compared with SRF-suspension. (Figure caption: Ratio of SRF content in liver (a) or tumor tissue (b) to SRF content in serum after IP administration of SRF-solution, SRF-BSANPs, or FA-SRF-BSANPs (mean ± SD, n = 5). * indicates a statistically significant difference between two groups (p < .05, independent-sample t-test).) Compared with SRF-suspension, SRF-BSANPs and FA-SRF-BSANPs exhibited good liver targeting in healthy rats, but no differences between them were observed. Furthermore, the liver tumor model was successfully established by subcutaneous injection of SMMC-7721 cells in nude mice.
Our results showed that the C_tumor/C_blood values of the FA-SRF-BSANPs group were significantly higher than those of the SRF-solution and SRF-BSANPs groups, indicating improved tumor targeting of FA-SRF-BSANPs. This is in agreement with the results of the cell experiments. Conclusion In this study, an FA-modified SRF albumin nanoparticle with an active targeting function was successfully prepared. A series of optimization and characterization studies of the nanoparticles were carried out. Approximately spherical nanoparticles with a particle size of 158.00 ± 2.43 nm and a zeta potential of −16.27 ± 0.97 mV were obtained. The encapsulation efficiency and drug loading of the nanoparticles were high, and their physical stability was good. Compared with SRF-solution and SRF-BSANPs, FA-SRF-BSANPs can significantly increase the inhibitory effect of SRF on hepatoma carcinoma cells. Cellular uptake experiments also showed that FA-SRF-BSANPs significantly increased the uptake of SRF by hepatoma carcinoma cells. The tumor targeting of FA-SRF-BSANPs was significantly higher than that of SRF-BSANPs and SRF-solution in nude mice. FA-modified albumin nanoparticles are a good carrier for delivering SRF: the drug was relatively enriched in tumor tissue, which can improve the therapeutic effect and reduce the side effects of the drug.
6,185.2
2019-01-01T00:00:00.000
[ "Medicine", "Materials Science" ]
User Stress in Artificial Intelligence: Modeling in Case of System Failure The uninterrupted operation of systems with artificial intelligence (AI) ensures high productivity and accuracy of the tasks performed. The physiological state of AI operators indicates a relationship with an AI system failure event and can be measured through electrodermal activity. This study aims to model the stress levels of system operators based on system trustworthiness and physiological responses during a correct AI operation and its failure. Two groups of 18 and 19 people participated in the experiments using two different types of software with elements of AI. The first group of participants used English proofreading software, and the second group used drawing software as the AI tool. During the tasks, the electrodermal activities of the participants were measured as a stress level indicator. Based on the results obtained, the users' stress was determined and classified using logistic regression models with an accuracy of approximately 70%. The insights obtained can serve AI product developers in increasing the level of user trust and managing the anxiety and stress levels of AI operators. I. INTRODUCTION According to numerous official dictionaries, artificial intelligence (AI) is the capability of a machine to imitate intelligent human behavior [1]. The main modern application areas of AI are machine learning, big data, and driverless cars [2]. Widespread adoption of AI can be attributed to the positive perception of novel technologies and innovations by users and customers; however, issues of user acceptance and trust in AI technology are becoming increasingly pressing every year [3]. Positively perceived technological characteristics of AI improve technology acceptance and use. These characteristics can improve the safety and performance of AI systems. For example, human action and movement recognition can be used in smart homes and automated office AI environments to improve user comfort and safety [4], [5]. A prior study [4] elucidated this connection based on AI environments, which could detect user actions to increase user comfort. Subsequently, the corresponding safety issues were analyzed, and an automatic crime detection method for AI environments was proposed [5]. The positive impact of this approach was supported by previously developed technology acceptance models (TAMs). Various TAMs [6]–[8] demonstrated that personal characteristics such as usefulness, ease of use, and behavioral intention are important factors that influence technology acceptance and trust. Perceived usefulness and ease of use affect users' intentions and how they accept computer technologies [6]; they can also be used for TAM development. The three-layered trust model [7] demonstrated that operators' trust and perceived characteristics differ for each AI system type. In turn, an AI trust model incorporating the dynamics of trust, contextual AI use, and the influence of display characteristics was proposed [8]. A connection between trust and stress was observed when there were AI mistakes and unreliability. This aspect was explained through writing-task performance using AI software [9], [10]. In the case of AI system failures, the user's trust gradually decreases and their stress increases. In other words, AI errors lead to a higher cognitive workload, mental stress, and decreased user trust.
Previous studies have shown that establishing a positive relationship between user trust and their emotional stability during the AI system operation eased the adoption of new products; people tend to distrust AI products that exhibit failures during operation. This is evident in the case of autonomous vehicles and medical equipment because failures in these are directly related to the lives of users. A car accident report [11] showed that more than 25 crashes were related to autonomous vehicles in California from 2014 to 2017. In addition, it was found that proper automated system operations built trust and increased reliance on automated technology [12]. Moreover, AI mistakes and failures increased the cognitive workload of operators and the mental stress of users [13], decreasing their work efficiency [14]. User trust in AI technology is strongly related to its reliability and accuracy [7]. However, as the study indicates, it is difficult to achieve 100% accuracy, particularly in systems with high intelligence. Moreover, in a similar trend, users have exhibited varying degrees of sensitivity to AI reliability depending on the level of automation. The above studies demonstrated the importance of AI technology acceptance by users and its connection with AI adoption and user trust. Based on this, one of the primary objectives of current research is to encourage adopting new innovations and developing human trust in AI technologies using a modeling approach. The growth of user-perceived trust in AI is an important issue that can be implemented in two main ways. First, AI technology can be improved to prevent an AI failure. Second, the user's emotions and state of stress should be considered to protect the user from dangerous failure-related situations such as a loss of control while using medical equipment with AI elements, driverless cars, and other devices. One of the important conditions for this is the use of objective instruments for stress measurements. Previous studies [15]- [17] have reported that an accurate indicator of physiological stress and states can be human responses, such as heart rate and electrodermal activity (EDA). EDA produces continuous changes in the electrical characteristics of the skin [18]. It refers to the variation of the electrical conductance of the skin in response to sweat secretion [18]. An experimental scheme based on EDA signals, which allowed one to recognize stressful events, was proposed [15]; it was found that the correct processing of EDA signals was the base for driving stress detection. Three psychological stress levels (low, medium, and high) were detected [16] based on EDA signal metrics, Fischer projection, and linear discriminant analysis. The accuracy of the proposed methods reached the satisfactory level of 81.82% and cemented the ability of the EDA signal to characterize human emotional states. Physiological responses obtained from sensors such as changes in heart rate, skin conductance, and respiration were cemented as accurate indicators of human rest and activity states [17]. The heart rate variability metric has been proposed as the base to predict individual human severity of congestive heart failure using the Bayesian belief network algorithm [19]. Study [20] showed that an EDA signal is an accurate measure of stressful conditions. Research [21] presented methods for analyzing EDA data to detect driver stress; it was found that EDA and heart rate metrics are the most correlated with a state of stress. 
Studies [22], [23] also supported findings that EDA is an indicator of emotional and stressful changes in human cognitive activity. Based on previous studies [13], [14], it can be concluded that a failure in the operation of an artificial intelligence system impacts the user physiological state through user stress occurrence, and user stress, in turn, can be measured through EDA. Additionally, it was found [15]- [17] that the main metrics characterizing human stress and its levels are psychophysiological indicators such as EDA and heart rate. Machine learning methods, including regression analysis, are most commonly used to apply these metrics and separate stress levels. Previous studies have demonstrated several standard approaches to assessing human emotional states and cognitive processes. Research [24] discussed the prospect of using different approaches to evaluate cognitive processes in AI, including machine learning. They described the possibility of using machine learning to increase the efficiency of explainable AI in decision-making for the well-being of people. Machine learning methods were discussed [25] for data storage improvement in cloud computing and big data systems. The layer-wise perturbation-based adversarial training method used to predict hard drive health degrees based on different levels was proposed. Research guidelines were proposed to assess the scope of model explanation methods [26]. During this study, the following two approaches were adopted for predicting a learned model: linear and sum pooling convolutional network models. Researchers and designers have long recognized the importance of modeling stress and trust as significant influences on the acceptance and adoption of new technologies. On the basis of the aforementioned studies [6]- [8], [16], [24]- [26], the standard approaches to evaluate cognitive processes and human emotional states can be divided into the following five main groups: 1) survey to measure qualitative characteristics of an AI system; 2) regression modeling; 3) exploratory and confirmatory factor analysis including TAM; 4) predictive modeling; and 5) advanced machine learning modeling (such as random forests and support vector machines). In many studies, including the present research, user stress based on trust in the AI system depends on the reliability of the system and the success of the task performed. When the task is performed successfully and the system operates reliably, then the user's trust is at a low-stress level and vice versa. Research [27] reported that if a particular task is simultaneously performed by AI and humans then, the failure of AI may induct a higher level of mistrust even if the human error causes more damage. In this case, the application of AI may be further reduced. Study [28] modeled user trust in AI and found that transparency, while the AI system is in use, can have negative effects on operator trust. Contradictions occur when the user has high trust in the event of an AI system failure and vice versa. A system calibration has been proposed as an approach to improve the performance and interruptions when using AI tools. The impact of trust in the adoption of AI for financial investment services was studied [29]. A prediction regression model of the intention of AI use was developed, including user trust. Trust was found to be one of the variables with the ability to significantly predict AI technology adoption. The methodology of perceived trust evaluation in AI technology was proposed [30]. 
It was found that the perceived difficulty, perceived performance, success/failure of the task, and task difficulty were extracted as the important predictors of perceived trust in AI system use. Physiological signals (heart rate) were studied [31] during the modeling of perceived trust and purchase intention in the apparel business. Messages about an apparel firm's malevolent business practices caused the heart rate of the users to decelerate and the perception of the firm as untrustworthy to increase. It was found that perceived trust has a greater impact on a participant's overall purchase intention for a malevolent business. The existing literature is mainly devoted to the dependence of trust on subjective assessments of perceived characteristics. Despite the fact that previous studies have recognized the importance of combining qualitative and quantitative approaches of analysis and assessment of the AI user psychological state [7], [23], it was reported that commonly adopted modeling approaches could be related to factor analysis, development of TAMs or separation of the subjective and objective personal scales. The present research, by contrast, links an objective assessment (physiological EDA signal) with user stress and AI system trustworthiness. The aforementioned studies demonstrated the mutual influence between users' trust, stress, physiological signals, and task success. In this regard, this study investigated the relationship between users' physiological stress and physiological signals and how a user's trust in an AI system depended on its reliability-level of success or failure for each event. This helps understand how an AI user's stress and physiological state can be affected by a reliable or unreliable AI system if the performed task fails or is successfully completed. The proposed models also demonstrate the ability of physiological signals (EDAs) to detect and classify the stress levels of AI users. The methods developed in this study can be used to define the AI operator's stress levels. This study describes the mechanisms for building operator trust in AI technology from the user's perspective. This will help to adapt the AI systems to the psychological state of the operator and reduce the stress and fatigue of the users during the interaction. The insights from this study can help AI developers improve the attractiveness of their product among users and increase trust in their technologies throughout society. Designers can introduce our findings in interactive systems with AI elements such as mobile phones and apps, wristbands, wristwatches, tablets, and laptops. The objective of this study is to understand how the perceived trust and physiological responses of users, specifically an EDA signal, are affected during tasks using reliable and unreliable automation. The present research includes two different approaches to detect user stress when using an AI operation system based on two experiments with AI software, which are described in detail in the sections below. In this study, the uninterrupted operation of AI was understood as the correct operation of the AI system in accordance with its purpose. Correct operation of the AI software had to occur without delay in time and with the implementation of all intended functions. 
In the case of the performed drawing experiment, AI correct operation is recognition of the drawings and the provision of professional versions of the sketches, and in the English proofreading experiment, the provision of word verification with the correct translation. The productivity and accuracy of the tasks performed were assessed through the success and completeness of the final result obtained in accordance with the AI user expectations. In the case of a drawing experiment, this is a recognized image and correctly proposed options for sketches, and in the case of an English proofreading experiment, this is correct recognition of an error in a word and a satisfactory proposal for its replacement. A brief description of the general model development process (Figure 1) contains data collection, data preprocessing, analysis, results, and comparison of the classifiers. The data collection step describes the collected datasets and the EDA device during both experiments. Data preprocessing introduces the preliminary data processing for each experimental set. The analysis and results steps show the analysis methods used with the main results. A comparison of the classifiers provides a general comparison of the developed models. The model application shows the most applicable areas of AI for the developed methods. II. EXPERIMENTAL FRAMEWORK A. EXPERIMENT 1: DRAWING SOFTWARE USING AI 1) PARTICIPANTS A total of 18 healthy students (9 males and 9 females) from the same university with an average age of 22 years (standard deviation of 2.1 years) participated in this study. The participants did not have prior experience using this software and were informed that they could discontinue the experiment at any time. 2) EXPERIMENTAL SETUP A Samsung Galaxy tablet (SM-T536; Samsung Group, Seoul, Korea) with a display of 10.07'' (∼255 mm), pixels resolution of 1280 × 800, and running the Android operating system was used as the experimental equipment. The participants used the stylus pen supplied with the tablet for interaction with the software. The correct operation of the devices was verified throughout the experiments. Samsung Galaxy tablet was chosen owing to its satisfactory quality that includes a thin and light structure, low power consumption, convenient surface temperature, bright display, and expandable storage system. These characteristics, combined with its reasonable price, make this tablet suitable for the experiment. Google AutoDraw was selected as a representative AI. AutoDraw allows drawing objects based on AI principles by converting the user's inaccurate and rough input sketches into stylized drawings. Specifically, AI-based processing of the input generates candidate drawings for the users to choose and replace their original sketches. 3) DRAWING OBJECTS AND DIFFICULTY LEVEL A preliminary experiment [30] was conducted to determine the drawing objects corresponding to words and to confirm the difficulty levels of the objects. The preliminary experiment consisted of the selection of target words by five participants who did not participate in the main experiment. The participants drew the objects corresponding to the proposed words using AutoDraw. The success of the task was determined from the correct recognition by the AI application. A total of 50 words were selected using the Quick, Draw! game (Google LLC, Mountain View, CA, USA) from different topics to avoid biasing. The five participants then drew the objects corresponding to the words for up to 30 s. 
If the participant and experimenter agreed that the word was mapped onto drawings correctly, it was considered a success. The degree of difficulty of each word was determined by the following approach. A scale from 1 to 10 was used for the assessment by the participants, with 1 representing the minimum difficulty level and 10 the maximum. The success or failure of the tasks was assigned scores of 0.5 and 1, respectively, which were multiplied by the score of each participant. For example, the final score for ''blueberry'' was retrieved using the equation 0.5 × 1 + 1 × 10 + 0.5 × 7 + 1 × 7 + 0.5 × 5 = 23.5, where each term corresponds to one participant, with the left factor being the success/failure score and the right the subjective score. The final scores allowed the classification of the 50 words into low (score range of 2.5 to 16.5), moderate (score range of 17 to 33), and high (score range of 33.5 to 50) difficulties. After classification, 18 words were selected from the 50 words to avoid redundancy (such as that between ''home lamp'' and ''street lamp''), words with varying interpretations according to cultural norms, and conflicting, albeit correct, sketches of parts of larger objects. These words, listed in Table 1, were classified according to their difficulty and used to conduct the main experiment. 4) MEASURES An Empatica E4 wristband (EDA sensor) was used for the physiological signal collection in this experiment. The wristband [32] is a wearable and wireless device designed for comfortable, continuous, and real-time data acquisition in daily life. Data from this sensor were used as an objective measure with a sampling rate of 4 Hz throughout the tasks. In this study, for the physiological EDA signals, the features proposed in [21] and the amplitude and duration calculated from signal peaks and valleys were used. The signal feature extraction process yields the following EDA characteristics of duration (OD) and amplitude (OM): the minimum (ODMin and OMMin), maximum (ODMax and OMMax), mean (ODMean and OMMean), standard deviation (ODstdev and OMstdev), summation (ODsum and OMsum), and the number of occurrences of duration and amplitude (ODN and OMN). 5) EXPERIMENTAL PROCEDURE The 18 words (Table 1) were selected for the 18 participants to sketch in AutoDraw. The order of the selected words was arranged using a Latin square design. Each participant was then asked to sketch the object corresponding to the selected word. The words were not shown to the participant in advance. While drawing, the experimenter checked the success/failure, and the physiological EDA signal was recorded using the Empatica E4 wristband. B. EXPERIMENT 2: ENGLISH PROOFREADING SOFTWARE USING AI 1) PARTICIPANTS A total of 19 native English speakers (10 females, 9 males) participated in the experiment, aged 18–82 years (mean = 33.6 years; SD = 18.0). One participant's results were excluded due to an error in recording the EDA signal (the data showed zero). The participants had at least two years of experience in using AutoCorrect in Microsoft Word. 2) APPARATUS A program previously developed using Visual Studio C# (Visual Studio 2015, Microsoft Co., USA) was used to conduct the experiment [10], [33]. The program included four different auto-proofreading sessions (i.e., sessions A, B, C, and D) [9]: Session A: A reliable auto-proofreading condition indicating a grammatical error with an underline and without providing a suggestion (word).
Session B: A reliable auto-proofreading condition with a correct suggestion. Session C: An unreliable auto-proofreading condition indicating a correct word with an underline and without providing a suggestion. Session D: An unreliable auto-proofreading condition indicating a correct word with an underline and providing an incorrect suggestion. Sentences used for the proofreading tasks were selected from online sentence completion test sets, for example, the Scholastic Aptitude Test (SAT) for the easy level and the Graduate Record Examinations (GRE) for the difficult level. A total of 34 sentences, i.e., 17 each for the easy and difficult levels, were chosen based on their readability scores, which were measured using the Readability Test Tool. The training session contained 4 sentences, the manual proofreading session contained 10 sentences, and 20 sentences were used across the 4 auto-proofreading sessions (sessions A, B, C, and D). A 13.5-inch laptop (Q524UQ, AsusTek Computer Inc., USA) with a screen resolution of 1920 × 1080 was used for the experiment. The font type was Times New Roman, and the size was between approximately 14 and 16 points. 3) MEASURES An Empatica E4 wristband (EDA sensor) was used for the physiological signal collection in our experiment. In this experiment, the aforementioned features proposed in the study [21] were also applied. The room temperature was controlled at approximately 22 °C to block the effect of temperature on the skin conductivity. An example of data collected from this sensor throughout the tasks performed by one of the participants is shown in Figure 2. In Figure 2, the difference between the reliable and unreliable sessions is indicated by a dotted line. The EDA values during the reliable experimental session were lower in comparison with those during the unreliable session. This is preliminary evidence that an unreliable system is associated with an increased stress level. 4) EXPERIMENTAL PROCEDURE The experiment was divided into three stages: preparation, practice, and the main experiment. During the preparation stage, an Empatica E4 wristband was attached to each participant's wrist to measure the EDA signal, and the experimental procedure was described. A 2-min rest period was applied before starting the practice stage. The starting time and the time for each task were recorded to synchronize with the measured EDA signals. To familiarize the participants with the proofreading system, a practice stage was conducted, during which the users performed the proofreading tasks quickly and correctly. To increase the stress levels during the proofreading tasks, each sentence had to be corrected within 20 s. If the sentence was not completed within the time limit, the program would move automatically to the next sentence. Next, after a break, the manual proofreading session for the 10 sentences was conducted without an automated proofreading system. After the manual proofreading session, the participant had a 2-min rest period before starting the main experiment. During the main experiment stage, each participant was randomly assigned to one of the four sessions. During each session, the participant was asked to complete a set of five sentences as quickly and correctly as possible. The participants were asked to complete a total of 20 sentences, randomly separated into 4 sequential sets; perceived trust was measured at the end of each set. A short break period was included between the sets to observe a change in the physiological response.
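As a rough sketch of how the duration (OD) and amplitude (OM) features described above can be obtained from a raw 4 Hz EDA trace, the following Python code detects response peaks and the preceding valleys and summarizes them. It is a simplified reconstruction for illustration, not the exact pipeline of [21]; the peak-detection settings are our assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def eda_features(signal, fs=4.0):
    """Summarize an EDA trace by response rise duration (OD) and amplitude (OM).

    signal: 1-D array of skin conductance samples, recorded at fs Hz.
    Returns min/max/mean/stdev/sum/count for OD and OM.
    """
    signal = np.asarray(signal, dtype=float)
    peaks, _ = find_peaks(signal)        # local maxima (response peaks)
    valleys, _ = find_peaks(-signal)     # local minima (response onsets)

    durations, amplitudes = [], []
    for p in peaks:
        prior = valleys[valleys < p]
        if prior.size == 0:
            continue
        v = prior[-1]                             # nearest valley before the peak
        durations.append((p - v) / fs)            # rise time in seconds
        amplitudes.append(signal[p] - signal[v])  # conductance rise

    if not durations:                    # no detectable responses in this segment
        return {}

    def stats(values, prefix):
        x = np.asarray(values)
        return {prefix + "Min": x.min(), prefix + "Max": x.max(),
                prefix + "Mean": x.mean(), prefix + "stdev": x.std(),
                prefix + "sum": x.sum(), prefix + "N": x.size}

    return {**stats(durations, "OD"), **stats(amplitudes, "OM")}
```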
III. ANALYSIS A. EXPERIMENT 1: DRAWING SOFTWARE USING AI Data analysis for the drawing AI software used in experiment 1 was based on the assumption that if the drawing task was completed successfully by the participant, then the participant has trust and a low stress level (event ''0''). A lack of trust with a high stress level (event ''1'') corresponds to a failed drawing task. The analysis method was developed using a second-order polynomial logistic regression model. The dependent variable was the failure/success of the drawing AI software in recognizing the drawn word. The independent variables were linear terms of the extracted EDA features, products of their pairs, and squared terms of the EDA features. A second-degree model was developed to find a more effective combination of predictors and increase the model performance, because the first-order model showed a low accuracy of approximately 50%. At the same time, the degree of the regression model was not increased further owing to the risk of overloading the equation with a large number of terms. Finally, 36 variables were included in the equation. The research framework that explains the entire study system, including the analysis, is illustrated in Figure 3. Figure 3 divides the complete research framework into three systems with their respective elements. The first system, AI in the experiments, consists of the proofreading and drawing AI software. The second system, data collection and extraction, includes the EDA signal with the extracted features. The third system, the analysis, comprises the machine learning method of binary logistic regression, wherein AI reliability and success were the dependent variables and the EDA features were the independent variables. B. ANALYSIS OF EXPERIMENT 2 DATA Data analysis for the English proofreading AI software used in experiment 2 was based on the assumption that a reliable auto-proofreading condition (with a correct suggestion) corresponds to a low level of stress with trust (''0''), whereas a lack of trust with a high stress level (''1'') corresponds to non-reliable auto-proofreading conditions (with errors in the suggestion). The predictors were only the linear terms of the EDA features. The analysis method was developed using a first-order logistic regression model. The dependent variable was the reliable/unreliable proofreading condition. The independent variables were only the linear terms of the extracted EDA features. In this case, the linear model was sufficient to achieve a satisfactory balance between model performance and the number of variables. Finally, seven variables were used in the equation. A schematic of the analysis process for both models is shown in Figure 4. As shown in Figure 4, the analysis process consists of model development and cross-validation stages. The data collection step describes the collected data and information during both experiments. The model development step introduces the models obtained with dependent and independent variables (a detailed description is given above in section 2.3). ''Extracted variables'' show the number of predictors obtained in each model. The cross-validation step describes which parts of the cross-validated dataset the developed model was applied to. Owing to the ratio between the number of variables extracted in the drawing AI model and the number of cases in the cross-validated proofreading dataset, that model equation could only be applied to the full cross-validated dataset.
For halves and quarters of the cross-validated dataset, this was not possible because the number of variables obtained exceeded the number of cases in the dataset. The smaller number of variables in the proofreading AI model made it possible to apply that model to all sections of the cross-validated drawing dataset. The ''Results'' show the data extracted after cross-validation. The model performance metrics obtained include the accuracy, sensitivity, specificity, and positive predictive value. A. EXPERIMENT 1: DRAWING SOFTWARE USING AI During the drawing AI software experiment, stress classification was performed on a binary scale with low and high levels based on the detected physiological responses from the measured EDA signal. During the performance of the task, the EDA signal was directly measured from a wristband sensor attached to the participant. In the case of successful task performance, it was assumed that trust existed along with a low stress level (this event was coded as ''0''). Otherwise, a task failure caused a lack of trust with a high stress level (this event was coded as ''1''). The second-order polynomial logistic regression equation of 36 terms with the obtained coefficients is given as equation (1). In equation (1), Y is the dependent variable representing the user stress level through the failure or success of the task. The independent variables X1–X35 are squares and pairwise products of the extracted EDA data features (listed in Appendix A). The variables in the obtained model were significant, with p-values not exceeding 0.04; the only exceptions were two insignificant variables, OMMean² and the constant, with p-values of 0.1. The model performance for the drawing AI software experiment is shown in Table 2. The developed model was applied and cross-validated using the full dataset of the second (proofreading) experiment described above. Table 2 shows that the accuracy, specificity, and sensitivity of the models ranged between 67% and 82%, whereas the PPV ranged between 80% and 86% for the original model developed (''Original'') and the cross-validated model (''Cross-Val''). The goodness of fit was evaluated using Cox and Snell pseudo-R-squares, with values between 0.2086 and 0.2125. In general, the model based on an AI failure event has a satisfactory performance for both datasets. B. ENGLISH PROOFREADING SOFTWARE WITH AI In the English proofreading AI experiment, stress classification was also binary (low vs. high) and based on the EDA signal, which was measured by the wristband sensor attached to the participant during the performance of the task. The proposed hypothesis is that a reliable auto-proofreading condition (with a correct suggestion) corresponds to a low level of stress with existing trust (the event was coded as ''0''), whereas non-reliable conditions (with errors in the suggestion) correspond to a lack of trust with a high stress level (the event was coded as ''1''). The coefficients of the regression model explaining the reliability of the English proofreading AI software as a dependent variable are shown in Table 3. The model performance, along with the cross-validated results, is shown in Table 4. This model was cross-validated by applying the obtained coefficients to the dataset from the first (AI drawing) experiment presented above. In this case, it was possible to cross-validate the model on different sections of the drawing experiment dataset (full, half, quarter) because of the balanced numbers of predictors and validating cases. The basic and validated confusion matrices obtained for the full datasets are shown in Figures 7 and 8.
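The modeling step described above, a second-order polynomial logistic regression over the extracted EDA features for the drawing experiment and a first-order model for the proofreading experiment, can be sketched as follows. This is a schematic reconstruction in Python, not the authors' code: the exact term selection (36 and 7 variables) and the reported coefficient values are not reproduced here.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# X: one row per trial, columns = extracted EDA features (ODMin, ..., OMN)
# y: 0 = success/reliable (low stress, trust), 1 = failure/unreliable (high stress)
def fit_stress_model(X, y, degree=2):
    poly = PolynomialFeatures(degree=degree, include_bias=False)  # linear, squared, pairwise terms
    model = LogisticRegression(max_iter=5000).fit(poly.fit_transform(X), y)
    return model, poly

def performance(model, poly, X, y):
    """Accuracy, sensitivity, specificity, and PPV from the confusion matrix."""
    y_hat = model.predict(poly.transform(X))
    tn, fp, fn, tp = confusion_matrix(y, y_hat).ravel()
    return {"accuracy": (tp + tn) / (tp + tn + fp + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp)}

# Cross-validation as described above amounts to fitting on one experiment's
# dataset and calling performance() with the other experiment's data.
```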
In Table 4, ''Original'' is the result of the developed basic model, ''C/V Full'' indicates the results for the cross-validated full set, ''C/V_Half1'' shows the first half of the cross-validated set, ''C/V_Half2'' indicates the second half of the cross-validated set, and ''C/V_Quarter1-4'' shows the results for the cross-validated set quarters 1 to 4, respectively. For the originally developed model, the accuracy is over 70%, with the other characteristics between 69% and 75%. The goodness of fit was evaluated using the Cox and Snell pseudo-R-square, with a value of 0.214. For the cross-validated sets, the accuracy varies between 56% and 60%, with the other characteristics between 44% and 97%. Based on the results obtained, the original model achieves a satisfactory performance. A. PERFORMANCE OF MODELS The present study proposed binary classification models of the stress levels (high and low) of AI operators during system failure. Correct operation and reliability of the AI system corresponded to low stress and the presence of trust in AI. Otherwise, if the AI system demonstrated failure or unreliability, mistrust and a high stress level occurred. The developed logistic models show satisfactory accuracy, sensitivity/specificity, and positive predictive values (PPVs) of approximately 60%-80% on average for both models. In particular, the general PPV results show high values of approximately 90% or more. This indicates the high ability of the developed models to detect the lack of trust and the high stress level of operators while using AI systems. The goodness of model fit is assessed using various measures [34]. In our study, Cox and Snell pseudo-R-squares were used to evaluate the goodness of fit. The Cox and Snell pseudo-R² is unable to reach a value of ''1'' even for a perfect model [34]. The results obtained show that the original models developed explain between 0.20 and 0.22 of the variance at low and high stress levels. In previous studies, there is no consensus on how to interpret the values of pseudo-R-squares, but some sources [35], [36] have evaluated a Cox & Snell value of 0.2 or above as satisfactory and acceptable. Previous studies used the EDA signal as the base for emotional recognition and reported the following results. A method to detect human emotions using EDA data in a word remember/recall task was proposed [22]. The authors used the positive and negative affect schedule method and a support vector machine to classify the EDA response, with an accuracy of approximately 75.65%. Study [37] used various physiological responses, including EDA signals, to study cognitive and mathematical task performance. The accuracy of the models was between 75% and 95%, depending on the type of physiological signal. In [23], the authors performed the Stroop test, the Trier social stress test, and the Trier mental challenge test to evaluate emotions using EDA and speech features. For the EDA data, the best accuracy was approximately 70%. Another study [38] used a driver database and main object analysis to select the appropriate features for the identification of the stressful state, which resulted in an accuracy of 78.94%. An accuracy of approximately 89% was achieved [39] using the support vector machine approach to detect the stress state of participants in three types of tasks: the Stroop color-word test, an arithmetic counting test, and talking about stressful experiences or events.
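For reference, the Cox and Snell pseudo-R² used above as the goodness-of-fit measure is computed from the log-likelihoods of the fitted model and the intercept-only model; the helper below states the standard formula (the study reports only the resulting values, so applying the formula this way is our illustration).

```python
import numpy as np

def cox_snell_r2(loglik_model, loglik_null, n):
    """Cox & Snell pseudo-R^2 = 1 - exp(2 * (LL_null - LL_model) / n),
    where LL_null is the log-likelihood of the intercept-only model,
    LL_model is that of the fitted model, and n is the number of observations."""
    return 1.0 - np.exp(2.0 * (loglik_null - loglik_model) / n)
```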
During the comparison of previous and present methods, it was proposed that the EDA signal shows potential to recognize human emotions and stress levels. Average accuracy was between 70 and 90%, and the performance of the proposed model was within this range with scope for future improvement and development. Based on the results obtained, the original stress models based on AI failure show satisfactory performance. This is connected with the fact that stress applied in this study is an objective measurement as well as a failure of the AI. This finding supports and expands the previous result of relations between stress and AI failure events, which were provided in studies related to autonomous driverless vehicles with AI tools. Research [40] provides the results of a survey of 1028 randomly selected Americans aged 18 and older. This study reported that 37% of men and 55% of women have anxiety about driverless car safety owing to the possibility of failure, and only 6% of people would put a child alone in a driverless car. Study [41] showed that people have a high level of anxiety when driving autonomous cars with AI systems owing to failure events. Supporters of autonomous driving have declared that AI technologies secure the driving process; however, despite this, consumers are under stress with the idea of being in a car that can break down or fail at any time without their control. AI system operators have psychological roadblocks in using automated technology because of a lack of control and understanding of how the system works, the risk of injury, and the unpredictability of failure moments [42]. Previous studies have shown that stress, anxiety, and AI failure are related to each other. In contrast to existing research, the present study proposed two validated stress models with satisfactory accuracy and performance. The proposed models are significantly different from the previously developed models. First, the combination of the trust concept with real physiological data was conducted in a single model for each individual experiment. Second, the developed models confirmed the relation between AI failure and the emotional state of the AI operator based on the objective measurements of the EDA signal. Third, a majority of the previous research was focused on building models based on subjective user assessments of the perceived characteristics of AI systems. In turn, the present study did not use subjective assessments but provided a further perspective on the combination of subjective and objective measurements of the emotional state of AI operators and users. The results obtained confirm previous research and provide new knowledge regarding sensors, AI/automation engineering, and physiological science for researchers, engineers, and designers. B. RELATIONSHIPS AMONG EMOTIONAL STATE, PERCEPTION, AND AI OPERATION OF USERS Both models developed in the present study have a satisfactory classification ability and demonstrate the mutual connection between user stress levels, AI failure, and system reliability. It was found that AI failure and unreliable AI systems have a positive influence on the stress of the users. The general assumption of this study is that an AI failure and unreliability cause increased stress based on a low trust in AI system operations owing to unpredictable AI reactions. 
A connection between trust and stress was described in previous research [9], [10], where it was found that AI mistakes and unreliability cause a higher cognitive workload and mental stress with decreased user trust. In other words, if the AI system fails or is unreliable, then the operator stress increases, and the trust level decreases. In the present study, the stress of the users was confirmed if the AI drawing software does not recognize the user's sketch or if the proofreading AI proposes an unsatisfactory suggestion, which corresponds to a low trust level. The present research expands and novelizes previous studies, which also found a general connection between user perception, trust, emotional state, and AI or automation system failures. In addition, [43] and [44] show the negative effect of automation errors on user trust. If the error occurs earlier, then the negative effect on trust is stronger. It was also found that the first impression of the system is the most important and forms the foundation of trust. Study [45] showed that if the operating system fails quickly and easily, it undermines the user's trust, and the operator's subsequent impression of the system will be ''untrustworthy.'' Based on this, one of the important problems for AI system designers is the prevention of early and easy errors by improving the feedback connection between themselves and users after a failure event. Examples were demonstrated in [46] on a collision warning system for drivers. Driver trust was significantly lower if the system gave a warning after pressing the brake pedal than before because of less benefit gained by users. The study [47] connected human trust, stress, and physiological signals while using a computer interface with a VR tool. Electroencephalography (EEG), EDA, and heart rate variability (HRV) were used to measure trust with a virtual agent and find the connection between trust and stress. It was found that in low cognitive load tasks, EEG data reflect the trust in VR, and the cognitive load (stress or anxiety) of the user is reduced when the VR is accurate. The routine performance of automated systems causes a high level of user trust [48]. Trust becomes significant if the user does not know how to prevent the occurrence of AI system failures. This uncertainty influences the workload and further error management of the operator, particularly under time pressure conditions. A few principles were proposed to reduce human stress and increase trust during automated car driving [49]. One of the important factors in trust is the ability of the system to provide the operator with information about what the vehicle senses when a failure occurs. The interface should help predict the failure and its effects to provide the best performance. The information should be provided as quickly as possible so that users can react proactively, and in this case, the trust in the AI system increases. According to [50], stress levels and user stress responses are different and depend on the personal characteristics of the users in video gaming task performances. Users with higher experience in video gaming have lower distress and better performance. This indicates that an AI system operation causes lower stress and workload to experienced users regardless of the failure event. Trust positively affects human satisfaction and is negatively related to stress [51]. The present findings confirm mutual relationships among the user's stress, anxiety, trust, and AI operation with failure events. 
The developed models demonstrate the new sets of variables capable of classifying user stress during a failed AI operation. C. LIMITATION OF THIS STUDY AND FUTURE RESEARCH There are few limitations of this study related to the AI software experiment. First, the developed model was based on an AI success/failure event, which is not an entirely independent indicator because it is connected with the characteristics of the participants, such as their drawing skills and personal experience. These personal features cannot be predicted or controlled during the experiments. Another limitation is the short time to complete the drawing task and accordingly to precisely determine the psychophysiological signal, which could lead to mixed results of EDA detection in certain cases. The time condition was also impossible to control because the drawing time was strictly provided by AI software. Additionally, for the proofreading AI software, experimental results may vary depending on the native language and literacy of the participant. In this regard, the choice of participants for the proofreading AI experiment was limited to native English speakers. Another limitation is the difference in the number of fully analyzed cases between the two experiments. AI drawing software experiments provide 4-times the number of cases than AI proofreading experiments. This could have influenced the results of the cross-validation, particularly in the case of the polynomial model, owing to the imbalance between the numbers of analyzed cases and predictors. In the future, the presented research can be supplemented and expanded with a greater variety of AI tools. Future AI experiments will also be based on more versatile software that does not depend on the talents and special characteristics of the participants (e.g., talent for drawing, singing, native language, or literacy). The developed models can be improved by considering additional variables; for example, we can use physiological signal features together with another type of stress assessment tool. The stress levels presented were divided into ''low'' and ''high,'' and these categories can be extended by including a ''middle level'' as an example. The applied methods of data analysis and event prediction can be expanded by applying machine learning methods such as random forest and support vector machines. VI. CONCLUSION In the present paper, two cross-validated models were proposed for stress level classification (high and low) based on the physiological response (EDA), system reliability, and AI system failure. The original logistic models developed show a satisfactory performance and goodness of fit. It was found that the EDA signal features of the users can be reasonably accurate predictors for stress level classification in an AI system failure and reliable/unreliable AI system operation. The following conclusions were drawn: 1) The originally developed models achieve a satisfactory classification ability and acceptable goodness of fit and demonstrate the mutual connection among stress, AI failure, and unreliability. 2) Both stress models applied to the original experimental datasets show a satisfactory performance with an accuracy of approximately 70%. 3) Relationships among EDA signal features, stress, and AI system trustworthiness during an AI operation were found. 4) The combination of EDA features as polynomial and linear terms can predict the human stress levels during a reliable/unreliable AI system operation and successful or failed task performance. 
The results obtained can be used for theoretical and practical applications. The study provides new knowledge for sensor technology, AI/automation engineering, and physiological science. The developed models and results will help adapt AI systems to the psychological state of the operator and reduce the stress and fatigue of users during interaction with the system. The insights from this study could help AI developers improve the attractiveness of their products among users and increase trust in their technologies throughout society. Designers can also incorporate our findings into interactive systems with AI elements such as mobile phones and apps, wristbands, wristwatches, tablets, and laptops.
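To make the modelling approach summarized in the conclusions concrete, the following sketch shows one way such a cross-validated logistic stress classifier could be set up. It is a minimal illustration, not the authors' code: the feature names, the synthetic data, and the scikit-learn pipeline are assumptions made purely for demonstration.

```python
# Minimal sketch (not the authors' implementation): a cross-validated logistic model
# classifying stress level ("low"/"high") from EDA signal features, using both linear
# and polynomial feature terms as described above. All feature names and data are
# illustrative placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120
X = np.column_stack([
    rng.normal(0.5, 0.2, n),   # hypothetical SCR peak amplitude (muS)
    rng.poisson(3, n),         # hypothetical number of SCR peaks
    rng.normal(2.0, 0.5, n),   # hypothetical tonic EDA level (muS)
    rng.integers(0, 2, n),     # AI failure event (0/1)
    rng.integers(0, 2, n),     # unreliable operation (0/1)
])
y = rng.integers(0, 2, n)      # stress label: 0 = low, 1 = high (synthetic placeholder)

model = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),  # linear + polynomial terms
    StandardScaler(),
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```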
Supersingular Curves You Can Trust

Generating a supersingular elliptic curve such that nobody knows its endomorphism ring is a notoriously hard task, despite several isogeny-based protocols relying on such an object. A trusted setup is often proposed as a workaround, but several aspects remain unclear. In this work, we develop the tools necessary to practically run such a distributed trusted-setup ceremony. Our key contribution is the first statistically zero-knowledge proof of isogeny knowledge that is compatible with any base field. To prove statistical ZK, we introduce isogeny graphs with Borel level structure and prove they have the Ramanujan property. Then, we analyze the security of a distributed trusted-setup protocol based on our ZK proof in the simplified universal composability framework. Lastly, we develop an optimized implementation of the ZK proof, and we propose a strategy to concretely deploy the trusted-setup protocol.

Introduction

Be it foundationally or for efficiency, most of isogeny-based cryptography is built upon supersingular elliptic curves [13,36,11,23,31,22,17]. At the heart of it lies the supersingular isogeny graph: a graph whose vertices represent supersingular elliptic curves (up to isomorphism) and whose edges represent isogenies (up to isomorphism) of some fixed small prime degree between them. A foundational hard problem for isogeny-based cryptography is then: given two supersingular elliptic curves, find a path in the supersingular isogeny graph connecting them.

An endomorphism is an isogeny from a curve E to itself, and the collection of endomorphisms forms the endomorphism ring End(E). In recent years, the connection between finding isogeny paths and computing endomorphism rings of supersingular curves has become increasingly important [29,26,54,53]. It is now established that, assuming the generalised Riemann hypothesis, there exist probabilistic polynomial-time algorithms for the following two problems:

1. Given supersingular elliptic curves E_0, E_1 along with descriptions of their endomorphism rings, compute an isogeny path E_0 → E_1;
2. Given a supersingular elliptic curve E_0 along with a description of its endomorphism ring, and given an isogeny path E_0 → E_1, compute a description of the endomorphism ring of E_1.

These algorithms, and variants thereof, have successfully been used both constructively [31,22,17] and for cryptanalysis [29,44,46,26,23,28]. Without the additional information above, computing the endomorphism ring of an arbitrary supersingular curve remains a hard problem, both for classical and quantum computers. Given the importance of this problem, it is natural to ask whether it is possible to sample supersingular curves such that computing their endomorphism ring is a hard problem, crucially, even for the party who does the sampling. We shall call these objects Supersingular Elliptic Curves of Unknown Endomorphism Ring, or Secuers for short.

Applications. Generating a Secuer has turned out to be a delicate task, and no such curve has ever been generated. Yet, several isogeny-based schemes can only be instantiated with a Secuer. This is the case, for example, of isogeny-based verifiable delay functions [23] and delay encryption [8]. The so-called CGL hash function based on supersingular curves [13] has been shown to be broken by knowledge of the endomorphism ring [26], and one possible fix is to instantiate it with a Secuer. Other protocols which require a Secuer include hash proof systems, dual mode PKE [1], oblivious transfer [38], and commitment schemes [49].
Contributions. We analyze and put into practice a protocol for distributed generation of Secuers. Our main technical contribution is a key ingredient of the protocol: a new proof of isogeny knowledge (two curves E_0 and E_1 being public, a party wishes to prove that they know an isogeny E_0 → E_1 without revealing it). Our proof is similar to the SIDH proof of knowledge [20,18], but extends it in a way that makes it compatible with any base field and any walk length, and has provable statistical zero-knowledge (unlike any previous proof of isogeny knowledge). In particular, its statistical security makes it fully immune to the recent attacks [10,41,47].

To prove statistical security, we analyze supersingular ℓ-isogeny graphs with level structure, a generalization of isogeny graphs that was recently considered in [22,3]. We prove that these graphs, like classic isogeny graphs, possess the Ramanujan property, a fact that is of independent interest. Using this property, we analyze the mixing behavior of random walks, which lets us give very precise parameters to instantiate the proof of knowledge at any given security level.

To show that the resulting protocol is practical, we implement it on top of Microsoft's SIDH library and benchmark it for each of the standard SIKE primes [35]. We must stress that SIDH-style primes are possibly the most favorable to our protocol in terms of practical efficiency.

Finally, we sketch a roadmap to run the distributed generation protocol for the SIKE primes in a real-world setting with hundreds of participants.

Limitations. We must point out that our new proof of knowledge is not well adapted to a secure distributed generation protocol in the case where one wants to generate a Secuer defined over a prime field F_p, instead of F_{p^2}, such as in [1,38]. Different proofs of knowledge [19,5] could be plugged into the distributed protocol for the F_p case, but their practical usability is dubious.

Generating a Secuer

The cornerstone of isogeny-based cryptography is the endomorphism ring problem: if it could be solved efficiently, then all of supersingular isogeny-based cryptography would be broken [29,26,53], leaving only ordinary isogeny-based cryptography [16,50,21] standing.

Definition 1 (Endomorphism ring problem). Given a supersingular curve E/F_{p^2}, compute its endomorphism ring End(E). That is, compute an integral basis for a maximal order O of the quaternion algebra ramified at p and ∞, as well as an explicit isomorphism O ≃ End(E).

For any p, there exists a polynomially sized subset of all supersingular curves for which the endomorphism ring can be computed in polynomial time [12,40], but the problem is believed to be exponentially hard in general, even for quantum computers. A related problem, commonly encountered in isogeny protocols, is finding paths in supersingular isogeny graphs.

Definition 2 (Isogeny ℓ-walk problem). Given two supersingular curves E, E′/F_{p^2} of the same order, and a small prime ℓ, find a walk from E to E′ in the ℓ-isogeny graph.

Such walks are always guaranteed to exist as soon as they are allowed to have length in O(log(p)) [42,45,37,13]. The two problems are known to be polynomial-time equivalent, assuming GRH [54]. Indeed, given End(E) and End(E′), it is easy to compute a path E → E′. Reciprocally, given End(E) and a path E → E′, it is easy to compute End(E′); and, by random self-reducibility, we can always assume that one of End(E) or End(E′) is known.
Our goal is to generate a Secuer: a curve for which the endomorphism ring problem is hard, and consequently one for which it is hard to find a path to any other given curve.

What does not work. The supersingular elliptic curves over a finite field k of characteristic p are those such that #E(k) ≡ 1 (mod p). Any supersingular curve is isomorphic to one defined over a field with p^2 elements; thus, without loss of generality, we are only interested in supersingular curves defined over F_{p^2}. Among the roughly p^2 isomorphism classes of elliptic curves over F_{p^2}, only ≈ p/12 correspond to supersingular curves.

The standard way to construct supersingular curves is to start from a curve with complex multiplication over a number field, and then reduce modulo p. Complex multiplication elliptic curves have supersingular reduction modulo 50% of the primes, thus this technique quickly produces supersingular curves for almost all primes. For example, the curve y^2 = x^3 + x, which has complex multiplication by the ring Z[i] of Gaussian integers, is supersingular modulo p if and only if p ≡ 3 (mod 4). Most isogeny-based protocols are instantiated using precisely this curve as starting point. These curves are not Secuers, though, because from the information on complex multiplication one can compute the endomorphism ring in polynomial time [12,40].

As p grows, the curves with computable complex multiplication form only a negligible fraction of all supersingular curves in characteristic p, so we may still hope to get a Secuer if we can sample a supersingular curve at random from the whole set. The natural way to do so is to start from a well known supersingular curve, e.g. E_0 : y^2 = x^3 + x, take a random walk E_0 → E_1 in the isogeny graph, and then select the arrival curve E_1. But, by virtue of the reductions mentioned above, any E_1 constructed this way cannot be called a Secuer either. Several other techniques have been considered for generating Secuers, however all attempts have failed so far [6,43].

Distributed generation of Secuers. An obvious solution that has been proposed for schemes that need a Secuer is to rely on a trusted party to start from a special curve E_0 and to perform an isogeny walk to a random curve E_1. Although E_1 is not a Secuer, if the trusted party keeps the walk E_0 → E_1 secret, no one else will be able to compute End(E_1).

Of course, relying on a trusted third party is undesirable. The natural next step is to turn this idea into a distributed protocol with t parties generating a sequence of walks E_0 → E_1 → · · · → E_t. First, suppose that the sequence was generated honestly: the i-th party indeed generated a random isogeny from the previous curve E_{i−1} to a new curve E_i. Then it is sufficient for a single party to honestly discard their isogeny for no path from E_0 to E_t to be known by anyone. Then, E_t is a Secuer for all practical purposes.

To make this protocol secure against active adversaries, an additional ingredient is needed. As it is, the last party could cheat as follows: instead of generating an isogeny E_{t−1} → E_t, they could reboot the chain and generate an isogeny E_0 → E_t. They could then compute the endomorphism ring of E_t. If only the curves E_i along the path are revealed, it is impossible to detect such misbehavior. To prevent this, each party needs to prove that they know their component of the walk: an isogeny E_{i−1} → E_i (as first discussed in [8]). To this end, we develop a statistically zero-knowledge proof of isogeny knowledge.
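As a toy illustration of the criterion recalled above (not part of any protocol), the following script brute-force counts the points of E : y^2 = x^3 + x over F_p for small primes p ≥ 5 and checks that E is supersingular, i.e. #E(F_p) = p + 1, exactly when p ≡ 3 (mod 4).

```python
# Illustrative check only: brute-force point count of E : y^2 = x^3 + x over F_p,
# verifying that E is supersingular (#E(F_p) = p + 1) precisely when p ≡ 3 (mod 4).
# Not meant for cryptographic sizes.
def count_points(p):
    squares = {}
    for y in range(p):                       # how many y have a given square mod p
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    total = 1                                # the point at infinity
    for x in range(p):
        rhs = (x * x * x + x) % p
        total += squares.get(rhs, 0)
    return total

for p in [5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59]:
    n = count_points(p)
    supersingular = (n == p + 1)
    assert supersingular == (p % 4 == 3)
    print(f"p = {p:2d}  #E(F_p) = {n:3d}  supersingular: {supersingular}")
```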
Proof of isogeny knowledge

State of the art. Protocols to prove knowledge of an isogeny have been mostly studied for signatures. The first such protocol is the SIDH-based proof of knowledge of [20]. Its security proof was found to be flawed and then fixed, either by changing the assumptions [32] or by changing the protocol [18]. However, these protocols are now fully broken by the recent polynomial-time attacks on SIDH-like protocols [10,41,47].

CSIDH-based proofs of knowledge were first introduced in [19], and then improved in [5] for the parameter set CSIDH-512. These are limited to isogeny walks between curves defined over a prime field F_p, and tend to be prohibitively slow outside of the specially prepared parameter set CSIDH-512.

Finally, De Feo and Burdges propose an efficient proof of knowledge tailored to finite fields used in delay protocols [8]. However, the soundness of this protocol is only conjectural, and, being based on pairing assumptions, it is broken by quantum computers.

In summary, no general purpose, quantum-safe, zero-knowledge proof of knowledge of an isogeny walk between supersingular curves defined over F_{p^2} exists in previous literature.

Overview of our method. Our main technical contribution is a new proof of knowledge that ticks all the boxes above: it is compatible with any base field and any walk length, it has provable statistical zero-knowledge, and it is practical, as illustrated by our implementation. The idea is the following. Two elliptic curves E_0 and E_1 being public, some party, the prover, wishes to convince the verifier that they know an isogeny ϕ : E_0 → E_1 (of degree, say, 2^m, large enough so that such an isogeny is guaranteed to exist). First, the prover secretly generates a random isogeny walk ψ : E_0 → E_2 of degree, say, 3^n. Defining ϕ′ with kernel ψ(ker(ϕ)), and ψ′ with kernel ϕ(ker(ψ)), one obtains a commutative square (1) with corners E_0, E_1, E_2 and E_3.

Now, the prover publishes a hiding and binding commitment to E_2 and E_3. The verifier may then ask the prover to reveal one of the three isogenies ψ, ϕ′, or ψ′, by drawing a random chall ∈ {−1, 0, 1} (and to open the commitment(s) corresponding to the relevant endpoints). For the prover to succeed with overwhelming probability, they must know all three answers, so they must know an isogeny from E_0 to E_1: the composition of ψ, ϕ′ and the dual of ψ′. This is the idea behind the soundness of the protocol.
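The three-move flow just described can be summarised schematically. The sketch below is structural only: walk and push are caller-supplied placeholders for the isogeny computations, and a plain hash-based commitment stands in for the statistically hiding scheme the protocol actually requires, so this illustrates the commit/challenge/response shape of one repetition rather than an actual implementation.

```python
# Structural sketch of one repetition of the Sigma-protocol described above.
# `walk` and `push` are placeholders for the isogeny arithmetic; the hash-based
# commitment here is only computationally hiding and stands in for the
# statistically hiding scheme required by the protocol.
import secrets, hashlib

def commit(value):
    r = secrets.token_bytes(32)                      # commitment randomness
    return hashlib.sha256(r + bytes(value)).digest(), r

def prover_commit(E0, E1, phi, walk, push):
    psi, E2 = walk(E0)            # secret 3^n-walk  psi : E0 -> E2
    phi_p, E3 = push(phi, psi)    # phi' : E2 -> E3, kernel psi(ker phi)
    psi_p, _ = push(psi, phi)     # psi' : E1 -> E3, kernel phi(ker psi)
    com2, r2 = commit(E2)
    com3, r3 = commit(E3)
    state = (psi, phi_p, psi_p, E2, r2, E3, r3)
    return (com2, com3), state

def prover_respond(state, chall):
    psi, phi_p, psi_p, E2, r2, E3, r3 = state
    if chall == -1:
        return (psi, E2, r2)              # reveal the left side
    if chall == 0:
        return (phi_p, E2, r2, E3, r3)    # reveal the bottom side
    return (psi_p, E3, r3)                # reveal the right side

# The verifier recomputes the endpoints of the revealed isogeny, checks the
# commitment openings against com2/com3, and accepts or rejects accordingly.
```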
So far, this protocol is more or less folklore and superficially similar to [18, §5.3]. But does it leak any information? Whereas previous protocols only achieved computational zero-knowledge, we provide a tweak that achieves statistical zero-knowledge: there is a simulator producing transcripts that are statistically indistinguishable from a valid run of the protocol. The simulator starts by choosing the challenge chall first, then it generates an isogeny that is statistically indistinguishable from either ψ, ϕ′, or ψ′, according to the value of chall. Simulating ψ (or ψ′) is straightforward: generate a random isogeny walk ψ̃ (or ψ̃′) of degree 3^n from E_0 (or from E_1). The isogeny ψ̃ is a perfect simulation of ψ. Simulating ϕ′ seems trickier. An obvious approach is to first generate a random E_2 (for instance, by simulating ψ : E_0 → E_2), then generate a random walk isogeny ϕ̃′ : E_2 → E_3 of degree 2^m. While this may seem too naive, we in fact prove that when deg(ψ) is large enough, the distribution of ϕ̃′ is statistically close to an honestly generated ϕ′. The key is a proof that the isogeny graph enriched with so-called level structure has rapid mixing properties.

Isogeny graphs with level structure

The isogeny ϕ′ is essentially characterised by its source, E_2, and its kernel ker(ϕ′), a (cyclic) subgroup of order deg(ϕ′). We are thus interested in random variables of the form (E, C), where E is an elliptic curve and C a cyclic subgroup of E, of order some integer d (not divisible by p). We call such a pair (E, C) a level d Borel structure.

The simulator proposed above essentially generates ϕ̃′ as a uniformly random level 2^m Borel structure (E, C) = (E_2, ker(ϕ̃′)). On the other hand, an honestly generated ϕ′ corresponds to a pair (ψ(E_0), ψ(ker ϕ)), where ψ is a uniformly random isogeny walk of degree 3^n. This process corresponds to a random walk of length n in the 3-isogeny graph with level 2^m structure, with starting point (E_0, ker ϕ). We prove the following result (Theorem 3): these isogeny graphs with level structure have the Ramanujan property. As a consequence, we prove that random walks quickly converge to the stationary distribution, so ϕ̃′ and ϕ′ are statistically indistinguishable.

Paper outline. We start in Section 2 with a few technical preliminaries on elliptic curves, isogenies, and proofs of knowledge. Section 3 is dedicated to the proof of Theorem 3; this section can be read independently from the rest. The reader only interested in applications, and willing to accept Theorem 3 (and its consequence on non-backtracking random walks, Theorem 11), can safely skip to the following section. With this theoretical tool at hand, we then describe and analyse the new proof of isogeny knowledge in Section 4. We describe the protocol to generate a Secuer in Section 5, and prove its security. Finally, we report on our implementation in Section 6.

General Notations

We write x ← X to represent that an element x is sampled at random from a set/distribution X. The output x of a deterministic algorithm A is denoted by x = A, and the output x′ of a randomized algorithm A is denoted by x′ ← A. We denote by [a, b] (resp. [a]) the set of integers lying between a and b, both inclusive (resp. the set of integers lying between 1 and a, both inclusive). We refer to λ ∈ N as the security parameter, and denote by poly(λ), polylog(λ) and negl(λ) any generic (unspecified) polynomial, poly-logarithmic or negligible function in λ, respectively. For probability distributions X and Y, we write X ≈ Y if the statistical distance between X and Y is negligible.
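Since the mixing bounds of Section 3 are stated in terms of total variation distance, the following few lines illustrate the standard fact, used repeatedly below, that the total variation (statistical) distance between two finite distributions is half of their L^1 distance. This is an illustrative helper only, not code from the paper.

```python
# Total variation distance between two probability distributions on a finite set,
# computed as half of the L^1 distance (the fact used in the mixing bounds below).
def total_variation(p, q):
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

uniform = {x: 1 / 4 for x in "abcd"}
skewed = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
print(total_variation(uniform, skewed))   # 0.2, i.e. max_A |P(A) - Q(A)|, attained at A = {a, b}
```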
Elliptic curves, isogenies and "SIDH squares"

We assume the reader has some familiarity with elliptic curves and isogenies. Throughout the text, p shall be a prime number, and F_p and F_{p^2} the finite fields with p and p^2 elements respectively. Unless specified otherwise, all elliptic curves will be supersingular and defined over F_{p^2}. We write E[d] for the subgroup of d-torsion points of E over the algebraic closure.

Unless specified otherwise, all isogenies shall be separable. If G is a finite subgroup of E, we write ϕ : E → E/G for the unique (up to post-composition with an isomorphism of E/G) separable isogeny with kernel G. If G is cyclic, we say the isogeny is cyclic. We denote by ϕ̂ the dual isogeny to ϕ. Separable isogenies and their duals can be computed and/or evaluated in time poly(#G) using any of the algorithms in [51,4]; however, in some cases, e.g. when #G contains only small factors, this cost may be lowered to as little as polylog(#G).

Given separable isogenies ϕ : E_0 → E_1 and ψ : E_0 → E_2 of coprime degrees, we obtain the commutative diagram in (1) by defining ϕ′ : E_2 → E_2/ψ(ker(ϕ)) and ψ′ : E_1 → E_1/ϕ(ker(ψ)). Again, E_3 is only defined up to isomorphism. In categorical parlance, this is the pushout of ϕ and ψ, but cryptographers may know it better through its use in the SIDH key exchange. We refer to these commutative diagrams as SIDH squares or SIDH ladders (see Section 4.2 for more details).

Proofs of Knowledge

Our main technical contribution is a Σ-protocol to prove knowledge of an isogeny of given degree between two supersingular elliptic curves. Recall that a Σ-protocol for an NP-language L is a public-coin three-move interactive proof system consisting of two parties: a verifier and a prover. The prover is given a witness w for an element x ∈ L; their goal is to convince the verifier that they know w.

Definition 4 (Σ-protocol). A Σ-protocol Π_Σ for a family of relations {R}_λ parameterized by the security parameter λ consists of PPT algorithms (P_1, P_2, V), where V is deterministic and we assume P_1, P_2 share state. The protocol proceeds as follows:
1. The prover, on input (x, w) ∈ R, returns a commitment com ← P_1(x, w), which is sent to the verifier.
2. The verifier flips λ coins and sends the result to the prover.
3. Calling chall the message received from the verifier, the prover runs resp ← P_2(chall) and returns resp to the verifier.
4. The verifier runs V(x, com, chall, resp) and outputs a bit.

A transcript (com, chall, resp) is said to be valid, or accepting, if V(x, com, chall, resp) outputs 1. The main requirements of a Σ-protocol are:

Correctness: If the prover knows (x, w) ∈ R and behaves honestly, then the verifier outputs 1.

n-special soundness: There exists a polynomial-time extraction algorithm that, given a statement x and n valid transcripts (com, chall_1, resp_1), ..., (com, chall_n, resp_n), where chall_i ≠ chall_j for all 1 ≤ i < j ≤ n, outputs a witness w such that (x, w) ∈ R with probability at least 1 − ε for soundness error ε.

A special sound Σ-protocol for R is also called a Proof of Knowledge (PoK) for R. Our Σ-protocol will have the peculiar property that the relation used to prove correctness turns out to be a subset of the one used to prove soundness. This will require extra care when proving security in Section 5.
Special Honest Verifier Zero-knowledge (SHVZK): There exists a polynomial-time simulator that, given a statement x and a challenge chall, outputs a valid transcript (com, chall, resp) that is indistinguishable from a real transcript.Definition 5. A Σ-protocol (P 1 , P 2 , V) is computationally special honest verifier zero-knowledge if there exists a probabilistic polynomial time simulator Sim such that for all probabilistic polynomial time stateful adversaries A If the adversary is unbounded, the protocol is said to be statistically SHVZK. Non-Interactive Zero-Knowledge Proofs In this paper, we consider non-interactive zero-knowledge (NIZK) proofs in the random oracle model that satisfy correctness, computational extractability and statistical zero-knowledge.Definition 6. (NIZK proofs.)Let R be a relation and let the language L be a set of statements {st ∈ {0, 1} n } such that for each statement st ∈ L, there exists a corresponding witness wit such that (st, wit) ∈ R. A non-interactive zero-knowledge (NIZK) proof system for R is a tuple of probabilistic polynomial-time (PPT) algorithms NIZK = (P NIZK , V NIZK ) defined as follows (we assume that all algorithms in the description below have access to a common random oracle; we omit specifying it explicitly for ease of exposition): -P NIZK (st, wit): A PPT algorithm that, given a statement st ∈ {0, 1} n and a witness wit such that (st, wit) ∈ R, outputs a proof Π. -V NIZK (st, Π): A deterministic algorithm that, given a statement st ∈ {0, 1} n and a proof Π, either outputs 1 (accept) or 0 (reject). Computational Extractability.There exists an efficient PPT extractor Ext NIZK such that for any security parameter λ ∈ N and for any polynomially bounded cheating prover P * where: (i) Ext NIZK has rewinding access to P * , and (ii) P NIZK , Ext NIZK and P * all have access to a common random oracle, letting (st, Π) Statistical Zero Knowledge.There exists an efficient PPT simulator Sim NIZK such that for any security parameter λ ∈ N and for any non-uniform unbounded "cheating" verifier V * = (V * 1 , V * 2 ) where P NIZK , V * 1 and V * 2 all have access to a common random oracle, and such that Sim NIZK is allowed programming access to the same random oracle, we have where (st, wit, ξ) ← V * 1 (1 λ ), Π ← P NIZK (st, wit), and Π ← Sim NIZK (st). Isogeny graphs and expansion Let p be a prime and d an integer not divisible by p.An elliptic curve with level d Borel structure is a pair (E, C), where E is an elliptic curve defined over a field of characteristic p and C is an order We say that two such pairs (E 1 , C 1 ) and (E 2 , C 2 ) are isomorphic if there exists an isomorphism ϕ : Let ℓ be a prime not dividing pd.The supersingular ℓ-isogeny graph with level representatives of the set of isomorphism classes of supersingular elliptic curves with a level d Borel structure defined over F p 2 .We note that each such class over F p 2 admits a model defined over F p 2 : Each isomorphism class of supersingular elliptic curves has a representative E such that #E(F p 2 ) = (p + 1) 2 and thus the p 2 -Frobenius acts as a scalar multiplication [−p], so the kernel of any ℓ-isogeny is Gal(F p 2 )-invariant. Now, the set of edges from The number of edges is independent of the representative of the isomorphism classes.When d = 1, we recover the usual definition of the supersingular ℓ-isogeny graph. 
This graph is directed. The out-degree of each vertex is ℓ + 1; however, the in-degree is not always ℓ + 1, hence the adjacency matrix of the graph is not always symmetric.

Generalities on the graph and its adjacency matrix

On the complex vector space C^V we introduce the Hermitian form Q((E_i, C_i), (E_j, C_j)) = w_i δ_ij, where δ_ij is the Kronecker symbol and the w_i are positive integer weights attached to the vertices (w_i ≤ 3 for every i). Denote by ∥·∥_Q the associated norm. We will compare ∥·∥_Q with the L^1 and L^2 norms on C^V. The set Ω of probability distributions on V is the set of vectors with real positive entries and L^1 norm equal to 1. Consider also the vector E = ∑_{i=1}^n (1/w_i)(E_i, C_i), and s the probability distribution obtained by normalizing E. The following result contains a number of general facts about the adjacency matrix of G, which will be used later on.

Theorem 7.
1. The adjacency matrix A of G is self-adjoint with respect to Q; in particular it is diagonalizable with real eigenvalues and eigenvectors;
2. The vector E is a left-eigenvector of A of eigenvalue ℓ + 1;
3. The vector u with all entries equal to 1 is a right-eigenvector of A; in particular its orthogonal complement S with respect to the L^2 scalar product is preserved by right multiplication by A;
4. ..., where, in the product, q runs over the prime divisors of d;

Proof. First we show 1. Let L_ij be the set of degree ℓ isogenies from ... Since ℓ is coprime with d, ℓC_i is equal to C_i, and duality gives a bijection between L_ij and L_ji. The entry a_ij of A is the cardinality of the quotient ... Dividing this equality by two we get w_i a_ji = w_j a_ij. The claim now follows from the definition of Q.

We now prove 2. We have ...

To see part 3, observe that the out-degree of each vertex of G is ℓ + 1, hence the sum of the elements of each row of A is ℓ + 1, whence the claim.

We now prove 4. Let ⟨·, ·⟩ be the Hermitian product on C^V such that the basis ... Then, the Cauchy–Schwarz inequality gives ..., and moreover we get equality when ṽ = w/∥w∥_{L^1}. We now compute ... We are going to show that, for H the group of upper triangular matrices, ... Taking this equation for granted, K can be computed by writing d = ∏_q q^{e_q} and checking that |GL_2(Z/dZ)| = ∏_q (q^{2e_q} − q^{2e_q−2})(q^{2e_q} − q^{2e_q−1}) and |H| = ∏_q q^{e_q}(q^{e_q} − q^{e_q−1})^2.

Equation (2) is the orbit equation for a group action. Fix an elliptic curve E, and let X be the set of order d cyclic subgroups of E[d]. This set has a natural transitive action of Aut(E[d]) ≅ GL_2(Z/dZ), which gives a bijection X ↔ GL_2(Z/dZ)/H, so the right-hand side of Equation (2) is the cardinality of X. Level d Borel structures on E are the orbits of the action of Aut(E) on X. The left-hand side of Equation (2) is again the cardinality of X, obtained by summing the cardinalities of the orbits.

Finally we prove 5. Let π = ∑_{i=1}^n π_i(E_i, C_i) be a probability distribution and let λ = ... We conclude by recalling that w_i ≤ 3 for every i. Notice that for π = (E_i, C_i) we get ∥π − s∥²_Q = w_i − 1/λ, hence the above estimate is not too loose.

Proof of Theorem 3

We now prove that G = G(p, d, ℓ) has the Ramanujan property. This follows from the first three items of Theorem 7 combined with the following result, whose proof heavily relies on the theory of modular forms.

Theorem 8. Let S ⊂ C^V be the subspace of vectors ∑_i v_i(E_i, C_i) such that ∑_i v_i = 0, as in Theorem 7.
The eigenvalues of the action of A on S are all contained in the Hasse interval [−2√ℓ, 2√ℓ].

To prove Theorem 8, we assume standard notations and results about quadratic forms and modular forms, such as the ones from [25,48,33]. Given two elliptic curves with level structure (E_i, C_i) and (E_j, C_j), we denote by Λ_ij the lattice of isogenies ϕ : E_i → E_j compatible with the level structures. The degree defines a quadratic form deg on Λ_ij. This quadratic module has rank four, level dp and determinant d²p². We can thus define the associated theta series ... This function is in M_2(Γ_0(dp)), the space of modular forms of weight two for the modular group Γ_0(dp), by [33, Theorem 4.2] (observe that in loc. cit. the exponential is one because Q(h) is an integer; moreover, we choose P = 1) or [48, Chapter IX, Theorem 5, page 218]. The above construction extends to a Hermitian pairing ... We call this pairing the Brandt pairing, even though there is a little ambiguity in this set-up. The Brandt pairing is non-degenerate: let v = ∑_i c_i(E_i, C_i); then the coefficient of q in Θ(v, v) is the Hermitian norm of the vector of coefficients (..., c_i, ...). We will prove the following two key propositions.

Proposition 9. The Brandt pairing intertwines the adjacency matrix A of G and the Hecke operator T_ℓ.

Proposition 10. For every three elliptic curves with level structure ...

The combination of these two results tells us that the spectrum of the action of A restricted to S is contained in the spectrum of the action of the Hecke operator T_ℓ on the space of cusp modular forms of weight two for Γ_0(dp). The Ramanujan Conjecture, proved by Eichler, predicts that this second spectrum is contained in the Hasse interval, and hence proves Theorem 8. We refer to [24, Theorem 8.2] for a proof of the Ramanujan Conjecture. In loc. cit. this result is proven only for eigenvectors of T_ℓ which are newforms. An eigenvector which is an old form will come from an embedding ι : S_2(Γ_0(m)) → S_2(Γ_0(dp)) with m dividing dp. Since ℓ is coprime with dp, the map ι is T_ℓ-equivariant (cf. [25, proof of Proposition 5.6.2]), so we can still deduce our result from [24, Theorem 8.2]. It is worth recalling that [24, Theorem 8.2] is far more general than what we need, as it applies to modular forms of every weight.

Proof of Proposition 9

We prove that both sides have the same q-expansions. For a power series ... On the other side, ..., where C varies among the cyclic non-trivial subgroups of E_i[ℓ] of cardinality ℓ, and π_C is the projection ...; let F be the disjoint union of the above maps. The map F is surjective: ... and we can write α = f ∘ π_C. In particular, let us compute the cardinality of the fiber ... The quadratic module (Λ/c_0Λ, deg) is (non-canonically) isomorphic to a Borel subalgebra of (End((Z/c_0Z)^⊕2), det). An isomorphism can be obtained by mapping it to Hom(E[c_0], E′[c_0]), and then choosing a symplectic basis.

If ϵ = 0 we are done; otherwise ϵ = 1. Since [Hom(E, E′) : Λ] = d is prime to p, we have Λ/p = Hom(E, E′)/p = (Hom(E, E′) ⊗ Z_p)/p, and the quadratic Z_p-module Hom(E, E′) ⊗ Z_p does not depend on the pair because, by the Deuring correspondence (see e.g. [52, Theorem 42.3.2]), together with [52, Lemma 19.6.6], it is isomorphic to λO_p with the reduced norm, where O_p is the maximal order in the non-ramified quaternions over Q_p, and λ is an element of norm prime to p.
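As a purely numerical aside (assuming nothing about the isogeny graphs themselves), the snippet below shows what the Ramanujan condition means in spectral terms: all non-trivial eigenvalues of an (ℓ+1)-regular graph's adjacency matrix must lie in the Hasse interval [−2√ℓ, 2√ℓ]. The complete graph on ℓ + 2 vertices serves as a toy, undirected stand-in.

```python
# Toy numerical check of the Ramanujan condition for an (l+1)-regular graph:
# every eigenvalue of the adjacency matrix other than l+1 must lie in [-2*sqrt(l), 2*sqrt(l)].
# The complete graph K_{l+2} stands in for an actual isogeny graph here.
import numpy as np

def is_ramanujan(adj, l):
    eig = np.sort(np.linalg.eigvalsh(adj.astype(float)))
    trivial, nontrivial = eig[-1], eig[:-1]
    bound = 2 * np.sqrt(l)
    return np.isclose(trivial, l + 1) and np.all(np.abs(nontrivial) <= bound + 1e-9)

l = 3
n = l + 2                                  # K_5 is 4-regular
adj = np.ones((n, n)) - np.eye(n)
print(is_ramanujan(adj, l))                # True: eigenvalues are 4 and -1 (with multiplicity 4)
```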
Mixing time of non-backtracking walks

We finally analyze the behavior of random walks in G = G(p, d, ℓ), which we will ultimately use to prove statistical indistinguishability of the distributions arising from our proof of knowledge. First, observe that Theorem 7 item 2 shows that the probability distribution s introduced in Subsection 3.1 is the stationary distribution on G. This is nearly the uniform distribution: all curves are equally likely, with the possible exception of the two curves with extra automorphisms, j = 1728 and j = 0, which are respectively twice and thrice less likely.

We are going to determine the speed at which random walks converge to the stationary distribution. We focus on non-backtracking walks, which are the most useful for cryptographic protocols; but, because the graph is directed, we need some care to define them. Edges of G are equivalence classes of isogenies, so we choose a representative for each class. For an edge α we define its dual edge as the chosen representative β of the class Aut(E, C)α̂, so that βα = uℓ for some u ∈ Aut(E, C). Notice that the dual of β (as an edge) might be different from α, but this is not relevant for us. We say that a random walk on G is non-backtracking if an edge is never followed by its dual. With this "duality", isogenies of degree a power of ℓ and with cyclic kernel (up to the equivalence α ∼ β iff ker α = ker β) correspond to non-backtracking walks.

Theorem 11 (Mixing time). Let π be a probability distribution on G, and π^(k) the distribution obtained after a non-backtracking random walk of length k. Then we have ..., where K and M are as in Theorem 7.

Proof. Denote by A^(k) the matrix whose (i, j) entry is the number of non-backtracking walks of length k from i to j. Since each edge has a unique dual, we get the same recurrence formula as in the non-oriented case, namely A^(k+1) = A^(k)A − ℓA^(k−1) for k ≥ 2, with A^(1) = A and A^(2) = A² − (ℓ + 1)·Id. Observe that the sum of all the entries in a fixed row of A^(k) is (ℓ + 1)ℓ^{k−1}. We denote by P^(k) the normalization P^(k) = A^(k)/((ℓ + 1)ℓ^{k−1}). Hence, P^(k) is a polynomial in A, see e.g. [2, Section 2]. Let us call this polynomial µ_k(x) (here, the use of the symbol µ_i is slightly different from the one of [2]). The matrix P^(k) is diagonalizable, it has the same eigenvectors as A, and it has eigenvalues µ_k(ℓ + 1) = 1 and µ_k(λ_i), where λ_i is any eigenvalue of A different from ℓ + 1. Combining the proof of [2, Lemma 2.3] and Theorem 3, we get ...

Now observe that π^(k) = πP^(k), and hence π^(k) − s = (π − s)P^(k). The difference of two probability distributions is orthogonal, for the standard L^2 scalar product, to the vector u from Theorem 7 item 3. Since E is not orthogonal to u, by Theorem 7 item 3 we conclude that π − s is in the linear span of the eigenvectors of A corresponding to eigenvalues different from ℓ + 1. Since A is self-adjoint with respect to Q, using Equation (6) we have ... The definition of K and M from Theorem 7 tells us that ... We obtain the result by recalling that the total variation distance between two probability distributions is half of the L^1 distance, see e.g. [39, Proposition 4.2].

Remark 12 (Improvement of Theorem 11 and Lemma 15). Under the assumption that the eigenvalues of the adjacency matrix of G are strictly contained in the Hasse interval (so that there are no eigenvalues equal to ±2√ℓ), Theorem 11 can be improved: the linear factor (k + 1) can be replaced by a constant which does not depend on k. Indeed, as ±2√ℓ is not an eigenvalue, sin(θ) in Equation (5) never vanishes. If we write |sin(θ)| ≥ ε for some ε > 0, we obtain a bound which can be used in place of Equation (6).
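To make the walks concrete, here is a small self-contained sampler of non-backtracking walks on an undirected simple graph, where non-backtracking simply means never immediately reversing the edge just traversed. This is only a toy analogue of the directed, dual-edge-aware walks defined above.

```python
# Toy sampler of a non-backtracking random walk on an undirected simple graph:
# the walk never immediately returns along the edge it has just traversed.
import random

def non_backtracking_walk(neighbors, start, length, rng=random):
    walk = [start]
    prev = None
    for _ in range(length):
        cur = walk[-1]
        options = [v for v in neighbors[cur] if v != prev]
        if not options:          # cannot happen on a simple graph with all degrees >= 2
            break
        nxt = rng.choice(options)
        walk.append(nxt)
        prev = cur
    return walk

# Example: the 3-regular cube graph, with vertices labelled by 3-bit integers.
cube = {v: [v ^ (1 << i) for i in range(3)] for v in range(8)}
print(non_backtracking_walk(cube, 0, 10))
```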
Observe that, even with this improvement, the bound will not be sharp, because in the bound of Equation (7) we consider only the eigenvalues with greatest modulus, but the other eigenvalues of A have smaller modulus. This argument in turn improves Lemma 15, where the linear factor k can be replaced by a constant independent of k.

Proof of Knowledge

Our goal is to provide a PoK of an isogeny walk ϕ : E_0 → E_1 between two supersingular curves defined over F_{p^2} that can be seamlessly plugged into a distributed Secuer generation protocol. For this, we need the following properties:
1. Compatible with any pair of curves (E_0, E_1); this rules out [30,31], which is restricted to a special starting curve E_0, and [19] and derivatives, which are restricted to curves defined over F_p.
2. Statistically ZK, so that the security of the final Secuer does not hinge on computational assumptions brought in by the PoK; this rules out all other isogeny-based PoKs in the literature.
3. Post-quantum secure, possibly relying on as few additional assumptions as possible; this rules out many generic ZK proof systems.
4. Possibly compatible with any walk length and any base field F_{p^2}.
5. Usable in practice for cryptographically sized finite fields.

The only attempt at using generic proof systems to prove knowledge of isogeny walks has been made in [14], and is based on a SNARG derived from a Sumcheck protocol carefully optimized for isogenies. However, this work does not consider ZK, and does not evaluate the concrete efficiency of the SNARG. Even if it could be made efficient, adding post-quantum ZK would likely come at a considerable cost, thus we do not investigate this path further.

Our new PoK inherits from the SIDH-based Σ-protocol of De Feo, Jao and Plût [20], and from the recent developments of De Feo, Dobson, Galbraith and Zobernig [18]. The common theme to all of them is to construct random SIDH squares on top of the secret isogeny ϕ : E_0 → E_1, and to reveal some, but not all, of the edges ψ, ψ′, ϕ′ in response to a challenge. The reason these protocols are not statistically ZK is that the side ϕ′ is strongly correlated to the parallel side ϕ (often unique given E_2) and can thus easily be distinguished by an unbounded adversary.

Our first idea is to make the walk ψ long enough that the distribution of (E_2, ϕ′) becomes statistically close to the uniform distribution on supersingular curves with isogenies of degree deg(ϕ). To prove it, we will use the properties of isogeny graphs with level structure analyzed in Section 3. But making ψ longer is easier said than done. SIDH-based protocols are constrained in the lengths of ϕ and ψ by the form of the prime p: typically, p + 1 = 2^a 3^b and then deg(ϕ) = 2^a and deg(ψ) = 3^b. Our second idea is to glue several SIDH squares together to make longer walks (see Fig. 2). We call these larger diagrams SIDH ladders.

A valuable side effect of gluing SIDH squares together is that we can free ourselves from the constraints on p. All we need is that isogenies of a small prime degree ℓ coprime to deg(ϕ) can be computed efficiently; then we stack vertically sufficiently many SIDH squares to make deg(ψ) = ℓ^n as large as we need. In practice, we will take deg(ϕ) = 2^m and deg(ψ) = 3^n, and the protocol will be most efficient for SIDH primes, but in full generality our protocol works for any base field and any isogeny degree.
Protocol description and analysis Let E 0 , E 1 be supersingular curves defined over a finite field F p 2 , and let ϕ : E 0 → E 1 be a cyclic separable isogeny of smooth degree d.Let ℓ be a small prime not dividing pd.Let C(m; r) be a statistically hiding and computationally binding commitment scheme.Our Σ-protocol is described in Fig. 1; it depends on a parameter n, controlling the length of the ℓ-isogeny walks, that we will determine in Definition 16.The prover consists of two stateful algorithms (P 1 , P 2 ): the former is randomized and produces a commitment (com 2 , com 3 ), the latter receives a ternary challenge chall ∈ {−1, 0, 1} and produces a deterministic response resp.The verifier is a deterministic algorithm that receives (com 2 , com 3 ), chall, resp and outputs a bit indicating whether or not the proof is accepted. Assuming the commitment C is computationally binding, it is 3-special sound for the relation More precisely, there is a probabilistic polynomial time algorithm that, given three successful transcripts of the protocol with same commitments and distinct challenges, either recovers a witness χ : E 0 → E 1 , or opens one of the commitments C(E i ; r i ) to two distinct values (breaking the binding property). Proof.Correctness.Suppose that the prover P = (P 1 , P 2 ) and the verifier V follow the protocol.First note that, since the degree d of ϕ is smooth, the SIDH ladder in P 1 can be constructed as described in Section 4.2.Then it is clear that the commitments open successfully, and the verifier accepts the transcript for any challenge.3-special soundness.Given three accepting transcripts (com, −1, resp −1 ), (com, 0, resp 0 ) and (com, 1, resp 1 ), recover (ϕ ′ , E 2 , r 2 , E 3 , r 3 ) = resp 0 where ϕ ′ : E 2 → E 3 is an isogeny.If the curves in resp −1 and resp 1 are not equal to E 2 and E 3 respectively, then we can open one of the commitments C(E 2 ; r 2 ) or C(E 3 ; r 3 ) to two distinct outputs.Otherwise, we have resp −1 = (ψ, E 2 , r 2 ) and resp 1 = (ψ ′ , E 3 , r 3 ) where ψ : Factoring out the non-cyclic part of χ ′ , we extract a cyclic isogeny χ : • χ for some 0 ≤ i ≤ n; however, like in the original SIDH PoK [18,32], we cannot guarantee that i = 0. We are now going to define the simulator for proving ZK.Simulating chall = ±1 is easy, however how well we can simulate the case chall = 0 depends on the parameter n given to P 1 .The opening (E 2 , ϕ ′ : E 2 → E 3 ) can be equivalently viewed as the curve with level d Borel structure (E 2 , ker(ϕ ′ )).Our goal is to have this opening distributed like a "random" vertex in the graph G = G(p, d, ℓ).To this effect, we define two sequences D 1 (k) and D 2 (k) of probability distributions on G, and we show that they converge as k grows.Definition 14.Let ϕ : E 0 → E 1 be a cyclic separable isogeny of degree d.Define where C E (f ) is the uniform distribution on the cyclic subgroups of order f of E, up to Aut(E). Lemma 15.Keep notations as above, fix a positive real number ε, and let k be a positive integer such that is the total variation distance between the two distributions, also known as statistical distance. 
Proof.We bound the statistical distance of each of D 1 (k) and D 2 (k) from the stationary distribution of G(p, d, ℓ), as determined in Theorem 7, then we conclude with the triangle inequality.For D 1 (k), we can directly apply Theorem 11, but D 2 (k) needs more care.Let G 0 be the classical isogeny graph.This can be thought of as the graph with d = 1 Borel level structure.Let s 0 be the stationary distribution on G 0 .Consider the projection map P : G → G 0 which forgets the level structure.The push-forward distribution P * D 2 (k) is the distribution of the length k non-backtracking walks starting at E 0 , so we can bound its total variation distance from s 0 using Theorem 11.For any probability distribution π on G 0 let us denote π the distribution on G obtained by first choosing E with distribution π and then choosing C uniformly inside the set of cyclic subgroups of order d.Notice that for each two subgroups C, C ′ , the pair (E, C) defines the same vertex as (E, C ′ ) if and only if there exists an automorphism of E sending C to C ′ .This, together with the fact that the set of C's for a single E has cardinality where H is the subgroup of upper triangular matrices.The above formula, together with (2), implies that for every probability distribution π on G 0 and every subset A of G 0 , one has π(P −1 (A)) = π(A).In turn, this means that for π 1 , π 2 probability measures on G 0 , we have Proof.We simulate the honest prover for each of the three challenges as follows. Executing the protocol The protocol we just described crucially depends on the ability to construct a commutative square with sides of degrees d and ℓ n .The SIDH setting has p + 1 = d • ℓ n so that the square can be constructed by simply pushing a single kernel point for ψ through ϕ and vice versa.We refer to such a square as an SIDH square.For more general choices of ℓ n and d, the kernels are typically generated by points defined over very large extension fields, requiring superpolynomial space.We efficiently construct such "larger" squares by gluing together several SIDH squares in what we call SIDH ladders, as depicted in Fig. 2. For simplicity, we shall present the case d = (2 a ) w and ℓ n = (3 b ) h , where 2 a and 3 b are the side lengths of an SIDH square, and w and h are positive integers defining the width and height of the ladders in units of SIDH squares.However, the technique generalizes easily to any coprime d and ℓ n , as long as isogenies of degrees d and ℓ can be efficiently computed. First, notice that there always exist some choice of a and b such that points (and hence kernel subgroups) of orders 2 a and 3 b can be represented efficiently.This is clear if the prime p is a SIDH prime where 2 a 3 b | (p + 1), but for a generic prime p, one can set a = b = 1: Points of order 2 and 3 are defined over a small extension field and can thus be efficiently represented.Moreover, any isogeny of degree (3 b ) h is the composition of h isogenies of degree 3 b each, which can be stored as a sequence of h kernel generators which are efficiently representable. 
If the width w of the ladder is one, the prover can now recursively push the kernel G of the isogeny ϕ = ϕ_{0,1} through the isogenies ψ_{i,0} to obtain its image G_i on each curve E_{i,0}. Each horizontal isogeny ϕ_{0,i} has kernel G_i, and the prover can compute the kernel of the right-side vertical isogeny ψ′_{i,0} as the image of the kernel of ψ_{i,0} under the isogeny ϕ_{i−1,1}. Since each square composed of (E_{i,0}, E_{i+1,0}, E′_{i,0}, E′_{i+1,0}) is a commutative diagram, so is the larger square (E_0, E_1, E_2, E_3).

In the general case where w > 1, the prover can use a similar approach for the horizontal isogeny ϕ as used for the vertical isogeny ψ: the isogeny ϕ can be written as the composition of w isogenies ϕ_{0,w} ∘ ... ∘ ϕ_{0,1} of degree 2^a, and their kernels can be mapped through the vertical isogenies. In other words, the prover can glue horizontally w compatible ladders, one for each factor ϕ_{0,i} of ϕ. The right descending isogenies of each ladder are used as the left descending isogenies of the next one. This allows the prover to compute the w × h SIDH squares in such a way that the curves (E_0, E_1, E_2, E_3) and the isogenies between them form a commutative diagram. This is illustrated in Fig. 2.

For the challenges chall = ±1, the prover reveals the isogenies ψ_{i,0} of the leftmost squares, or the isogenies ψ_{i,w} of the rightmost squares, respectively. For the challenge chall = 0, the prover responds with the isogenies ϕ_{h,i} of the bottom squares.

Verification consists of evaluating (depending on the challenge) either w or h isogenies of degree 2^a or 3^b, which can be done efficiently. Generating the proof is slower, as the prover needs to fill in all the w × h SIDH squares that make up the ladder. The proving complexity is thus quadratic in w and h, while the verification complexity is linear in w and h. However, the complexity of computing an SIDH square with degrees 2^a or 3^b is only quasilinear in a and b using sparse strategies [20]; thus, maximizing the size of the SIDH squares improves performance, which explains why SIDH primes are the most efficient scenario for this proof. If the degree of the isogenies and the size of the underlying field are kept constant, in the SIDH setting we have 2^a 3^b | (p + 1) for large values of a and b (on the order of several hundred), and thus w and h can be small. For a generic prime, the prover might need to set a = b = 1 and work with large values of w and h, incurring a quadratic cost, besides possibly having to compute points over an extension field of degree bounded by a small constant.

Distributed Secuer Setup and its Security

In this section, we formally describe the distributed Secuer setup protocol and prove its security under a security definition using the simplified universal composability (SUC) framework due to Canetti, Cohen, and Lindell [9] in the real/ideal world paradigm. Our security definitions consider a dishonest-majority corruption model, wherein the adversary can corrupt up to t − 1 of the t participating parties in the distributed Secuer setup protocol. The protocol uses a non-interactive version of the Σ-protocol described in Section 4. We begin by formally describing this non-interactive zero-knowledge (NIZK) PoK protocol.
The NIZK protocol

We transform the Σ-protocol of Section 4 into a NIZK using the standard Fiat–Shamir heuristic [27] for transforming interactive PoK protocols into NIZK proofs, albeit with the difference that soundness and zero-knowledge hold for slightly different languages.

The NIZK construction. Let E_0, E_1 be supersingular curves defined over a finite field F_{p^2}, let ϕ : E_0 → E_1 be a separable isogeny of smooth degree d, and let C(m; r) be a statistically hiding and computationally binding commitment scheme. Additionally, let Σ = (P_1, P_2, V) be the interactive PoK protocol described in Section 4, let λ ∈ N be the security parameter, let ℓ be a small prime not dividing dp, let n = n(p, d, ℓ, λ), and let N = poly(λ) be a fixed polynomial. Finally, let H : {0, 1}* → {−1, 0, 1}^N be a random oracle. The NIZK proof system consists of a pair of algorithms NIZK = (P_NIZK, V_NIZK) as described in Fig. 3, which specifies P_NIZK(E_0, E_1, ϕ, n, N) and V_NIZK(E_0, E_1, Π, N). The prover algorithm P_NIZK is randomized and produces a proof Π. The verifier algorithm V_NIZK is deterministic; it receives the proof Π and outputs a bit b ∈ {0, 1} indicating whether or not the proof is accepted.

Correctness, Extractability and ZK. Correctness follows immediately from the correctness of the underlying Σ-protocol. We state and prove the following propositions for extractability and ZK.

Proposition 19. Assuming that Σ = (P_1, P_2, V) satisfies 3-special soundness with respect to the relation R⋆ (described in Proposition 13) and that H is a random oracle, NIZK = (P_NIZK, V_NIZK) satisfies extractability (and hence soundness) with respect to the relation R⋆.

Proof. We provide an informal proof overview. We begin by noting that Σ is a public-coin protocol, and that there exists a probabilistic polynomial-time algorithm that extracts a witness from 3 accepting transcripts corresponding to N parallel executions of Σ with respect to the same statement. Consequently, we can invoke the generalized forking lemma of [7] to argue the existence of a probabilistic polynomial-time witness-extraction algorithm for NIZK. This completes the proof of extractability (and hence, soundness) for NIZK.

Proof (statistical zero knowledge). We again provide an informal proof overview. Let Sim_Σ be a ZK simulator that simulates an accepting transcript for the underlying Σ-protocol (as described in the proof of ZK for Σ). We construct a ZK simulator Sim_NIZK that simulates an accepting proof as follows:
1. Sim_NIZK simulates the random oracle H as follows: it maintains a local table consisting of tuples of the form (x, y) ∈ {0, 1}* × {−1, 0, 1}^N. On receiving a query x ∈ {0, 1}* from the adversary A, it looks up this table to check whether an entry of the form (x, y) exists. If yes, it responds with y. Otherwise, it responds with a uniformly sampled y ← {−1, 0, 1}^N, and programs the random oracle as H(x) := y by adding the entry (x, y) to the table.
2. For each i ∈ [N], Sim_NIZK internally invokes the simulator Sim_Σ for the underlying Σ-protocol to obtain the i-th accepting transcript, of the form ((com_{2,i}, com_{3,i}), chall_i, resp_i).
3. At this point, Sim_NIZK aborts if the adversary A has already issued a random oracle query on the input x = ((com_{2,1}, com_{3,1}), ..., (com_{2,N}, com_{3,N})).
4. Otherwise, Sim_NIZK programs the random oracle as H((com_{2,1}, com_{3,1}), ..., (com_{2,N}, com_{3,N})) := (chall 1 , . .
., chall N ), and outputs the simulated proof as We note that Sim NIZK runs in polynomial time as long as Sim Σ runs in polynomial time.Additionally, if Sim NIZK does not abort, it outputs a simulated proof that is distributed in a statistically indistinguishable manner from the distribution of a real proof, assuming that Sim Σ outputs a simulated accepting transcript with distribution statistically indistinguishable from a real accepting transcript for Σ.Finally, Sim NIZK aborts with only negligible probability, since the adversary A guesses ((com 2,i , com 3,i ), chall i , resp i ) for each i ∈ [n] with at most negligible probability.This completes the proof of statistical ZK for NIZK. Our distributed Secuer setup protocol We now move to the distributed Secuer setup protocol.Let P 1 , . . ., P t be a set of t participating parties and let E 0 be some fixed starting curve.In a nutshell, the idea is to have the parties act sequentially: each P i at its own turn performs a secret random walk E i−1 → E i and broadcasts E i and a NIZK PoK of the secret walk.We claim that, as long as one party is honest, the final curve E t is a Secuer. To get any security guarantee, we need to carefully set the parameters of the random walk E i−1 → E i .The natural choice is to fix some small prime q, not dividing ℓp, and to take a random walk long enough that the distribution of E i is negligibly far from the stationary distribution on the q-isogeny graph G(p, 1, q).For example we may set q = 2 and ℓ = 3, then Theorem 11 provides a precise bound to set the length δ = n(p, 1, q, λ) of the q-walk as a function of the security parameter, and ultimately the parameter n(p, q δ , ℓ, λ) of the PoK. Remark 21.For increased efficiency, we may choose to perform shorter q-walks E i−1 → E i of length log q (p).This length approximates the diameter of the supersingular q-isogeny graph; hence, it ensures that the secret isogeny can reach almost any curve in the graph. Under mild assumptions, this choice would still yield a secure protocol, but it would also make the security proof somewhat more involved.For this reason, we shall stick here to the more conservative choice of walking long enough to ensure nearly stationary distribution of E i . We formally describe the protocol (referred to as Γ Secuer henceforth).Assume that E 0 is known to all the parties at the start.Let NIZK = (P NIZK , V NIZK ) be the non-interactive proof as described above.The protocol Γ Secuer proceeds in t rounds while only using broadcast channels of communication, where round-i for each i ∈ [t] is as follows: -Party P i performs a q-isogeny walk starting at curve E i−1 and ending at curve E i (where E i−1 and E i are both supersingular curves defined over F p 2 ), such that party P i knows a separable isogeny ϕ i : E i−1 → E i of degree q δ , where δ = n(p, 1, q, λ).-Party P i generates Π i ← P NIZK (E i−1 , E i , ϕ i , n, N ), where n = n(p, q δ , ℓ, λ), and broadcasts to all other parties.-Each party P j for j ∈ [t] \ {i} verifies the NIZK proof Π i by computing b If b i = 0 (i.e., the proof is invalid), P j aborts. At the end of round-t, all parties output E t to be the final output curve. Correctness.Correctness of Γ Secuer follows immediately from the correctness guarantees of the NIZK. 
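The round structure of Γ_Secuer can be summarised in a few lines. The sketch below is structural only: random_walk, prove and verify are caller-supplied placeholders for the q-isogeny walk and the NIZK of the preceding subsection, and the broadcast channel is modelled by a simple list.

```python
# Structural sketch of the t-round ceremony: party i walks from E_{i-1} to E_i,
# broadcasts (E_i, proof_i), and every party verifies the proof or aborts.
# random_walk, prove and verify are placeholders for the isogeny walk and the NIZK.
def ceremony(E0, parties, random_walk, prove, verify):
    transcript = []                  # models the broadcast channel
    E_prev = E0
    for i, party_randomness in enumerate(parties, start=1):
        phi_i, E_i = random_walk(E_prev, party_randomness)   # secret q^delta-walk
        proof_i = prove(E_prev, E_i, phi_i)
        transcript.append((E_i, proof_i))
        if not verify(E_prev, E_i, proof_i):                 # each other party checks
            raise RuntimeError(f"round {i}: invalid proof, aborting ceremony")
        E_prev = E_i
        del phi_i                    # an honest party discards its secret walk
    return E_prev                    # E_t, the candidate Secuer
```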
Proof of security for Γ Secuer We now present the proof of security for Γ Secuer using the simplified universal composability (SUC) framework [9] in the real/ideal world paradigm.We consider a dishonest majority corruption model, wherein the adversary can corrupt up to (t − 1) of the t participating parties. The ideal functionality.Intuitively, the ideal functionality for distributed Secuer setup should simply take as input the initial curve E 0 and output a Secuer E t .It is however not obvious how to model the property of being a Secuer in the plain SUC model: a game based definition, stating that an adversary who can compute End(E t ) can be used to break some other assumption, appears to be more appropriate.Thus, we prove security in two steps.First, we prove that Γ Secuer securely emulates a less-than-ideal functionality F * Secuer (described in Fig. 4) that enforces that: (a) for each i ∈ [t], if a corrupt party P i outputs a curve E i , it must know a valid isogeny ϕ i : E i−1 → E i , and (b) for each i ∈ [t], if an honest party P i outputs a curve E i , then the corresponding isogeny ϕ i : E i−1 → E i is hidden from the adversary.This step relies on the extractability and ZK properties of the NIZK protocol described above.Next, we prove that, assuming the hardness of the endomorphism ring problem in the F * Secuer -hybrid model, the output curve E t is a Secuer, i.e. that the (malicious) adversary cannot compute End(E t ). Theorem 22. Assuming that NIZK = (P NIZK , V NIZK ) satisfies extractability and zero-knowledge, and assuming the hardness of the endomorphism ring problem (Definition 1) and GRH, the output E t of the protocol Γ Secuer is a Secuer if at least one party P i * for some i * ∈ [t] is honest. Secure emulation of F * Secuer .We now prove that Γ Secuer securely emulates the less-than-ideal functionality F * Secuer .Our proof is in the real/ideal world paradigm defined formally as follows. The real world.The following entities engage in the real protocol Γ Secuer : (i) a set H ⊆ [t] of honest parties, (ii) a real-world adversary A controlling a set C ⊂ [t] of corrupt parties, and (iii) the environment E that provides E 0 to each party, interacts with the real-world adversary A, receives the final output curve E t from the honest parties, and eventually outputs a bit b ∈ {0, 1}. be the set of honest parties, and let Ci ⊆ [i − 1] be the set of corrupt parties among the first (i − 1) parties P1, . . ., P (i−1) .-For each j ∈ Hi, F * Secuer receives as input from Pj a tuple of the form (Ej, ϕj).-For each j ′ ∈ Ci, F * Secuer receives as input from the simulator Sim a tuple of the form (E j ′ , ϕ j ′ ).-If for any j ∈ [i − 1], ϕj is not an isogeny from the curve Ej−1 to the curve Ej, F * Secuer outputs ⊥ and aborts. -Otherwise, F * Secuer takes a random walk starting from the (i − 1)-th curve Ei−1 and ending in a curve Ei such that F * Secuer knows ϕi : Ei−1 → Ei, where ϕi is a separable isogeny of degree d. 
-Finally, F * Secuer outputs (Ei, ϕi) to the party Pi, and outputs Ei to the simulator Sim and to all parties Pj for j ̸ = i.The ideal world.The following entities interact with the functionality F * Secuer : (i) A set H ⊆ [t] of honest parties, where for each i ∈ H, party P i directly forwards its secret isogeny to F * Secuer , (ii) an ideal-world simulator Sim that sends inputs to F * Secuer on behalf of a set C ⊂ [t] of corrupt parties, and (iii) the environment E that provides each party with the starting curve E 0 , interacts with the simulator Sim, receives the final output curve E t from the functionality, and eventually outputs a bit b ∈ {0, 1}. For any t-party Secuer setup protocol Γ Secuer , any adversary A, any simulator Sim, and any environment E, we define the following random variables: real ΓSecuer,A,E : denotes the output of the environment E after interacting with the adversary A during a real-world execution of Γ Secuer .-ideal F * Secuer ,Sim,E : denotes the output of the environment E after interacting with the simulator Sim in the ideal world.Proof.We prove this theorem by constructing a PPT simulator Sim that simulates the view of the environment E in the ideal world.The simulator Sim receives E 0 from the environment E, internally runs the real-world adversary A and the NIZK simulator Sim NIZK , and proceeds in round-i for i ∈ [t] as described next.Note that we implicitly assume that Sim has rewinding access to the adversary A and programming access to the random oracle in the analysis below. Case and broadcasts (E i , Π i ) as the message corresponding to the honest party P i .Indistinguishability of views.We now prove that for the above construction of Sim, the view of E in the ideal world is indistinguishable from that in the real world.We prove this by a sequence of hybrids as described below (recall that H ⊆ [t] and C ⊂ [t] denote the set of honest and corrupt parties, respectively). -Hybrid-0: In this hybrid, the distribution of messages broadcast by each party is identical to the real-world protocol γ Secuer .-Hybrid-1: In this hybrid, for each corrupt party P j such that j ∈ C, instead of verifying the NIZK proof Π j using V NIZK (as in the real protocol), extract the witness ϕ j using the the extraction algorithm of NIZK.If extraction fails, output ⊥. -Hybrid-2: In this hybrid, for each honest party P i such that i ∈ H, instead of generating the NIZK proof Π i ← P NIZK (E i−1 , E i , ϕ i , n, N ) (as in the real protocol), generate a simulated proof as ). -Hybrid-3: In this hybrid, the distribution of messages broadcast by each party is identical to the ideal-world messages broadcast by Sim. Note that for E to distinguish between hybrid-0 and hybrid-1 with non-negligible probability, the adversary A must be able to produce with non-negligible probability a proof Π j corresponding to a corrupt party P j for j ∈ C such that V NIZK (E j−1 , E j , Π j , N ) = 1 but extraction fails.This immediately violates extractability of NIZK, thus completing the proof of the lemma. Note that for E to distinguish between hybrid-1 and hybrid-2 with non-negligible probability, there must exist an honest party P i for i ∈ H and a distinguisher D such that Pr , where λ is the security parameter, and where Proof.Suppose that there exists an adversary A corrupting a dishonest majority of the parties that efficiently computes the endomorphism ring of E i with non-negligible probability.Also assume that A corrupts all of P 1 , . . 
., P i−1 . We can use A to construct an algorithm B that solves the endomorphism ring problem. The algorithm B receives as input a uniformly random curve E * /F p 2 , internally runs the adversary A to emulate the outputs of the corrupt parties P 1 , . . ., P i−1 , and finally feeds A with E i := E * . The view of the adversary A is properly simulated by B, since the curve E i output by F * Secuer and the curve E * provided by B are statistically indistinguishable (here we use Theorem 11, which crucially relies on the honest party taking a q-walk of length n(p, 1, q, λ)). Finally, B uses A to recover the endomorphism ring of E * with non-negligible probability. This concludes the proof of Lemma 27.

We now prove Theorem 26. We break the proof into two cases: (i) P t is honest, and (ii) P t is corrupt. The proof for case (i) is immediate from Lemma 27, so we focus on case (ii). Let H ⊆ [t] be the set of honest parties, and let i * = max ({i : P i ∈ H}). By Lemma 27, E i * must be a Secuer. Now, suppose that E t is not a Secuer, i.e., there exists an adversary A, corrupting a dishonest majority of the parties, that efficiently computes the endomorphism ring of E t with non-negligible probability. Since all of P i * +1 , . . ., P t are corrupt, A knows a walk from E i * to E t in the ℓ-isogeny graph. But then, since A can compute End(E t ) and knows an isogeny path from E i * to E t , it can use the reduction [54] (assuming GRH) to recover End(E i * ), thereby violating Lemma 27. This completes the proof of Theorem 26. Finally, the proof of Theorem 22 follows immediately from the proofs of Theorem 23 and Theorem 26, which completes the proof of security for our distributed Secuer setup protocol Γ Secuer .

Implementation and Results

In this section, we report on our proof-of-concept implementation of our proof of knowledge (Section 4), including a discussion of proof sizes and running times. Moreover, we lay out concretely how one may deploy the trusted setup protocol from Section 5 in the real world.

Parameter selection. The base-field primes p in our proof-of-knowledge implementation are taken from the four SIKE parameter sets p434, p503, p610, and p751. As discussed in Section 4.2, our proof of knowledge achieves its optimal efficiency for SIDH-style primes. Moreover, those primes have been featured extensively in the literature, and thus appear to be the obvious choice to demonstrate our proof of knowledge. That said, we stress once more that our techniques are generic and can be applied for any choice of characteristic. We use degree q = 2 for the random walks E i → E i−1 , and ℓ = 3 for the random walks of the Σ-protocol of Fig. 1. As in Section 5, we set δ = n(p, 1, 2, λ) for the length of the 2-walks, and n = n(p, 2^δ, 3, λ) for the 3-walks. Lastly, the Σ-protocol needs to be repeated several times to achieve a negligible soundness error. Since a single repetition has soundness error 2/3, the protocol needs to be repeated ⌈λ/log₂(3/2)⌉ ≈ 1.71·λ times to achieve a soundness error of 2^−λ. We target the same security levels as the corresponding SIKE parameter sets, i.e., λ = 128 for p434 and p503, λ = 192 for p610, and λ = 256 for p751. The resulting conservative parameters are summarized in Table 1.
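To make the repetition count above concrete, the following short Python sketch evaluates the formula for the three targeted security levels. It is an illustration of the arithmetic only, not part of the reference implementation; the function n(p, d, ℓ, m) of Definition 16 depends on the mixing-time bound τ and is not reproduced here.

```python
import math

def sigma_repetitions(lam: int) -> int:
    """Smallest number of Sigma-protocol repetitions r such that a per-round
    soundness error of 2/3 satisfies (2/3)**r <= 2**(-lam)."""
    return math.ceil(lam / math.log2(3 / 2))

# Security targets matching the SIKE parameter sets mentioned above.
for prime, lam in [("p434", 128), ("p503", 128), ("p610", 192), ("p751", 256)]:
    print(f"{prime}: lambda = {lam:3d} -> {sigma_repetitions(lam)} repetitions")
```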
Implementation.We developed an optimized implementation6 of our proof of knowledge (Section 4.1) for the trusted-setup application (Section 5) based on version 3.5.1 of Microsoft's SIDH library 7 .Our implementation inherits and benefits from all lower-level optimizations contained in that library, and it supports a wide range of platforms with optimized code for a variety of Intel and ARM processors.Compiling our software produces two command-line tools prove and verify, which use a simple ASCIIbased interface to communicate the data contributed to the trusted setup and its associated proof of isogeny knowledge. The implementation closely follows the strategy outlined in Section 4.2.This includes the choices d = (2 a ) w and ℓ n = (3 b ) h ; thus, both the witness and the commitment isogenies are uniformly random cyclic isogenies of degree d and ℓ n respectively.To reduce latency, we additionally exploit parallelism: Recall that the proof of knowledge is repeated many times to achieve a low soundness error; indeed most of the computations are independent between those repetitions and can thus easily be performed at the same time on a multi-core system.This is confirmed by experimental results, where our implementation is observed to parallelize almost perfectly when run on an eight-core processor. Sampling purely random large-degree isogenies with code from SIDH comes with two caveats: First, the sampling of "small" squares must avoid backtracking between the individual squares being glued to ensure that the composition is cyclic in the end; in both cases this is done by keeping track of the kernel of the dual of the last prime-degree step of the previous square and avoiding points lying above this "forbidden" kernel when choosing the next square.Besides that, the specific isogeny formulas used in SIDH fail for the 2-torsion point (0, 0), which can be resolved by changing to a different Montgomery model each time this kernel point is encountered.For curves revealed in the proof, the choice of Montgomery model should be randomized to avoid leakage.Similarly, the kernel generators of the horizontal isogeny ϕ ′ also need to be randomized, as Lemma 15 only distinguishes cyclic subgroups and revealing specific generators may leak.Our software sacrifices some performance for simplicity, which aids auditability and hence helps increase trust in the results of a trusted-setup ceremony.Some unused optimizations: Two-isogenies are faster to compute than three-isogenies, and since the SIDH ladder is taller than wider, swapping the role of two-and three-isogenies in the trusted-setup application could somewhat improve the resulting performance.For simplicity, our implementation also only uses full SIDH squares, and thus all isogeny degrees are rounded up to the closest multiple of an SIDH square; shortening the sides of some of the squares can save time.We also did not apply all optimizations to reduce the proof size.This includes applying SIDH-style compression techniques [15] to the points contained in the proof, cutting their size approximately in half.Moreover, applying a slight bias when sampling the challenges chall i means smaller responses can appear more often, at the expense of requiring slightly more repetitions; we investigated this tradeoff and determined that the potential improvement is essentially void.submission failing to complete full chain verification before the tip curve is updated again increase.We can parallelize our verification of the multiple proofs to lower these chances, 
and do a quick validation abort if any proof fails or if any check of the validity of the chaining of curves fails.

The configuration for the continuous integration (CI) checks is maintained in a separate repository to prevent modification by protocol participants. Hosting the protocol on GitHub raises the bar for Sybil attacks, as it requires all participants to have a GitHub account with a verified email address. Using our tool requires the generation of a GitHub personal access token to authenticate when generating the submission, which further complicates automation and collusion among adversarial participants.

The end result of the protocol is a public git repository whose final commit contains, in a parsable hex encoding, a series of curves and valid proofs of knowledge of isogenies between them; the last of these curves is the final Secuer, a curve with unknown endomorphism ring. Anyone can pull down this artifact and independently verify the series of curves and proofs.

Conclusion

In this work, we analyzed a distributed Secuer generation protocol and proposed a concrete instantiation with strong security guarantees based on a novel proof of isogeny knowledge. In the upcoming months, to demonstrate the practical feasibility of our protocol, we intend to run a distributed Secuer generation ceremony using the technology outlined in Section 6. We believe that such a ceremony could easily scale to hundreds, or even thousands, of participants.

Our new proof of knowledge is especially well suited for SIDH-like base fields, but it can be used reasonably well with fields F p 2 of any characteristic. However, some important applications require a Secuer defined over F p . Although our proof of knowledge also applies to this case, it does not hide the degree of the secret isogeny walks, making it extremely cumbersome and inefficient to generate Secuers over such fields. With the exception of CSI-FiSh [5], all proofs of isogeny knowledge developed for prime fields are rather inefficient [19]; thus the distributed generation of a Secuer defined over F p is still an open problem in practice.

To show security of the proof of knowledge, we developed the theory of supersingular isogeny graphs with level structure, in particular proving that they possess the Ramanujan property. In this work we only focused on the so-called Borel level structure; however, similar properties can be proven for more general level structures. In a follow-up work, we will develop the general theory of these graphs, prove bounds on their eigenvalues, and discuss consequences for isogeny-based cryptography.

Theorem 3.
Let G = G(p, d, ℓ) the supersingular ℓ-isogeny graph with level d Borel structure.The adjacency matrix A of G is diagonalizable, with real eigenvalues, and has the Ramanujan property, i.e the integer ℓ + 1 is an eigenvalue of A of multiplicity one, while all the other eigenvalues are contained in the Hasse interval [ hence the proposition follows from (3) together with the above formula summed over α in the codomain.Proof of Proposition 10We have to show that, for any two pairs (E, C) and (E ′ , C ′ ) and any cusp of X 0 (dp), the residue r of Θ((E, C), (E ′ , C ′ ))dτ does not depend on (E, C) and (E ′ , C ′ ) at the cusp but only on p, d and the cusp.By the discussion in [25, Section 3.8, page 103] each cusp can be represented as ( a c ) with c dividing dp, and r is equal to a 0 (Θ((E, C), (E ′ , C ′ ))| M ) for M any matrix in SL 2 (Z) of the form ( a b c δ ).By [48, Chapter IX, Equation (21), page 213], we have r = 1 c 2 pd ν,λ∈Λ/cΛ e (a − 1) deg(λ) + deg(λ + ν) + (δ − 1) deg(ν) c where e(z) = e 2πiz , and Λ is the lattice of isogenies from (E, C) to (E ′ , C ′ ) which map C into C ′ .The above formula tells us that r only depends on M and on the quadratic form deg : Λ/cΛ → Z/cZ.Writing c = c 0 p ϵ with c 0 dividing N and ϵ = 0, 1 and using the Chinese remainder theorem we can split the quadratic form in two parts where cos(θ) = λ i /(2 √ ℓ).Recall that | sin(x + y)| ≤ | sin(x)| + | sin(y)|, hence | sin(mθ)| ≤ m| sin(θ)| and we can achieve the bound: Fig. 1 : Fig. 1: Interactive proof of knowledge of a cyclic isogeny ϕ : E 0 → E 1 of degree d. Definition 16 .Proposition 17 . One can then check by direct computation that s = s 0 .We conclude that d T V (D 2 (k), s) = d T V (P * D 2 (k), s 0 ), and the right hand side can be bound using Theorem 11.Given p, d, ℓ and m, define n(p, d, ℓ, m) = min k ∈ Z | τ (p, d, ℓ, k) ≤ 2 −m .Let λ be a security parameter and let n = n(p, d, ℓ, λ).The Σ-protocol of Fig. 1 is statistically SHVZK for the relation R d defined in Proposition 13, assuming the commitment C is statistically hiding. Fig. 2 : Fig. 2: An SIDH ladder.Remark 18. Above, we assumed that the degree of the witness ϕ was d = (2 a ) w .As mentioned before, this can be generalized to any witness ϕ of smooth degree d = d 1 . . .d w as far as the d i -torsion groups are accessible (ideally, one should have E 0[d i ] ⊆ E 0 (F p 2 )).In this case, one factors ϕ as ϕ = ϕ 0,w • . . .• ϕ 0,1 where each isogeny ϕ 0,i has degree d i , and constructs compatible ladders for each ϕ 0,i . Fig. 3 :Proposition 20 . Fig. 3: The NIZK.Proposition 20.Assuming that Σ = (P 1 , P 2 , V) is statistically SHVZK for the relation R d (described in Proposition 17) and that H is a random oracle, the NIZK NIZK = (P NIZK , V NIZK ) is statistically ZK for the relation R d . 
-1: Party P i is corrupt.In this case, Sim internally runs the real-world adversary A to obtain the broadcast message (E i , Π i ) corresponding to the corrupt party P i .It then uses the extraction algorithm of NIZK to extract the corresponding witness ϕ i .If extraction fails, Sim outputs ⊥ and aborts.Otherwise, Sim stores (E i , Π i , ϕ i ) internally, and broadcasts (E i , Π i ) as the message corresponding to the corrupt party P i .Case-2: Party P i is honest.In this case, Sim invokes the ideal functionality to obtain E i .Concretely, let C i ⊆ [i − 1] be the set of corrupt parties among the first (i − 1) parties P 1 , ..., P (i−1) .Sim invokes the ideal functionalityF * Secuer (E 0 , i) with the set {(E j ′ , ϕ j ′ )} j ′ ∈[Ci].If F * Secuer outputs ⊥, Sim outputs ⊥ and aborts.Otherwise, Sim receives from F * Secuer the corresponding curve E i .At this point, it invokes the simulator Sim NIZK of the NIZK protocol to obtain a simulated proof as Π This immediately violates the ZK property of NIZK, thus completing the proof of the lemma.Finally, hybrid-2 and hybrid-3 are identical by inspection, thus completing the proof of Theorem 23.Analyzing E t in F * Secuer -hybrid model.Based on the above secure emulation guarantee, we now analyze the output E t of Γ Secuer in the F * Secuer -hybrid model.Concretely, we state and prove the following theorem.Assuming the hardness of the endomorphism ring problem and GRH, the output E t of F * Secuer (E 0 , t) is a Secuer if at least one party is honest.To prove this theorem, we first prove the following lemma.Lemma 27.Assuming the hardness of the endomorphism ring problem, the output E i of F * Secuer (E 0 , i) for i ∈ [t] is a Secuer whenever P i is honest. Table 1 : Parameters and corresponding secret/proof size for each of the four SIKE finite fields. Table 2 : Benchmarks for isogeny walk generation, proving, and verification for each of the four SIKE finite fields.
18,935
2022-01-01T00:00:00.000
[ "Computer Science", "Mathematics" ]
AN ULTRAWEAK SPACE-TIME VARIATIONAL FORMULATION FOR THE WAVE EQUATION: ANALYSIS AND EFFICIENT NUMERICAL SOLUTION . We introduce an ultraweak space-time variational formulation for the wave equation, prove its well-posedness (even in the case of minimal regularity) and optimal inf-sup stability. Then, we introduce a tensor product-style space-time Petrov–Galerkin discretization with optimal discrete inf-sup stability, obtained by a non-standard definition of the trial space. As a consequence, the numerical approximation error is equal to the residual, which is particularly useful for a posteriori error estimation. For the arising discrete linear systems in space and time, we introduce efficient numerical solvers that appropriately exploit the equation structure, either at the preconditioning level or in the approximation phase by using a tailored Galerkin projection. This Galerkin method shows competitive behavior concerning wall-clock time, accuracy and memory as compared with a standard time-stepping method in particular in low regularity cases. Numerical experiments with a 3D (in space) wave equation illustrate our findings. nodes), respectively, such that the homogeneous boundary conditions are satisfied. We obtain a discretization of dimension 𝑁 ℎ . We show an example for Ω = (0 , 1) and ℎ = 14 i.e. , 𝑁 ℎ = 4) in Figure 1, the test functions on the left. The arising trial functions are depicted on the right and turn out to be identical to the time discretization trial functions in Example 3.1. regularity for initial data 0 ∈ 2 (Ω). Following [11], we employ specifically chosen test spaces so as to derive a well-posed variational problem. A Petrov-Galerkin method is then used for the discretization: inspired by [9], we first choose an appropriate test space and then define the (non-standard) trial space to preserve optimal inf-sup stability. After completion of this work, we learned that this approach is very closely related to the DPG * method [13,21]. The aforementioned discretization results into a linear system of equations B B B = , whose (stiffness) matrix B B B is a sum of tensor products and has large condition number, making the system solution particularly challenging. Memory and computational complexity are also an issue, as space-time discretizations in general lead to larger systems as compared to conventional time-stepping schemes, where a sequence of linear systems has to be solved, whose dimension corresponds to the spatial discretization only. Building upon [19], we introduce matrix-based solvers that are competitive with respect to time-stepping schemes. In particular, we show that in case of minimal regularity the space-time method using fast matrixbased solvers outperforms a Crank-Nicolson time-stepping scheme. The remainder of this paper is organized as follows: In Section 2, we review known facts concerning variational formulations in general and for the wave equation in particular. We derive an optimally inf-sup stable ultraweak variational form. Section 3 is devoted to the Petrov-Galerkin discretization, again allowing for an inf-sup constant equal to 1. The arising linear system of equations is derived in Section 4 and its efficient and stable numerical solution is discussed in Section 5. We show some results of numerical experiments for the 3D wave equation in Section 6. For proving the well-posedness of the proposed variational form we need a result concerning a semi-variational formulation of the wave equation, whose proof is given in Appendix A. 
Then, for all ∈ V ′ , the variational problem (2.2) admits a unique solution * ∈ U, which depends continuously on the data ∈ V ′ if and only if The inf-sup constant (or some lower bound) also plays a crucial role for the numerical approximation of the solution ∈ U since it enters the relation between the approximation error and the residual (by the Xu-Zikatanov lemma [35], see also below). This motivates our interest in the size of : the larger 3 , the better. A standard tool (at least) for (i) proving the inf-sup-stability in (C.2); (ii) stabilizing finite-dimensional discretizations; and (iii) getting sharp bounds for the inf-sup constant; is to determine the so-called supremizer. To define it, let : U × V → R be a generic bounded bilinear form and 0 ̸ = ∈ U be given. Then, the supremizer ∈ V is defined as the unique solution of It is easily seen that which justifies the name supremizer. The semi-variational framework We start presenting some facts from the analysis of semi-variational formulations of the wave equation, where we follow and slightly extend ( [3], Chap. 8). The term semi-variational originates from the use of classical differentiation w.r.t. time and a variational formulation in the space variable. As above, we suppose that two real Hilbert spaces and are given, such that is compactly imbedded in . Let : × → R be a continuous, coercive and symmetric bilinear form 4 . Next, let be the operator on associated with (·, ·) in the following sense: We define the domain of by 5) and recall that for any ∈ ( ) there is a unique ∈ such that ( , ) = ( , ) for all ∈ . Then, we define : ( ) → by ↦ → =: . By the spectral theorem there exists an orthonormal basis { : ∈ N} of (the eigenvectors of ) and numbers ∈ R with 0 < 1 ≤ 2 ≤ · · · , lim →∞ = ∞, such that Note that ( ) is dense in , ∈ ( ) and = for all ∈ N. For ∈ R, we define and note that 0 = , 1 = and 2 = ( ). Moreover, ( ) ′ ∼ = − , see Proposition A.1. We consider the non-homogeneous wave equation Then the following result on the existence and uniqueness holds. Its proof is given in Appendix A. We note a simple consequence for the backward wave equation. Towards space-time variational formulations The semi-variational formulation described above cannot be written as a variational formulation in the form of (2.2), since ([0, ]; ) is not a Hilbert space, even if is a Hilbert space of functions : Ω → R in space, e.g., 2 (Ω) or 1 0 (Ω). We need Lebesgue-type spaces for the temporal and spatial variables yielding the notion of Bochner spaces, denoted by := 2 ( ; ) 5 and defined as := 2 ( ; ) := We will derive a space-time variational formulation in Bochner spaces, i.e., we multiply the partial differential equation in (2.1) with test functions in space and time and also integrate w.r.t. both variables. Now, the question remains how to apply integration by parts. One could think of performing integration by parts once w.r.t. all variables. This would yield a variational form in the Bochner space 1 ( ; ). However, we were not able to prove well-posedness in that setting. Hence, we suggest an ultraweak variational form, where all derivatives are put onto the test space by means of integration by parts. We thus define the trial space as U := ℋ = 2 ( ; ) (2.13) and search for an appropriate test space V to guarantee the well-posedness of (2.2). 
Assuming that ( ) = ( ) = 0 for any ∈ V and performing integration by parts twice for both time and space variables, we obtain for ∈ V, where the space V still needs to be defined in such a way that all the assumptions of Theorem 2.1 are satisfied. It turns out that this is not a straightforward task. The duality pairing ⟨·, ·⟩ is defined in (A.1) in the appendix. The Lions-Magenes theory Variational space-time problems for the wave equation within the setting (2.14) have already been investigated in the book [23] by Lions and Magenes. We are going to review some facts from Chapter III, Section 9, pp. 283-299 of [23]. The point of departure is the following adjoint-type problem. Notice that the first statement is proven by deriving energy-type estimates for the uniqueness and a Faedo-Galerkin approximation for the existence. Let us comment on the previous theorem. First, we note that 0 ∈ , 1 ∈ are "too smooth" initial conditions, we aim at (only) 0 ∈ , 1 ∈ ′ , see (2.14). As a consequence: (1) Statement (a) in Thm. 2.4 results in a "too smooth" solution. In fact, we are interested in an ultraweak solution ∈ 2 ( ; ), (a) is "too" much. (2) Even though the stated solution in (b) has the "right" regularity, it is not clear how to associate the functional in (2.14) to the dual space W ′ , i.e., how to interpret the three terms of in (2.14) in the space W ′ . These issues are partly fixed by the following statement. Even though the latter result uses the "right" smoothness of the data and also includes existence and uniqueness, we are not fully satisfied with regard to our goal of a well-posed variational formulation of the wave equation in Hilbert spaces. In fact, the "trial space" ∞ ( ; ) ∩ 1 ∞ ( ; ′ ) is not a Hilbert space and it is at least not straightforward to see how we can base a Petrov-Galerkin approximation on such a trial space. Hence, we follow a different path. An optimally inf-sup stable ultraweak variational form We are going to derive a well-posed ultraweak variational formulation (2.2) of (2.1), where U = 2 ( ; ) and (·, ·), (·) are defined by (2.14). To this end, we will follow the framework presented in [11]. This approach is also called the method of transposition and already goes back to [23], see also e.g., [2,8,24] for the corresponding finite element error analysis. For the presentation we will need the semi-variational formulation described above. Let us restrict ourselves to = −∆ acting on a convex domain Ω ⊂ R and supplemented by homogeneous Dirichlet boundary conditions. This means that = 2 (Ω), = 1 0 (Ω) and ( ) = 2 (Ω) ∩ 1 0 (Ω). However, we stress the fact that most of what is said here can be also extended to other elliptic operators. Then, the starting point is the operator equation in the classical form, i.e., i.e., ∘ = −∆ is also to be understood in the classical sense. Next, denote the classical domain of ∘ by ( ∘ ), where initial and boundary conditions are also imposed in ( ∘ ), i.e., The range ℛ( ∘ ) in the classical sense then reads ℛ( ∘ ) = (︀ Ω )︀ . As a next step, we determine the formal adjoint * ∘ of ∘ . Since the operator * ∘ coincides with ∘ while acting on the space of functions with homogeneous terminal conditions ( ) =˙( ) = 0 instead of initial conditions. This means that ℛ( Following [11], we need to verify the following conditions In order to prove ( * 1), first note that Let us denote the continuous extension of * ∘ from ( * ∘ ) to 2 also by * ∘ . 
Corollary 2.3 implies that this continuous extension * ∘ is an isomorphism from 2 onto ([0, ]; ) (here we need the semi-variational theory). This implies that * ∘ is injective on ( * ∘ ), i.e., ( * 1). Now, the properties ( * 1) and ( * 2) ensure that is a norm on ( * ∘ ) = 2 . Then, we set which is a Hilbert space, where * is to be understood as the continuous extension of * ∘ from 2 to V 7 . Now, we are ready to prove our first main result. Theorem 2.6. Let ∈ 2 ( ; ′ ), 0 ∈ and 1 ∈ ′ . Moreover, let V, (·, ·), and (·) be defined as in (2.19) and (2.14), respectively. Then, the variational problem admits a unique solution * ∈ U. In particular, Proof. We are going to show the conditions (C.1)-(C.3) of Theorem 2.1 above. Remark 2.7. The essence of the above proof is the fact that U and V are related as U = * (V) and noting that * coincides with , while acts on functions with homogeneous initial conditions whereas * on functions with homogeneous terminal conditions. Petrov-Galerkin discretization We determine a numerical approximation to the solution of a variational problem of the general form (2.2). To this end, one chooses finite-dimensional trial and test spaces, U ⊂ U, V ⊂ V, respectively, where is a discretization parameter to be explained later. For convenience, we assume that their dimension is equal, i.e., := dim U = dim V . The Petrov-Galerkin method then reads As opposed to the coercive case, the well-posedness of (3.1) is not inherited from that of (2.20). In fact, in order to ensure uniform stability (i.e., stability independent of the discretization parameter ), the spaces U and V need to be appropriately chosen in the sense that the discrete inf-sup (or LBB -Ladyshenskaja-Babuška-Brezzi) condition holds, i.e., there exists a ∘ > 0 such that where the crucial point is that ∘ is independent of . The size of ∘ is also relevant for the error analysis, since the Xu-Zikatanov lemma [35] yields a best approximation result for the "exact" solution * of (2.20) and the "discrete" solution * of (3.1). This is also the key for an optimal error/residual relation, which is important for a posteriori error analysis (also within the reduced basis method). The key idea, as already stated earlier, is to first choose sufficiently smooth test functions, namely 2 in space and time. This can be done, e.g., by choosing at least quadratic splines. Then, the trial functions are the image of the test functions under the adjoint wave operator. A stable Petrov-Galerkin space-time discretization In order to use a straightforward finite element discretization, we use tensor product subspaces U ⊂ U and V ⊂ V with a possibly large inf-sup lower bound ∘ in (3.2). Constructing such a stable pair of trial and test spaces is again a nontrivial task, not only for the wave equation. It is a common approach to choose some trial approximation space U (e.g., by splines) and then (try to) construct an appropriate according test space V in such a way that (3.2) is satisfied. This can be done, e.g., by computing the supremizers for all basis functions in U and then define V as the linear span of these supremizers. However, this would amount to solve the original problem times, which is way too costly. We mention that this approach indeed works within the discontinuous Galerkin (dG) method, see, e.g., [10,12,16]. We will follow a different path, also used in [9] for transport problems. We first construct a test space V by a standard approach and then define a stable trial space U in a second step. 
This implies that the trial functions are no longer "simple" splines but they arise from the application of the adjoint operator * (which is here the same as the primal one except for initial/terminal conditions) to the test basis functions. Finite elements in time. We start with the temporal discretization. We choose some integer > 1 and set ∆ := / . This results in a temporal "triangulation" , i.e., 1 , 2 , can be formed by using 0 = 0 as double and triple node, respectively. Thus, we get a discretization in 2 { } ( ) of dimension . We show an example for = 1 and ∆ = 1 4 (i.e., = 4) in Figure 1, the test functions in the center, optimal trial functions on the right. Since * is an isomorphism of V onto 2 ( ; ), the functions are in fact linearly independent. An example of a single trial function is shown in Figure 2. Proof. Let 0 ̸ = ∈ U ⊂ 2 ( ; ). Then, since U = * (V ) there exists a unique ∈ V such that * = . Hence On the other hand, by the Cauchy-Schwarz inequality, we have which proves the claim. Optimal ultraweak discretization of ordinary differential equations For the understanding of our subsequent numerical investigations, it is worth considering the univariate case, i.e., ordinary differential equations (ODEs) of the form with either boundary or second order initial conditions, namely We did experiments for a whole variety of problems admitting solutions of different smoothness both for the boundary value ((3.7), (3.8a)) and the initial value problem ((3.7), (3.8b)). The differences were negligible, so that we only report the results for the initial value problem (3.8b). We investigate the 2 -error and the condition number of the stiffness matrix over the degrees of freedom (d.o.f.) for different type of discretizations, namely -1 * /3: quadratic spline test functions and inf-sup-optimal trial functions as image of the test functions under the adjoint operator; ansatz / test : splines of order ansatz for the trial and of order test for the test functions (here 1/3 and 2/4). We obtain "standard" spline spaces for the trial space, not related to the test space through the image of the adjoint operator -and thus not necessarily inf-sup optimal. The results are shown in Figure 3. The errors for 1 * /3 and 1/3 are the same, so that the blue line is not visible (in fact, the spanned spaces coincide with different bases). We obtain linear convergence for these cases and quadratic order for 2/4. Concerning the condition numbers, we see that the inf-sup optimal case in fact gives rise to significantly larger condition numbers than the "standard" ones. It is worth mentioning that we got ≡ 1 in all cases. This means in particular that the ansatz spaces generated by the inf-sup-optimal setting 1 * /3 are identical with those for the 1/3 case. After observing this numerically, we have also proven this observation. However, we stress the fact that this is a pure univariate fact, i.e., for the ODE. It is no longer true in the PDE case as we shall also see below. The linear system To derive the stiffness matrix, we first use arbitrary spaces induced by . We stress that B B B is symmetric and positive definite for = −∆. Finally, let us now detail the right-hand side. Recall from (2.14), that ( ) = ( , ) ℋ + ⟨ 1 , (0)⟩ − ( 0 ,˙(0)) . Hence, Using appropriate quadrature formulae results in a numerical approximation, which we will again denote by . 
Then, solving the linear system B B B = yields the expansion coefficients of the desired approximation ∈ U as follows: Let = ( ) =1,..., , = ( , ), then in the general case, and for the special one, i.e., = * ( ), Stability vs. conditioning The (discrete) inf-sup constant refers to the stability of the discrete system, being included in the error/residual relation where the residual ∈ V ′ is defined as usual by ( ) := ( ) − ( * , ), ∈ V. Hence, is a measure for the stability; its value is the minimal generalized eigenvalue of a generalized eigenvalue problem. This has no effect on the condition number (B B B ), which instead governs the accuracy of direct solvers and convergence of iterative methods in the symmetric case. Conditioning of the matrices We report on the condition numbers of the matrices involved in (4.1) and (4.2). In Figure 4, we see the asymptotic behavior of the different matrices. Most matrices show a "normal" scaling w.r.t. the order of the differential operator. However, there are two components, namelyM Δ in the general case and N Δ in the inf-sup-optimal case, which show a very poor scaling as the mesh size tends to zero (here indicated by ℎ max but used for both ∆ and ℎ). This effect comes from the initial condition, namely the first column of the matrices. On the other hand, (B B B ) scales like a stiffness matrix of a 4th order problem. Since B B B is a sum of tensor products involving some ill-conditioned components, we need a structure-aware preconditioning. We observed the spectral equivalence for the spatial matrices also in our numerical experiments. However, we saw that this is not true for the temporal matrices in the sense that Δ and ⊤ Δ −1 Δ Δ are not spectrally equivalent. Solution of the algebraic linear system To derive preconditioning strategies and the new projection method, we rewrite the linear system B B B = as a linear matrix equation, so as to exploit the structure of the Kronecker problem. Let = vec( ) be the operator stacking the columns of one after the other, then it holds that ( ⊗ ) = vec( ⊤ ) for given matrices , , and of conforming dimensions. Hence, the vector system is written as where = vec( ) and the symmetry of some of the matrices has been exploited. In the following we describe two distinct approaches: First, we recall the matrix-oriented conjugate gradient method, preconditioned by two different operator-aware strategies. Then we discuss a procedure that directly deals with (5.1). Preconditioned conjugate gradients Since B B B is symmetric and positive definite, the preconditioned conjugate gradient (PCG) method can be applied directly to (5.1), yielding a matrix-oriented implementation of PCG, see Algorithm 1. Here tr( ) denotes the trace of the square matrix . In exact precision arithmetic, this formulation, gives the same iterates as the standard vector form, while exploiting matrix-matrix computations [22]. This can easily be seen by exploiting Sylvester operator preconditioning. A natural preconditioning strategy consists of taking the leading part of the coefficient matrix, in terms of order of the differential operators. Hence, setting P = Δ ⊗ ℎ + Δ ⊗ ℎ , we have (see also [19]) with +1 = vec( +1 ) and +1 = vec( +1 ). Applying −1 corresponds to solving the generalized Sylvester For small size problems in space, this can be carried out by means of the Bartels-Stewart method [7], which entails the computation of two Schur decompositions, performed before the PCG iteration is started. 
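As a concrete illustration of this preconditioning step, the following NumPy/SciPy sketch reduces one application of the preconditioner to a standard Sylvester solve. The matrix names (A_h, M_h in space; A_t, M_t in time) and the two-term structure of the preconditioner are assumptions made for the illustration, the mass-type matrices are assumed invertible, and scipy.linalg.solve_sylvester plays the role of the Bartels–Stewart solver; the actual implementation works on the generalized equation directly and reuses the Schur decompositions across PCG iterations.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def apply_sylvester_preconditioner(A_h, M_h, A_t, M_t, R):
    """Solve  M_h Z A_t^T + A_h Z M_t^T = R  for Z (size n_h x n_t).

    This is the matrix form of P z = r with P = A_t kron M_h + M_t kron A_h
    (column-wise vectorisation), reduced here to the standard Sylvester
    equation  (M_h^{-1} A_h) Z + Z (A_t^T M_t^{-T}) = M_h^{-1} R M_t^{-T}.
    Illustration only: M_h and M_t are assumed invertible and dense.
    """
    A = np.linalg.solve(M_h, A_h)                           # M_h^{-1} A_h
    B = np.linalg.solve(M_t, A_t).T                         # A_t^T M_t^{-T}
    Q = np.linalg.solve(M_h, np.linalg.solve(M_t, R.T).T)   # M_h^{-1} R M_t^{-T}
    return solve_sylvester(A, B, Q)                         # Bartels-Stewart inside SciPy
```

In the fine-grid regime discussed next, the same operation is instead approximated by the low-rank rational Krylov solver after truncating the right-hand side.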
For fine discretizations in space, iterative procedures need to be used. For these purposes, we use a Galerkin approach based on the rational Krylov subspace [14], only performed on the spatial matrices; see [31] for a general discussion. A key issue is that this class of iterative methods requires the righthand side to be low rank; we deliberately set the rank to be at most twenty. Hence, the Sylvester solver is applied after a rank truncation of +1 , which thus becomes part of the preconditioning application. To derive a preconditioner that takes full account of the coefficient matrix we employ the operator K K K ⊤ M M M −1 K K K in Section 4.2. Thanks to the spectral equivalence in Proposition 4.1, PCG applied to the resulting preconditioned operator appears to be optimal, in the sense that the number of iterations to reach the required accuracy is independent of the spatial mesh size; see Table 2. In vector form this preconditioner is applied as +1 = (︀ K K K ⊤ M M M −1 K K K )︀ −1 +1 . However, this operation can be performed without explicitly using the Kronecker form of the involved matrices, with significant computational and memory savings. We observe that Moreover, due to the transposition properties of the Kronecker product, We next observe that the equation̂︀ K K K ⊤ = +1 can be written as the following Sylvester matrix equation Δ . The overall computational cost of this operation depends on the cost of solving the two matrix equations. For small dimensions in space, once again a Schur-decomposition based method can be used [7]; we recall here that thanks to the discretization employed, we do not expect to have large dimensions in time, as matrices of size at most (100) arise. Also in this case, for fine discretizations in space we use an iterative method (Galerkin) based on the rational Krylov subspace [14], only performed on the spatial matrices, with the truncation of the corresponding right-hand side, +1 and , respectively, so as to have at most rank equal to twenty. Allowing a larger rank did not seem to improve the effectiveness of the preconditioner. Several implementation enhancements can be developed to make the action of the preconditioner more efficient, since most operations are repeated at each PCG iteration with the same matrices. Galerkin projection An alternative to PCG consists of attacking the original multi-term matrix equation directly. Thanks to the symmetry of ℎ we rewrite the matrix equation (5.1) as with of low rank, that is = 1 ⊤ 2 . Note that this is an assumption on the data. In particular, we assume that the right-hand side ( ) in (2.14) can be discretized in a way such that the matrix form of has low rank. This happens for instance when is a separable function of and , or it can be well approximated by a separable function; other scenarios can also lead to a low-rank . Consider two appropriately selected vector spaces , of dimensions much lower than ℎ , , respectively, and let , be the matrices whose orthonormal columns span the two corresponding spaces. We look for a low rank approximation of as = ⊤ . To determine we impose an orthogonality (Galerkin) condition on the residual with respect to the generated space pair ( , ). Using the matrix Euclidean inner product, this corresponds to imposing that ⊤ = 0. Substituting and into this matrix equation, we obtain the following reduced matrix equation, of the same type as (5.4) but of much smaller size, The small dimensional matrix is thus obtained by solving the Kronecker form of this equation 9 . 
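A compact sketch of this projection step is given below, under the assumption that the multi-term equation has the generic form Σ_k A_k X B_k^T = F_1 F_2^T; the matrix names, the number of terms, and the bases U, V (standing in for the rational Krylov bases constructed next) are placeholders for this illustration, and the small projected system is solved in its Kronecker form exactly as described above.

```python
import numpy as np

def galerkin_solve(As, Bs, F1, F2, U, V):
    """Galerkin projection for the multi-term matrix equation
        sum_k  As[k] @ X @ Bs[k].T  =  F1 @ F2.T,
    seeking a low-rank approximation X ~ U @ Y @ V.T, where U and V have
    orthonormal columns. Imposing U.T @ residual @ V = 0 gives the small system
        sum_k (U.T As[k] U) Y (V.T Bs[k] V).T = (U.T F1)(F2.T V),
    solved here via vectorisation (column-major, so vec(A Y B^T) = (B kron A) vec(Y)).
    """
    m, l = U.shape[1], V.shape[1]
    K = np.zeros((m * l, m * l))
    for A, B in zip(As, Bs):
        K += np.kron(V.T @ B @ V, U.T @ A @ U)   # projected Kronecker term
    rhs = (U.T @ F1) @ (F2.T @ V)                # small right-hand side
    Y = np.linalg.solve(K, rhs.flatten(order="F")).reshape((m, l), order="F")
    return U @ Y @ V.T                           # low-rank approximation of X
```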
The described Galerkin reduction strategy has been thorough exploited and analyzed for Sylvester equations, and more recently successfully applied to multi-term equations, see, e.g., [28]. The key problem-dependent ingredient is the choice of the spaces , , so that they well represent spectral information of the "left-hand" and "right-hand" matrices in (5.4). A well established choice is (a combination of) rational Krylov subspaces [31]. More precisely, for the spatial approximation we generate the growing space range ( ) aŝ︀ where is the th column of , so that +1 is obtained by orthogonalizing the new columns inserted in︀ +1 . The matrix̂︀ +1 grows at most by two vectors at a time. For each , the parameter can be chosen either a priori or dynamically, with the same sign as the spectrum of ℎ ( ℎ ). Here is cheaply determined using the adaptive strategy in [14]. Since ℎ represents an operator of the second order, the value √ resulted to be appropriate; a specific computation of the parameter associated with ℎ can also be included, at low cost. Analogously,︁ where is the th column of , and +1 is obtained by orthogonalizing the new columns inserted in︁ +1 . The choice of ℓ > 0 is made as for . Remark 5.1. This approach yields the vector approximation = ( ⊗ ) , with = vec( ) that is, the approximation space range ( ⊗ ) is more structured than that generated by PCG applied to . Experimental evidence shows that this structure-aware space requires significantly smaller dimension to achieve similar accuracy. This is theoretically clear in the Sylvester equation case [31], while it is an open problem for the multi-term linear equation setting. Remark 5.2. For fine space discretizations, the most expensive step of the Galerkin projection is the solution of the linear systems with ( ℎ + ℎ ) and Depending on the size and sparsity, these systems can be solved by either a sparse direct method or by an iterative procedure; see [31] and references therein. If denotes the current approximate solution computed at iteration , Algorithm 1 and the Galerkin method are stopped as soon as (i) ℰ < 10 −5 , where the backward error ℰ is defined as and is the residual matrix defined in (5.5), and (ii) ‖ ‖ /‖ ‖ < 10 −2 for the relative residual norm. For the Galerkin approach the computation of ℰ simplifies thanks to the low-rank format of the involved quantities (for instance, does not need to be explicitly formed to compute its norm). Moreover, the linear systems in the rational Krylov subspace basis construction are solved by the vector PCG method with a tolerance = 10 −8 ; see Remark 5.2. We compared the space-time method with the classical Crank-Nicolson time stepping scheme, in terms of approximation accuracy and CPU time. The ℎ × ℎ linear systems involved in the time marching scheme are solved by means of the vector PCG method with tolerance = 10 −6 . The time stepping scheme is also used to compute the reference solutions. To this end, we chose 1024 time steps and 64 unknowns in every space dimension, resulting in 2.68 · 10 8 degrees of freedom. For the error calculation, we evaluated the solutions on a grid of 64 points in every dimension and approximated the 2 error through the 1.67 · 10 7 query points. The code 10 is run in Matlab and the B-spline implementation is based on [25] 11 . To explore the potential of the new ultraweak method on low-regularity solutions, we only concentrate on experiments with lower regularity from the CFL-stability condition. 
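For reference, a minimal sketch of the Crank–Nicolson time-marching baseline used in the comparison is given below, written for a generic semi-discrete wave equation M u'' + A u = f(t). The matrix and function names are placeholders; the implementation described above solves the n_h × n_h systems with a vector PCG to tolerance 10^-6, which is replaced here by a reusable sparse LU factorization for brevity.

```python
import numpy as np
import scipy.sparse.linalg as spla

def crank_nicolson_wave(M, A, f, u0, v0, T, n_steps):
    """Crank-Nicolson (trapezoidal) scheme for  M u'' + A u = f(t),
    written as the first-order system u' = v, M v' = f - A u.
    M, A: sparse n_h x n_h matrices; f: callable t -> load vector."""
    dt = T / n_steps
    u, v = u0.copy(), v0.copy()
    lhs = spla.splu((M + 0.25 * dt**2 * A).tocsc())   # factor once, reuse every step
    for k in range(n_steps):
        t0, t1 = k * dt, (k + 1) * dt
        rhs = (M @ v - dt * (A @ u) - 0.25 * dt**2 * (A @ v)
               + 0.5 * dt * (f(t0) + f(t1)))
        v_new = lhs.solve(rhs)
        u = u + 0.5 * dt * (v + v_new)                # trapezoidal update for u
        v = v_new
    return u, v
```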
On the other hand, the rate of convergence of the time-stepping method is clearly better than the low-order space-time method using discontinuous ansatz functions. In order to get a convergence order of the space-time method comparable to the rate of the Crank-Nicolson scheme, we would at least need to use quartic test functions. Case 2: Discontinuous solution For the case of a discontinuous solution, our results are shown in Figure 6. The number of iterations for the PCG and the Galerkin methods behave similar as in case 1, so that we do not monitor all numbers. Again, we observe that the performance of the ultraweak space-time method is basically independent of the wave speed. Moreover, due to the fact that the solution is discontinuous, the rate of convergence of the time stepping scheme is no longer optimal. As a consequence, the ultraweak space-time method outperforms the time stepping approach w.r.t. accuracy and efficiency. The benefit is even larger for increasing wave speed numbers. Conclusions Our theoretical results and numerical experience show that the proposed ultraweak variational space-time method, when equipped with appropriate linear algebra solvers, is significantly more accurate and efficient than the Crank-Nicolson scheme on problems with low regularity and high wave speed. Appendix A. Proof of Theorem 2.2 We collect the proof of the well-posedness for the semi-variational setting in Section 2.2. Even though the proofs are based upon rather classical spectral theory, we could not find them in the desired form in the literature. is an isometric isomorphism from − to ( ) ′ , i.e., ( ) ′ ∼ = − . Proof. First, note that is a Hilbert space with the inner product ( , ) := ∑︀ ∞ We are now in the position to prove Theorem 2.2 for the wave equation with inhomogeneous right-hand side.
7,313.6
2022-04-11T00:00:00.000
[ "Mathematics", "Physics" ]
Mitogen-activated Protein Kinase (MAPK)-regulated Interactions between Osterix and Runx2 Are Critical for the Transcriptional Osteogenic Program* Background: Osterix and Runx2 are master genes that transcriptionally promote osteoblast differentiation. Results: Osterix and Runx2 cooperate to induce osteogenic genes by binding to promoters and interacting with each other. Conclusion: Osterix and Runx2 exhibit cooperation, subject to further regulation by MAPK signals, during osteogenesis. Significance: A network of interactions between transcription factors provides a circuit that drives the osteoblast differentiation program. The transcription factors Runx2 and Osx (Osterix) are required for osteoblast differentiation and bone formation. Runx2 expression occurs at early stages of osteochondroprogenitor determination, followed by Osx induction during osteoblast maturation. We demonstrate that coexpression of Osx and Runx2 leads to cooperative induction of expression of the osteogenic genes Col1a1, Fmod, and Ibsp. Functional interaction of Osx and Runx2 in the regulation of these promoters is mediated by enhancer regions with adjacent Sp1 and Runx2 DNA-binding sites. These enhancers allow formation of a cooperative transcriptional complex, mediated by the binding of Osx and Runx2 to their specific DNA promoter sequences and by the protein-protein interactions between them. We also identified the domains involved in the interaction between Osx and Runx2. These regions contain the amino acids in Osx and Runx2 known to be phosphorylated by p38 and ERK MAPKs. Inhibition of p38 and ERK kinase activities or mutation of their known phosphorylation sites in Osx or Runx2 strongly disrupts their physical interaction and cooperative transcriptional effects. Altogether, our results provide a molecular description of a mechanism for Osx and Runx2 transcriptional cooperation that is subject to further regulation by MAPK-activating signals during osteogenesis. Bone development and remodeling depend on the activity of the osteoblasts that derive from condensations of mesenchymal stem cells. It is well known that osteochondroprogenitor maturation and the later conversion of preosteoblasts to mature osteoblasts are controlled by a complex network of transcription factors activated by specific osteogenic signals. Among these transcription factors, Runx2 and Osx (Osterix) play a crit-ical role in osteogenesis (1)(2)(3). They are considered master osteogenic factors because their null mice do not form mature osteoblasts (4,5). In Osx-null mice, bone calcification is prevented, even though Runx2 is expressed, suggesting that Osx acts downstream of Runx2 during bone development (5). Moreover, in adult organisms, osteoblast action is still required because the mammalian skeleton undergoes continuous turnover throughout the lifetime. In vivo studies have demonstrated that Runx2 and Osx are mandatory for osteoblast maturation as well as bone formation during the adult stage (4,6,7). In addition, several Runx2 and Osx mutations or SNPs are related to bone illnesses such as osteoporosis, osteogenesis imperfecta, and cleidocranial dysplasia (8 -12). Several studies have highlighted the role of Runx2 and Osx in osteoblast function at the molecular level. It has been demonstrated that expression of Osx in vivo requires Runx2, although osteogenic signals are still able to stimulate Osx expression in Runx2-deficient cells (13)(14)(15)(16)(17). 
Runx2 regulates the expression of numerous osteoblastic genes such as Osx, Alpl (alkaline phosphatase), Col1a1 (collagen type I), Spp1 (osteopontin), Ibsp (bone sialoprotein), Mmp13 (matrix metalloproteinase 13), and Bglap (osteocalcin) (4,18,19). Most of these gene promoters are also regulated by Osx (5, 19 -22), and in fact, Osx is able to direct its own expression (17). Promoters of several osteoblast-specific genes contain both Runx2-binding sites (TGTGGT) and Sp1 boxes (which are bound by Osx). Thus, it is plausible that Runx2 and Osx work in a collaborative manner to activate the osteoblast genetic program and produce a bonespecific matrix. This hypothesis is supported by the described interaction between Runx2 and Osx in the transcriptional regulation of Mmp13 and Col1a1 genes (19,21). Conversely, in the regulation of Nell-1 expression, Osx and Runx2 seem to play opposite roles, as the former represses the expression of this gene, and the latter activates it (23). In the control of downstream events, the master function of Runx2 has also been shown to be tightly regulated by interaction with cofactors. For instance, interaction with Stat1 inhibits its nuclear localization, and interaction with Twist1, Nrf2, or COUP-TFII (chicken ovalbumin upstream promoter transcrip-tion factor II) blocks Runx2 DNA-binding ability (20,24,25). Positive coactivation of Runx2 has been described for Smad, TAZ, Dlx5, or Gli family members (26 -30). In turn, Osx also collaborates with other transcription factors and cofactors such as Sp1, NFATc1, and NO66, which regulate its activity (1,22,31). NFATc1 forms a complex with Osx and activates Col1a1 promoter activity, but it does not activate Runx2-dependent transcription (31). These studies evidence a complex cross-talk between these transcription factors and the transcriptional machinery but also highlight that our knowledge of their regulatory mechanisms is limited. Recent work has expanded our understanding of the role of p38 and ERK MAPKs in the control of osteogenesis and, in particular, their regulation of Runx2 and Osx transcriptional activity (32). Induction of Osx expression requires the activation of Dlx5 through p38-mediated phosphorylation (16). Furthermore, it has been shown that Osx itself is a substrate for p38 (33) and ERK MAPK (34,35), which increases recruitment of transcriptional coactivators (33). Similarly, Runx2 is strongly regulated through direct p38 and ERK MAPK phosphorylation. Phosphorylation of Runx2 at multiple sites leads to increased transcriptional activity. Thus, a regulatory network exists in which p38 and ERK MAPK phosphorylation is involved in the induction and control of Runx2 and Osx transcriptional activity. Here, we report functional cooperation between Osx and Runx2 modulating the expression of osteoblast genes Col1a1, Fmod (fibromodulin), and Ibsp, which are involved in the formation of a mature bone matrix. Induction of these genes is mediated through enhancer regions encompassing nearby Sp1 sites and Runx2 DNA-binding sites. Formation of a cooperative complex is mediated through DNA binding of Runx2 and Osx to their cognate sequences as well as protein-protein interactions between them. Moreover, we demonstrate that their phosphorylation by p38 and/or ERK MAPK at specific sites is required for efficient interaction and cooperation. Luciferase Reporter Assays-Saos-2 or C2C12 cells were cultured in 6-well plates and transfected for 8 h with Lipofectamine LTX with the indicated plasmids. 
The transfection efficiency was assessed by GFP expression. Luciferase activities were quantified at 48 h using the Luciferase assay system (Promega) and normalized using the ␤-Galactosidase Detection Kit II (Clontech). GST Pulldown Assays-The fusion proteins GST-Osx and GST-Runx2 and their derivatives were produced in Escherichia coli BL21 and purified by binding to glutathione-Sepharose beads. For in vitro binding assays, cells expressing Osx and/or Runx2 were washed twice with cold PBS and lysed with 0.3% CHAPS, 50 mM Tris (pH 7.5), 150 mM NaCl, and 10% glycerol supplemented with protease and phosphatase inhibitors at 4°C for 15 min. Lysates were collected and centrifuged at 22,000 ϫ g for 5 min to eliminate cellular debris. Supernatants were incubated with the appropriate chimeric protein bound to glutathione-Sepharose beads overnight at 4°C with rotation. The beads were then collected by centrifugation at 300 ϫ g for 1 min and washed five times with wash buffer (0.1% CHAPS, 50 mM Tris (pH 8.0), and 150 mM NaCl). Finally, proteins bound to the beads were subjected to immunoblotting. Immunoprecipitation-Primary osteoblasts or transiently transfected HEK-293 cells were lysed as described above. The supernatant fraction was incubated overnight with 1 g of anti-Osx or anti-Runx2 antibody, followed by incubation with 20 l of Protein A/G-Sepharose (GE Healthcare) for 1 h. Bound proteins were washed four times with lysis buffer and detected by immunoblotting. Western Blot Assay-To detect the presence of proteins in the cell extracts or pulldowns, we performed immunoblotting with anti-Osx, anti-Runx2, or anti-␣-tubulin antibody diluted at 1:1000. Immunoreactive bands were detected with horseradish peroxidase-conjugated secondary antibodies using an ECL kit (Biological Industries). ChIP-Saos-2 and MC3T3-E1 cells were cultured until confluence and fixed with 1% formaldehyde for 10 min, and the reaction was stopped with 0.01 M glycine for 5 min. Cells were lysed and sonicated to obtain 200 -1000-bp fragments. ChIP was carried out using 1 g of the indicated antibody (anti-Osx, anti-Runx2, anti-RNA polymerase II (Upstate), or anti-IgG (Upstate) as a control) and purified with 20 l of Magna ChIP Protein AϩG magnetic beads (Millipore). The complexes were washed once with four different wash buffers and eluted with a solution containing SDS and NaHCO 3 . Reversion of cross-link-ing was carried out by overnight incubation with 0.2 M NaCl at 65°C, followed by treatment with proteinase K and RNase A. The DNA fragments were purified using the QIAquick gel extraction kit (Qiagen) and analyzed by PCR. Immunofluorescence-Saos-2 and transfected C2C12 cells were fixed in 4% paraformaldehyde for 20 min, permeabilized with 0.2% Triton X-100, and blocked with normal goat serum for 1 h. Cells were stained with anti-Osx antibody at 1:150 dilution and anti-Runx2 antibody at 1:100 dilution, followed by SEPTEMBER 26, 2014 • VOLUME 289 • NUMBER 39 goat anti-rabbit IgG conjugated with Alexa Fluor 555 at 1:500 dilution or anti-mouse IgG conjugated with Alexa Fluor 488. Nuclei were stained using a 1:1000 dilution of DRAQ5. Labeling was detected using a Leica TCS SL inverted laser scanning confocal microscope. Physical and Functional Interaction between Osx and Runx2 Quantitative RT-PCR Analysis-Total RNA was isolated from C2C12 and MC3T3-E1 cells using TRIsure reagent (Bioline). 5 g of total RNA was reverse-transcribed using a high capacity cDNA reverse transcription kit (Applied Biosystems). 
Quantitative PCRs were carried out using the ABI Prism 7900 HT Fast real-time PCR system and a TaqMan 5′-nuclease probe method (Applied Biosystems). All transcripts were normalized to Gapdh, and transfection efficiency was assessed by GFP expression. Designed TaqMan assays (Applied Biosystems) were used to quantify gene expression of mouse Col1a1, Fmod, Ibsp, Gapdh, and osteocalcin. Statistical Analysis-Statistical analysis was performed using Student's t test. Quantitative data are presented as means ± S.E. Differences were considered significant at p < 0.05.

Coexpression of Osx and Runx2 Enhances Transcription of Osteogenic Genes-The expression of many osteogenic markers is modulated by Osx and/or Runx2. To determine their relative relevance, we transfected C2C12 and MC3T3-E1 cell lines with Osx and/or Runx2 expression vectors. As reported previously (21, 33), quantitative RT-PCR assays demonstrated that overexpression of Osx or Runx2 in C2C12 cells can up-regulate the endogenous expression of collagen type 1 (Col1a1), fibromodulin (Fmod), and bone sialoprotein (Ibsp) (Fig. 1A). This effect was also observed in MC3T3-E1 preosteoblasts, where Col1a1, Fmod, and Ibsp expression was enhanced when Osx was overexpressed (Fig. 1B). More importantly, in both cell lines, coexpression of Osx and Runx2 had a strong additive effect in the expression of these osteogenic genes. To further analyze the mechanism of this cooperation, we focused on the presence of Runx2- or Osx-binding sites in the promoter sequences of these genes. Homology analysis of Ibsp, Fmod, and Col1a1 gene promoters revealed regions with a high degree of similarity among orthologs, which include one or more Runx2 sites in close proximity to Sp1 sites (Fig. 2). For instance, the study of a distal enhancer of the Ibsp gene revealed the presence of two Runx2-binding sites close to an Sp1 site bound by Osx (Fig. 2A) (33). We evaluated the Ibsp promoter activity in C2C12 cells and in the osteosarcoma cell line Saos-2 using a luciferase reporter driven by the Ibsp enhancer (33). Although expression of Runx2 had minor effects on promoter activity in C2C12 cells, we observed a 20-fold induction of Ibsp-plux activity in response to Osx and >40-fold induction when both Osx and Runx2 were coexpressed (Fig. 2A). This cooperative induction of the Ibsp reporter was similar when analyzed in Saos-2 cells. Thus, these results indicate that Osx and Runx2 have cooperative effects on specific gene expression. Next, we assessed the importance of the specific cis-responsive sequences in the cooperative effects between Runx2 and Osx. The OC-p147-lux reporter is driven by the proximal Bglap promoter and contains two Runx2-binding sites and two Sp1 sites (Fig. 2B, left panel). As described previously (36), overexpression of Runx2 induced OC-p147-lux reporter activity (Fig. 2B).

[Figure 3 legend (partial): Basal activities refer to those of the −2483pCol1a1-lux reporter; luciferase activity was measured and normalized against β-galactosidase activity. B, C2C12 or Saos-2 cells were cotransfected with Osx and/or Runx2 constructs and the indicated Ibsp-lux reporters; schemes of the mutated sites introduced in the Ibsp-lux reporter vector are shown. C, C2C12 or Saos-2 cells were cotransfected with Osx and/or Runx2 constructs and the indicated OC-p147-lux reporters; schemes of the deleted and mutated sites in the OC-p147-lux reporter vector are shown. Relative luciferase activities are expressed as the mean ± S.E. in triplicate of four independent experiments. #, p < 0.05; ** and ##, p < 0.01; *** and ###, p < 0.005 using Student's t test.]

It has also been reported that although Osx binds to these Sp1 sites, it is unable to induce significant transcriptional activation (22). Our data showed that Osx expression conferred additive effects on Runx2 activation, which were more evident in the Saos-2 osteoblastic cells than in the C2C12 mesenchymal cell line (Fig. 2B). To further test the relevance of specific binding sites, we analyzed the activity of the Sp1-plux reporter, an artificial promoter containing a unique Sp1 site (Fig. 2B, left panel). The reporter was activated 2-fold by Osx expression. However, coexpression of Runx2 failed to induce significant additive transcriptional effects (Fig. 2B). We also analyzed a pCol1a1-lux reporter, which contains functional Runx2-binding boxes and Sp1 sites (Fig. 3A, lower panel) (18, 37). The −2483pCol1a1-lux reporter was activated by Osx and Runx2, and the coexpression of both factors also notably increased its induction. Furthermore, the −2483pCol1a1Δ1-lux reporter, devoid of Runx2-binding sites, lost cooperativity between the two factors, as shown above with the Sp1-plux reporter. In the −2483pCol1a1Δ2-lux reporter, the proximal region is intact, and although it still maintains a single Runx2-binding site, we did not observe the additive effects, suggesting that this proximal Runx2 site may be less important for these effects. This result is in agreement with previous reports that Runx2 bound only weakly and did not transactivate the Col1a1 promoter from this −372 proximal Runx2 site (18). Moreover, we generated a set of Ibsp and OC-p147 reporter constructs with mutations at specific Runx2 and Sp1 sites. Mutation of the distal Runx2 site in the Ibsp enhancer did not abolish the additive effects of Runx2 and Osx. However, mutation of either the Sp1 or proximal Runx2 sites suppressed activation by Runx2 and/or Osx (Fig. 3B). Similarly, deletion of the most distal Runx2 site in the osteocalcin promoter completely eliminated transcriptional activation by Runx2 or Osx (Fig. 3C). Altogether, these results suggest that gene promoters activated cooperatively by Osx and Runx2 require the presence of adjacent Runx2- and Osx-binding sites. To confirm that functional interaction between Osx and Runx2 occurs in vivo, ChIP was performed in Saos-2 and MC3T3-E1 cells. As shown in Fig. 4A, both Osx and Runx2 bound to the responsive regions of the osteogenic genes Fmod, Ibsp, and Col1a1. Binding of these factors also correlated with recruitment of RNA polymerase II. Osx and Runx2 Physically Interact-The presence of Runx2 sites near Osx sites within the same promoter and the functional interdependence between them raised the possibility that both factors might associate through physical interaction. To evaluate this hypothesis, we carried out GST pulldown analyses. HEK-293T cells were transiently transfected with Osx or Runx2 expression vectors and processed with different lysis buffers. Extracts lysed with isotonic buffers containing 0.5% Triton X-100, 0.5% Nonidet P-40, or 0.3% CHAPS were tested. We analyzed the ability of Runx2 or Osx to interact with full-length recombinant GST-Osx or GST-Runx2 and truncated forms in vitro.
High affinity interaction was maximally retained with the 0.3% CHAPS lysis buffer (data not shown). Using the same approach, we determined which domains of Osx and Runx2 were involved. After Osx pulldown, we found that Runx2 was able to interact with the truncated forms of Osx. OsxΔ346 precipitated higher amounts of Runx2 (Fig. 4B), so we concluded that Osx interacted mainly through its N-terminal transactivation domain and that the zinc fingers were not involved. In contrast, whereas Runx2 with a carboxyl-terminal deletion to amino acid 361 still bound Osx, recombinant Runx2 in which amino acids 230-521 had been deleted lost most of its capacity to interact with Osx (Fig. 4C). The region 230-361 did not encompass the Runt DNA-binding domain of Runx2 and has also been demonstrated to be involved in interaction with other proteins such as the vitamin D receptor and with the histone acetyltransferases MORF and MOZ (38, 39). Interaction between Osx and Runx2 was also observed in intact cells. Immunoprecipitation of Osx from transiently transfected C2C12 cell extracts also coprecipitated Runx2 (Fig. 5A), further suggesting interaction between these transcription factors in vivo. Regions involved in mutual interaction include the known nuclear localization signals for both Osx and Runx2. Moreover, because it has been demonstrated previously that some Runx2 interactors modify the nuclear or subnuclear localization of Runx2 (40, 41), we analyzed the localization of these two transcription factors by immunofluorescence (Fig. 5B). Expression of either Runx2 or Osx alone displayed a constitutive nuclear localization for both. Coexpression of both factors did not alter their nuclear pattern of localization, suggesting that changes in their nuclear shuttling are not the mechanism involved in their functional interaction. p38 and ERK MAPK Activities Are Necessary for Functional and Physical Interaction between Runx2 and Osx-The activities of ERK and p38 MAPKs have been shown to phosphorylate and increase the transcriptional activities of Runx2 and Osx (6, 32-34, 42). Moreover, the MAPK phosphorylation sites in Osx and Runx2 identified so far are localized within the regions described above as being involved in their physical interaction (33, 42, 43). Therefore, we tested the effect of the phosphorylation state of Osx and Runx2 on their transcriptional cooperation. We coexpressed Osx and Runx2 in C2C12 cells and treated them with the p38α/β inhibitor SB203580 or the ERK1/2 inhibitor U0126. These inhibitors are known to block phosphorylation of either Runx2 or Osx (33, 43). mRNA expression analysis of Col1a1, Fmod, Ibsp, and Bglap demonstrated that inhibition of p38 or ERK signaling resulted in complete abrogation of Osx and Runx2 additive effects in all genes studied (Fig. 6A). We also performed similar studies using the −2483pCol1a1-lux, Ibsp-plux, and OC-p147-lux gene reporters. Luciferase assays showed strong and consistent inhibition of reporter activity upon addition of inhibitors (Fig. 6B). The results indicate that functional cooperation between the two transcription factors may be compromised because their phosphorylation is necessary for complete activity. However, these results did not discern whether phosphorylation hampers the interaction between the transcription factors or whether it is required only for the independent recruitment and function of transcriptional coactivators for each one.
To further examine whether protein-protein interaction ability depends on the phosphorylated state, we carried out a pulldown assay using full-length GST-Osx and GST-Runx2. Assays performed with GST-Osx and lysates from C2C12 cells expressing Runx2 demonstrated the importance of the Runx2 phosphorylation state. The levels of Runx2 bound to Osx were lower in extracts from cells treated with MAPK inhibitors, despite similar levels of expression (Fig. 7A, upper panel). A complementary analysis for the requirement of Osx phosphorylation was also performed. As shown in Fig. 7A (lower panel), the interaction was also lower when extracts from cells treated with MAPK inhibitors were assayed. These data suggest that MAPK phosphorylation of both transcription factors is involved in protein interaction. We investigated whether inhibition of p38 and ERK MAPKs affects localization of endogenous Osx and Runx2 in Saos-2 cells. The addition of SB203580 and U0126 to Saos-2 cells decreased the protein expression levels of both transcription factors. However, they did not impair the nuclear co-localization of endogenous Osx or Runx2 (Fig. 7B). The MAPK requirement for Osx and Runx2 interaction was confirmed in vivo by immunoprecipitation of C2C12 cell extracts expressing Runx2 and Osx and treatment with MAPK inhibitors. Western blot analyses showed a strong decrease in their interaction when p38 or ERK1/2 MAPK activities were restrained. Interestingly, although ERK inhibition alone completely blocked interaction, cells in which p38 MAPK activity was blocked still retained some interaction (Fig. 8A). However, because inhibition of one MAPK usually results in activation of the other, it is too early to make definitive conclusions about differences in p38 and ERK MAPK requirements. We further characterized the interaction between endogenous Runx2 and Osx in primary cultures of mouse calvarial osteoblasts treated with SB203580, U0126, or both inhibitors. As reported previously (16, 21, 42, 44), Runx2 and Osx protein levels were decreased after MAPK inhibition by either SB203580 or U0126 treatment. More importantly, the amount of Runx2 that coprecipitated bound to Osx was significantly decreased by the simultaneous inhibition of both MAPKs (Fig. 8B). Together, these results indicate that both MAPK activities are required for a proper physical and functional interaction between these two transcription factors. MAPK Phosphorylation Sites Are Involved in the Osx-Runx2 Interaction-Previous studies have shown that ERK interacts through a D-domain-docking site and phosphorylates Runx2 at four sites (Ser-43, Ser-301, Ser-319, and Ser-510) (6, 32, 44, 45).

[Figure 6 legend: Runx2 and Osx phosphorylation effects on cooperative transcriptional activity. A, C2C12 cells were cotransfected with Osx and/or Runx2 expression vectors and treated with SB203580 or U0126 for 24 h. Col1a1, Fmod, osteocalcin (Bglap), and Ibsp mRNAs were measured by quantitative RT-PCR and normalized to Gapdh, and relative expression is presented as the mean ± S.E. of four independent experiments. B, C2C12 cells were mock-transfected or cotransfected with Osx and/or Runx2 constructs and the indicated reporters. Cells were treated with SB203580 or U0126 for 24 h. Luciferase activity was measured and normalized against β-galactosidase activity. Relative luciferase activities are expressed as the mean ± S.E. in triplicate of at least three independent experiments. * and #, p < 0.05; **, p < 0.01 (Student's t test).]
Among them, Ser-301 and Ser-319 both contribute to Runx2 function because Ser-to-Ala mutations at these sites greatly reduce its transcriptional activity at specific osteogenic promoters (6). Interestingly, these two sites have also been shown to be phosphorylated by the p38 pathway and are located in the Osx-Runx2 interaction region (Fig. 4) (42). In contrast, Osx is also phosphorylated by p38 at Ser-73 and Ser-77, located in the transactivation domain, which also has a positive effect on its osteogenic activity (21,33). We then sought to ascertain the importance of the specific phosphorylation sites of these proteins in functional cooperation. We analyzed the interaction in vivo by expressing combinations of wild-type Osx and mutant S73A/S77A with wild-type Runx2 and mutant S43A/S282A/ S319A. Immunoprecipitation analysis demonstrated that combinations expressing a phosphorylation-deficient mutant form (either Osx(S73A/S77A) or Runx(S43A/S282A/ S319A)) showed impaired interaction (Fig. 8C). These results prove that the phosphorylation sites targeted by p38 and ERK MAPKs in both Osx and Runx2 are the ones involved in the Osx-Runx2 interaction. DISCUSSION It is physiologically and clinically important to understand the mechanisms of the transcriptional network that drives osteoblastogenesis. In this study, we have shown that the key osteogenic transcription factors Runx2 and Osx cooperate in the induction of genes involved in bone matrix formation. Transcriptional activation of these promoters is mediated through enhancer regions encompassing nearby Sp1 and Runx2 DNA-binding sites. Our study shows that both Runx2 and Osx bind to their responsive regions in DNA and interact with each other through the Osx N-terminal transactivation region and the Runx2 Pro/Ser/Thr-rich activation domain. Therefore, the two transcription factors form a complex at specific promoters that increases expression of osteogenic genes such as Bglap, Col1a1, Fmod, and Ibsp. In addition, we demonstrated that Runx2 and Osx phosphorylation by p38 and/or ERK MAPK at specific sites located on their interaction surfaces is required for an efficient interaction between them. Runx2 is expressed as early as embryonic day 10 in developing mouse embryos, and Osx appears at embryonic day 18.5 (4,5). The Runx2-expressing osteochondroprecursors, prior to Osx expression, remain in the chondrogenic lineage and express high levels of Sox9 (5). Later, cells already expressing Runx2 and Osx differentiate into mature osteoblasts in which Sox9 is no longer expressed (1). Thus, it may be suggested that their sequential expression constitutes a mechanism of osteoblast maturation in which, once expressed, Osx controls further transcription independently of Runx2. For instance, in Osx-null embryos, there is a strong reduction of Col1a1 expression and an almost complete lack of late osteogenic markers, including osteonectin, osteopontin, Ibsp, and Bglap, despite normal expression of Runx2 (1,5,6). It may also be suggested that Runx2 and Osx regulate distinct subsets of osteogenic genes or, alternatively, act as allies to cooperatively promote maximal levels of osteogenic gene expression. Our data point to the latter hypothesis, in which Runx2 and Osx are cofactors in the same complex, up-regulating specific osteogenic target genes when they co-occupy their promoters. The presence of Osx and its association could prevent Runx2 repression by liberating it from factors that prevent Runx2 binding to the DNA. 
[Figure legend (partial): Extracts were incubated with the indicated chimeric proteins bound to glutathione-Sepharose beads. Expression of the constructs and interacting proteins was identified by immunoblotting using anti-Runx2 or anti-Osx antibody. B, Saos-2 cells were incubated in medium without serum and treated with SB203580 and U0126 for 24 h. The subcellular localization of endogenous Runx2 and Osx was analyzed by immunofluorescence with anti-Runx2 or anti-Osx antibody.]

For instance, Twist1, Stat1, or Nrf2 inhibits Runx2 transcriptional activity by docking to the Runt DNA-binding domain of Runx2 (24, 25, 41). In osteochondroprogenitors, it has been proven that expression of Sox9 also down-regulates Runx2 transcriptional activity (46, 47). Conversely, some transcription factors such as Dlx5, Gli, and Smad also interact with Runx2 but increase its transcriptional activity (27-30). In these cases, interactions involve domains other than the Runt DNA-binding domain. Although much less is known about Osx, it has been found that additional factors such as Sp1 and NFATc1 are required for functional activity on the Bglap or Col1a1 promoter (22, 31). It has been documented that Runx2 changes its promoter-binding patterns during osteoblastogenesis (24). This study demonstrates that whereas the recruitment of Runx2 to a cluster of genes involved in general cell functions does not change throughout osteoblast maturation, its binding to osteoblast-specific gene promoters increases as osteoblast differentiation progresses. The Mmp13, Fmod, Ibsp, and Col1a1 gene promoters include one or more Runx2 sites in close proximity (100-200 bp) to Sp1 sites (18, 21, 22, 24, 33, 36). Thus, it is likely that osteogenic genes containing adjacent Runx2 and Osx sites are regulated in a similar fashion. These results are consistent with data reported by Lee and co-workers (34) showing cooperation of Osx and Runx2 in the regulation of osteogenic marker genes during differentiation of adipose stem cells into osteoblasts. Our results show that the region of Runx2 required for Osx interaction is amino acids 230-361. This region does not involve the Runt DNA-binding domain and partially overlaps with the Pro/Ser/Thr-rich activation domain, which is a transactivation domain in Runt-related proteins (48) targeted by proteins such as MOZ, MORF, and the vitamin D receptor (38, 39). The Runt domain is responsible for Runx2 binding to chromatin and, as mentioned above, is targeted by many Runx2 repressors, including Twist1, Elf1, COUP-TFII, and Stat1, which prevent Runx2 from binding to DNA (20, 24, 41, 49). Our results show that Osx associates with Runx2 through its N-terminal region and that the zinc fingers are not required. Therefore, in the Runx2 and Osx cooperative mechanism, both Runx2 and Osx are able to bind to their responsive sequences on the promoters and interact with each other via regulatory regions that lead to stabilization of the transcriptional complex. This physical interaction between Runx2 and Osx was previously suggested in the regulation of Mmp13 (19). Runx2 transcription involves interaction with coactivators such as p300, MOZ, and MORF (39, 50). Osx also associates with other transcription factors and cofactors such as Brg1, p300, and NO66, which regulate its activity (1, 22, 31). Binding and interaction of both Runx2 and Osx may then also potentiate the recruitment of these coactivators and the function of the transcriptional machinery.
ERK and p38 MAPKs are known to be induced by various stimuli in osteoblasts and play an important role in several steps of osteoblast lineage progression in vitro and in vivo (6,42,(51)(52)(53). Their effects have been attributed in part to their ability to phosphorylate Osx and Runx2 (6,33,42,43). MAPK phosphorylation of Osx at Ser-73 and Ser-77 does not change its affinity for binding to the Sp1 sequences analyzed but increases its ability to recruit coactivators (33). Runx2 is also a substrate of phosphorylation by ERK and p38 MAPKs, leading to enhanced transcription and recruitment of transcriptional activators (6,32,42,43). In addition, Runx2 contains a consensus MAPK-docking D-site, which allows competitive binding of ERK and p38 MAPKs (43). Binding to this D-site permits phosphorylation of Runx2 by MAPK and is probably also involved in the phosphorylation of Osx when bound together. More importantly, the phosphorylation sites of these two kinases correspond to the amino acids located in the interaction surfaces of both Runx2 and Osx (Ser-301 and Ser-319 for Runx2 and Ser-73 and Ser-77 for Osx). Accordingly, we demonstrated that the phosphorylation by p38 and/or ERK MAPK at these specific sites is required for an efficient interaction and cooperation. Therefore, in addition to their effects on each transcription factor alone, MAPK phosphorylations may modify the osteogenic activity of Runx2 and Osx, enhancing their ability to interact with each other. Phosphorylation of Runx2 and Osx by p38 and ERK signaling would then constitute an integration point at which extracellular stimuli lead to strong modulation of their transcriptional activity and control the osteoblastic phenotype.
6,896.4
2014-08-13T00:00:00.000
[ "Biology" ]
Discrete Conditional Diffusion for Reranking in Recommendation Reranking plays a crucial role in modern multi-stage recommender systems by rearranging the initial ranking list to model interplay between items. Considering the inherent challenges of reranking such as combinatorial searching space, some previous studies have adopted the evaluator-generator paradigm, with a generator producing feasible sequences and a evaluator selecting the best one based on estimated listwise utility. Inspired by the remarkable success of diffusion generative models, this paper explores the potential of diffusion models for generating high-quality sequences in reranking. However, we argue that it is nontrivial to take diffusion models as the generator in the context of recommendation. Firstly, diffusion models primarily operate in continuous data space, differing from the discrete data space of item permutations. Secondly, the recommendation task is different from conventional generation tasks as the purpose of recommender systems is to fulfill user interests. Lastly, real-life recommender systems require efficiency, posing challenges for the inference of diffusion models. To overcome these challenges, we propose a novel Discrete Conditional Diffusion Reranking (DCDR) framework for recommendation. DCDR extends traditional diffusion models by introducing a discrete forward process with tractable posteriors, which adds noise to item sequences through step-wise discrete operations (e.g., swapping). Additionally, DCDR incorporates a conditional reverse process that generates item sequences conditioned on expected user responses. Extensive offline experiments conducted on public datasets demonstrate that DCDR outperforms state-of-the-art reranking methods. Furthermore, DCDR has been deployed in a real-world video app with over 300 million daily active users, significantly enhancing online recommendation quality. INTRODUCTION Multi-stage recommender systems are widely adopted in online platforms like Youtube, Tiktok, and Kuaishou.As the final stage in recommender system, reranking stage takes top-ranking items as input and generates a reordered sequence of items for recommendation, and thereby directly affects users' experience and satisfaction [16,20]. Different from preliminary stages (e.g., matching, ranking) that produce predictions of a candidate item based on the item itself, the reranking stage further considers the listwise context (cross-item interplay) [31].It is widely acknowledged that whether a user is interested in an item is also determined by other items in the same list [20].Therefore, the key to reranking models is to model the listwise context and produce the optimal sequence. 
Generative models are well-suited for the reranking task considering the exponentially large space of item permutations [7,12].Previous studies have adopted the evaluator-generator paradigm, with a generator to generate feasible permutations and an evaluator to evaluate the listwise utility of each permutation [25].In this paradigm, the capacity of generator is of great importance.Despite the successes of traditional generative models like GANs and VAEs [8,26], their limitations, such as unstable optimization and posterior collapse, hinder their application in the reranking task for recommendation.Recently, diffusion models [9,22] achieve remarkable success in computer vision and other domains.Diffusion models usually involve a forward process that corrupts the input with noises in a step-wise manner; and a reverse process that iteratively generates the original input from the corrupted one with a denoising model.In light of the success of diffusion models, this paper aims to explore the potential of diffusion models for generating high-quality sequences in reranking.However, we find it is nontrivial to take diffusion models as the generator due to the following challenges: • Firstly, most diffusion models [9,17,23] are designed for continuous data domains, but the item permutations in recommender systems are operated in the discrete data space.The inherent discrete nature of item sequences in recommender systems brings challenges to the application of diffusion models.• Secondly, the recommendation task is different from conventional generation tasks as the purpose of recommender systems is to fulfill user interests.The generated sequence is expected to achieve positive user feedback, and hence the diffusion model should be controllable in terms of user feedback.• Thirdly, real-life recommender systems serve a huge number of users and the inference procedure of the reranking model is expected to be efficient.Since the generation process of diffusion models works in a step-wise manner, it poses challenges to the inference efficiency of diffusion models. To tackle the aforementioned challenges, we propose a novel Discrete Conditional Diffusion Reranking (DCDR) framework, which extends traditional diffusion models with a discrete forward process and a conditional reverse process for sequence generation.Specifically, in each step of the forward process, DCDR uses a discrete operation to add noises to the input sequence.We propose two discrete operations including permutation-level operation and token-level operation.In the reverse process, DCDR introduces user feedback into the denoising model as conditions for generation.In each step, the denoising model takes conditions and the noisy sequence as input and estimates the distribution of denoised sequence in the next step.This enables the reverse process to generate sequences with expected feedback during inference.To train the denoising model, we derive the formal objective function by introducing carefully designed sequence encoding and transition matrix for both discrete operations.Moreover, for efficient and robust inference, we propose a series of techniques to enable the deployment of DCDR in real-life recommender systems.We conduct extensive offline and online A/B experiments, the comparison between DCDR and other state-of-the-art reranking methods demonstrates the superiority of DCDR. 
DCDR actually serves as a general framework to leverage diffusion in reranking. The discrete operation and model architecture in DCDR are not exhaustive and can vary according to specific application scenarios. We believe that DCDR will provide valuable insights for future investigations on diffusion-based multi-stage recommender systems. The contributions of this paper can be summarized as follows:
• To the best of our knowledge, this is the first attempt to introduce diffusion models into the reranking stage in real-life multi-stage recommender systems.
• A novel Discrete Conditional Diffusion Reranking (DCDR) framework is presented. We carefully design two discrete operations as the forward process and introduce user feedback as conditions to guide the reverse generation process.
• We provide proper approaches for deploying DCDR in a popular video app, Kuaishou, which serves over 300 million users daily. Online A/B experiments demonstrate the effectiveness and efficiency of DCDR.

RELATED WORK
2.1 Reranking in Recommendation
Reranking in recommendation focuses on rearranging the input item sequence considering correlations between items to achieve optimal user feedback. Therefore, reranking models [19,20,30] take the whole list of items (listwise context) as input and generate a reordered list as output. This is different from ranking models [3,15] in preliminary stages (e.g., matching, ranking) that consider a single candidate item at a time. Existing studies on reranking can be roughly categorized into two lines: the first line of research [1,19,20] focuses on modeling item relations and directly learns a single ranking function which ranks items greedily with the ranking score; the other line of work [7,12] divides the reranking task into two components, sequence generation and sequence evaluation, with a generator producing feasible sequences and an evaluator selecting the best one based on estimated listwise utility. Our work adopts the evaluator-generator paradigm and endeavors to explore the potential of diffusion models as the generator for producing high-quality item sequences, thereby enhancing the performance of reranking models.
2.2 Diffusion Models
Diffusion models [5,9,11,22] have achieved significant success in generation tasks of continuous data domains, such as image synthesis and audio generation. Some studies [2,21] attempt to apply diffusion models to tasks of discrete data domains like text generation. One line of research [2,10] designs corruption processes and reconstruction processes on discrete data. Another line of research [4,13] attempts to apply continuous diffusion models on discrete data domains by adding noise to the embedding spaces of the data or to real-number vector spaces. DiffusionLM [13] is one of the state-of-the-art methods, which generates text sequences with continuous diffusion models. While diffusion models have achieved success, their potential for generating high-quality item sequences in recommendation remains under-explored. Recently, some studies have attempted to apply diffusion models to sequential recommendation [14,27,28], where the focus is to generate the next item based on the user's historical interactions. However, it is important to note that the reranking task addressed in this paper is distinct from typical sequential recommendation. Specifically, reranking aims to generate feasible item permutations rather than focusing on the next item embedding [14] or user vector [27], which poses significant challenges for the application of diffusion models in the reranking stage.

Diffusion Model
Before we go deep into the details of DCDR, we first provide a brief introduction to diffusion models. A typical diffusion model consists of two processes, a forward process and a reverse process. The forward process gradually corrupts the data with Gaussian noise, q(x_t | x_{t-1}) = N(x_t; sqrt(1 - β_t) x_{t-1}, β_t I), where β_t is the scale of the noise added at step t. The reverse process p_θ(x_{t-1} | x_t) = N(x_{t-1}; μ_θ(x_t, t), Σ_θ(x_t, t)) iteratively denoises the data, where μ_θ(x_t, t) and Σ_θ(x_t, t) are modeled with neural networks.
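To make this preliminary concrete, the following small NumPy sketch samples from the standard closed form q(x_t | x_0) = N(sqrt(ᾱ_t) x_0, (1 − ᾱ_t) I) with ᾱ_t = ∏_s (1 − β_s); the noise schedule and dimensions here are arbitrary illustrations, not values used by DCDR.

```python
import numpy as np

# Toy continuous forward process: corrupt x_0 directly to step t using the
# closed-form marginal q(x_t | x_0); betas and dimensions are illustrative only.
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 100)
alphas_bar = np.cumprod(1.0 - betas)

x0 = rng.normal(size=8)          # a "clean" continuous sample
t = 50
x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * rng.normal(size=8)
print(x_t)                       # a corrupted sample drawn from q(x_t | x_0)
```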
Training: The canonical objective function [9] is the variational lower bound:
L_vlb = E_q[ D_KL(q(x_T | x_0) || p(x_T)) + Σ_{t>1} D_KL(q(x_{t-1} | x_t, x_0) || p_θ(x_{t-1} | x_t)) − log p_θ(x_0 | x_1) ] = L_T + Σ_{t>1} L_t + L_0. (2)
Since L_T is a constant, it is usually removed from the loss function. L_0 represents the reconstruction error and L_t represents the denoising error between the denoised data at each step and the corresponding corrupted data in the forward phase.
Inference: During inference, diffusion models start from a noisy sample x_T and draw denoising samples with p_θ(x_{t-1} | x_t) step by step. After T steps, the generation process runs from x_T to x_0, i.e., the final generated sample.

DISCRETE CONDITIONAL DIFFUSION RERANKING FRAMEWORK
In this section, we provide a detailed introduction to DCDR. First, we introduce the overall framework in Section 4.1. Then, we elaborate on each component from Section 4.2 to Section 4.5. The characteristics of DCDR are discussed in Section 4.6.
4.1 Overview of DCDR
The framework of DCDR is illustrated in Fig. 2, which mainly consists of: 1) a discrete forward process, and 2) a conditional reverse process. Specifically, the discrete forward process defines a discrete operation with tractable posteriors to add noise to the input sequence. Here we introduce the permutation-level / token-level operation as an example, while other discrete operations are also feasible to be incorporated. The conditional reverse process contains a denoising model that tries to recover the input from a noisy sequence at each step. Different from traditional diffusion models, we introduce the expected feedback of the original sequence as the condition to generate the last-step sequence, which is more consistent with the recommendation task. During training, DCDR first adds noise to the user impression list with the discrete forward process in a step-wise manner. Then, the denoising model in the reverse process is trained to recover the last-step sequence conditioned on user responses to the original impression list. During inference, an initial ranked item list is fed as input, and we set an expected feedback of each item as the condition. Then, the conditional reverse process is able to generate sequences step by step. To further improve the generation quality, we maintain sequences with top probabilities at each step and introduce an extra sequence evaluator to select the optimal sequence for the final recommendation.
4.2 Discrete Forward Process
The forward process in vanilla diffusion models adds Gaussian noise to continuous signals like images according to a specified schedule. However, it is sub-optimal for sequence generation tasks as explained in the aforementioned sections. Therefore, we propose to use a discrete forward process for sequence generation in the reranking task. Remember that to train a vanilla diffusion model, the variational lower bound in Eq. (2) involves the posteriors q(x_{t-1} | x_t, x_0) in L_t, which can be rewritten by applying Bayes' theorem:
q(x_{t-1} | x_t, x_0) = q(x_t | x_{t-1}) q(x_{t-1} | x_0) / q(x_t | x_0).
In the continuous setting with Gaussian noise, q(x_t | x_{t-1}) and q(x_t | x_0) are easy to compute and the posterior follows a Gaussian distribution [9]. To enable a discrete forward process, we need to design discrete operations that also yield tractable forward process posteriors q(R_{t-1} | R_t, R_0). Here we introduce two examples of the discrete operation, i.e., the permutation-level operation and the token-level operation (shown in Fig. 3). Note that the DCDR framework is not limited to these two operations. Other discrete operations are also feasible to be incorporated in the future.
Permutation-level Operation.
We first propose to encode the item sequence in the permutation space. In this setting, each sequence R_t is mapped to an integer σ(R_t) ∈ [0, K!) and represented as s_t ∈ R^{1×K!}, which is a one-hot vector with length K!. Here K is the length of the final recommendation list. Then we design a simple yet effective discrete operation that forms a Markov chain in the permutation space. For each step in the forward process, we either keep the sequence the same as that in the last step or randomly swap a pair of items in the sequence. This corresponds to a transition matrix Q_t ∈ R^{K!×K!} at step t as follows:
[Q_t]_{ij} = 1 − β_t, if j = i; [Q_t]_{ij} = β_t / C(K,2), if d(σ^{-1}(i), σ^{-1}(j)) = 2, i.e., the two permutations differ by a single swap of a pair of items; and [Q_t]_{ij} = 0, otherwise,
where σ^{-1}(·) derives the counterpart permutation, C(K,2) = K(K−1)/2 is the number of possible swap candidates, β_t ∈ [0, 1] controls the noise strength at each step (in this work, we set β_t = β for all t as a single hyper-parameter and leave advanced noise schedule mechanisms as future work), and d(σ^{-1}(i), σ^{-1}(j)) denotes the difference between two sequences. We use the example in Fig. 3(a) to illustrate the permutation-level operation. Notice that d(R_t, R_{t-1}) = d(R_{t-1}, R_t), so the transition matrix is a symmetric matrix. Moreover, we show that this transition matrix induces a Markov chain with a uniform stationary distribution, which means the corrupted sequence would eventually become a fully random sequence. The detailed proof can be found in the Appendix.
With the sequence encoding s_t and the transition matrix Q_t, the discrete forward process at each step can be formulated as:
q(s_t | s_{t-1}) = Cat(s_t; p = s_{t-1} Q_t),
where Cat(·; p) means a categorical distribution with probability p. After t steps of noise adding, R_t given R_0 can be represented as:
q(s_t | s_0) = Cat(s_t; p = s_0 Q̄_t), where Q̄_t = Q_1 Q_2 ··· Q_t.
As a result, we can compute the posterior q(R_{t-1} | R_t, R_0) as follows:
q(s_{t-1} | s_t, s_0) = Cat(s_{t-1}; p = (s_t Q_t^⊤ ⊙ s_0 Q̄_{t-1}) / (s_0 Q̄_t s_t^⊤)). (5)
This enables us to calculate the KL-divergence in L_t during training. Note that the computation of the posterior at time t requires a computation of Q̄_t; this can be time-consuming if we are to compute a matrix multiplication of two matrices of size K!×K!. However, the matrix is fixed once the sequence length K and β are determined. Since both K and β are determined before training, Q̄_t can be computed and stored in advance. Meanwhile, as s_0 is a one-hot vector, the calculation of s_0 Q̄_{t-1} amounts to selecting a row of the matrix. Thus the computation of the posterior is essentially very efficient during training and inference.
Token-level Operation.
Besides the permutation-level operation, we also propose a token-level operation beyond the permutation space. Here, we use a different way to encode an item sequence R_t: each position is represented by a token-level one-hot vector z_t over the candidate item set, and at each step a token is either kept or replaced by another candidate item according to a transition matrix O_t, where β_t controls the noise strength at each step. This forward process also induces a uniform stationary distribution, and the detailed proof can be found in the Appendix. Then, we can formulate the discrete forward process as:
q(z_t | z_{t-1}) = Cat(z_t; p = z_{t-1} O_t).
Similarly, we can define Ō_t = O_1 O_2 ··· O_t, and the probability of R_t given R_0 is represented as:
q(z_t | z_0) = Cat(z_t; p = z_0 Ō_t).
This leads to the posterior as follows:
q(z_{t-1} | z_t, z_0) = Cat(z_{t-1}; p = (z_t O_t^⊤ ⊙ z_0 Ō_{t-1}) / (z_0 Ō_t z_t^⊤)). (7)
Note that the computation of the posterior also requires a calculation of Ō_t. Similarly, this can be calculated and stored in advance.
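The following is a minimal NumPy sketch of the permutation-level construction above for a toy list length. The β value, the indexing of permutations, and the posterior computation follow the standard discrete-diffusion form used in the reconstruction above; they are an illustration, not the authors' implementation.

```python
import itertools
import numpy as np

def swap_transition_matrix(K, beta):
    """Permutation-level transition matrix Q_t over the K! permutations:
    keep the sequence with probability 1 - beta, otherwise apply one of the
    C(K, 2) pair swaps uniformly at random."""
    perms = list(itertools.permutations(range(K)))
    index = {p: i for i, p in enumerate(perms)}
    n_swaps = K * (K - 1) // 2
    Q = np.zeros((len(perms), len(perms)))
    for i, p in enumerate(perms):
        Q[i, i] = 1.0 - beta
        for a in range(K):
            for b in range(a + 1, K):
                q = list(p)
                q[a], q[b] = q[b], q[a]                  # swap one pair of items
                Q[i, index[tuple(q)]] += beta / n_swaps
    return Q

def posterior(s_t, s_0, Q_t, Qbar_tm1):
    """q(R_{t-1} | R_t, R_0) ∝ q(R_t | R_{t-1}) * q(R_{t-1} | R_0), for one-hot rows."""
    unnorm = (s_t @ Q_t.T) * (s_0 @ Qbar_tm1)
    return unnorm / unnorm.sum()

# Toy check with K = 3 and a constant noise scale beta = 0.3.
K, beta = 3, 0.3
Q = swap_transition_matrix(K, beta)
Qbar_1 = Q                              # cumulative matrix after one step
s_0 = np.eye(Q.shape[0])[0]             # clean sequence R_0 (identity permutation)
s_2 = np.eye(Q.shape[0])[3]             # corrupted sequence observed at step t = 2
print(posterior(s_2, s_0, Q, Qbar_1))   # distribution over permutations at t = 1
```

Because Q is symmetric and doubly stochastic, it can be precomputed once and reused, which matches the efficiency argument made in the text.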
Conditional Reverse Process
The reverse process in diffusion models attempts to recover the original sample from the noisy sample. For most diffusion models, this is parameterized as p_θ(x_{t-1} | x_t) (i.e., the denoising model). However, the recommendation task is different from conventional generation tasks, as the purpose of recommender systems is to fulfill user interests. Besides, users only respond to the displayed item sequence; the utilities of other sequences are unknown. It may be problematic if we only train the denoising model to recover towards the impression list. As a result, we expect the reverse process to be conditioned on the user feedback (e.g., like, effective view). In this way, the denoising model attempts to model p_θ(R_{t-1} | R_t, c), where c is a response sequence representing the expected feedback of sequence R_0. During training, we can use the real feedback as the condition; while at the inference stage, we can set c according to application scenarios (e.g., all positive feedback).
4.3.1 Denoising Model Architecture. Note that the DCDR framework does not restrict the concrete architecture of the denoising model p_θ(R_{t-1} | R_t, c). Here we give an instantiation based on the attention mechanism [24], shown in Fig. 4(a). Each item is first mapped into an embedding vector. Then the sequence R_t goes through a contextual encoding layer, which consists of a self-attention layer among items in the sequence and a history attention layer that uses items in the sequence as queries to aggregate the user history. The outputs of these two layers are concatenated position-wise as the contextual-encoded sequence representations. To introduce user feedback as the condition, we map the feedback at each position into condition embeddings. Then we use these condition embeddings as queries to aggregate the contextual-encoded sequence representations of R_t; the outputs serve as the expected sequence representations of R_{t-1}. The probability of drawing R_{t-1} is then computed position-wise as the cosine similarities between the expected representations of R_{t-1} and the contextual-encoded sequence representations of R_{t-1}. As for the possible next sequences R_{t-1}, they depend on the discrete operation chosen in the forward process.
4.3.2 Denoising Model Training. The training objective function of the denoising model p_θ(R_{t-1} | R_t, c) in the reverse process is the typical training loss of diffusion models: a noisy sequence R_t is sampled with the forward process (i.e., sampled based on q(R_t | R_0)), the posterior q(R_{t-1} | R_t, R_0) is computed according to Eq. (5) or Eq. (7), p_θ(R_{t-1} | R_t, c) is computed, and L_t is computed to update θ with ∇_θ L_t until convergence; the denoising model is thus optimized with L_t accordingly. The training algorithm is presented in Alg. 1.
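Putting the description in Section 4.3.1 together, a minimal PyTorch-style sketch of such a conditional denoising model might look as follows. Layer names, dimensions, and the way candidate sequences are scored are illustrative assumptions based on the text above, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingModel(nn.Module):
    """Sketch of an attention-based conditional denoising model (illustrative)."""
    def __init__(self, n_items, n_feedback, d=64, n_heads=4):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d)
        self.cond_emb = nn.Embedding(n_feedback, d)            # expected feedback per position
        self.self_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.hist_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.cond_attn = nn.MultiheadAttention(2 * d, n_heads, batch_first=True)

    def encode(self, seq, history):
        x = self.item_emb(seq)                                  # (B, L, d)
        h = self.item_emb(history)                              # (B, H, d)
        s, _ = self.self_attn(x, x, x)                          # self attention within the list
        u, _ = self.hist_attn(x, h, h)                          # aggregate the user history
        return torch.cat([s, u], dim=-1)                        # contextual encoding (B, L, 2d)

    def forward(self, seq_t, history, condition, candidates):
        ctx_t = self.encode(seq_t, history)                     # encode the noisy sequence R_t
        cond = self.cond_emb(condition).repeat(1, 1, 2)         # condition embeddings as queries
        expected, _ = self.cond_attn(cond, ctx_t, ctx_t)        # expected representation of R_{t-1}
        ctx_cand = torch.stack([self.encode(c, history) for c in candidates], dim=1)
        # position-wise cosine similarity, averaged into one score per candidate sequence
        sims = F.cosine_similarity(expected.unsqueeze(1), ctx_cand, dim=-1).mean(-1)
        return F.softmax(sims, dim=-1)                          # p(R_{t-1} | R_t, c) over candidates
```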
• Condition setting: To fullfill user interests as much as possible, we set the expected condition c as a sequence with all positive feedback during inference.• Starting sequence: In traditional diffusion models, the inference starts with a pure Gaussian noise.However, the pure Gaussian noise may eliminate the important information like user preferences thus hurting recommendation quality.More importantly, starting from pure noise demands far more steps for high-quality generation.Therefore, we use the ordered sequence from the previous stage as the starting sequence for generation.• Beam search: To improve the robustness of the sequence generation process, we adopt beam search to generate a specific number of sequences and further adopt a sequence evaluator model (detailed in Section 4.5) to select the final recommended sequence.Starting from the first noisy sequence, we use the diffusion model to generate sequences for each candidate and keep sequences with top-probabilities at each step 2 .• Early stop: Despite the inference optimizations like starting sequence and beam search, the multi-step reverse process may still be costly in time.Therefore we further introduce an earlystop mechanism into the inference procedure.For each step, we compute the likelihood that generated sequences match the expected conditions.With the diffusion steps increasing, the likelihood is expected to increase accordingly.And the diffusion steps would terminate when the likelihood stops increasing or the increase becomes quite marginal. The detailed inference algorithm of DCDR is described in Alg.2. 2 Notice that when adopted token-level operation, the reverse process is possible to generate sequences containing duplicated items.We manually filter out such sequences and only retain sequences without duplicated items at each step. Sequence Evaluator The sequence evaluator model (R) mainly focuses on estimating the overall utility of a given sequence.Note that many existing methods for the sequence evaluator [6,29] can be incorporated in our DCDR framework.To center the contribution of this paper to the overall framework, we adopt an intuitive design of the sequence evaluator.The architecture of the evaluator model is depicited in Fig. 4(c).Specifically, the input sequence is encoded by the contextual encoding layer as that in the denoising model.Then, the representation at each position is fed into a MLP to predict the score of a given feedback label.The overall utility of the sequence is measured by the rank-weighted sum of scores at each position. Notice that the feedback label may vary across different recommendation tasks, such as clicks and purchases in e-commerce services, likes and forwards in online social media.Without loss of generality, we utilize the same feedback label as that in the conditional reverse process, and the objective function is a binary cross-entropy loss. 
Discussion The proposed DCDR provides a general and flexible framework to utilize diffusion models in recommendation.Various discrete operations applied in the forward process yield different diffusion models.For instance, the proposed permutation-level operation is well-suited for scenarios where the set of items to be displayed has been fixed (i.e., = ).The diffusion model learns how to generate the optimal permutation by iteratively swaping items conditioned on the expected feedback.Conversely, the token-level operation is suitable when the final displayed item list is a subset of the initial sequence (i.e., > ).The diffusion model learns how to generate the best sequence by step-wise substituting each item with a candidate item.Researchers can also develop other discrete operations tailored to specific application scenarios.Moreover, the architectures of the denoising model and the sequence evaluator model are also flexible to incorporate specific contextual features.Consequently, we believe that DCDR will provide valuable insights for future investigations on diffusion-based multi-stage recommender systems. EXPERIMENTS We conduct extensive experiments in both offline and online environments to demonstrate the effectiveness of DCDR.Three research questions are investigated in the experiments: • RQ1: How does DCDR perform in comparison with state-of-theart methods for reranking in terms of recommendation accuracy and generation quality?• RQ2: How do different settings of DCDR affect the performance?• RQ3: How does DCDR perform in real-life recommender systems? 5.1 Experiment Setup 5.1.1Dataset.For the consistency of dataset and the problem setup, we expect that each sample of the dataset is a real item sequence displayed in a complete session rather than a list constructed manually.Therefore we collect two datasets for offline experiments: • Avito 3 : this is a public dataset consisting of user search logs.Each sample corresponds to a search page with multiple ads and thus is a real impressed list with feedback.The data contains over 53M lists with 1.3M users and 36M ads.The impressions from first 21 days are used as training and the impressions in last 7 days are used as testing.And each list has a length of 5. • VideoRerank: this dataset is collected from Kuaishou, a popular short-video App with over 300 million daily active users.For the consistency of dataset and the problem setup, each sample in the dataset is also a real item sequence displayed to a certain user in a complete session.The dataset contains 100, 102 users, 1, 243, 877 items and 871, 834 lists where each list contains 6 items. 5.1.2 Baselines.We compare the proposed DCDR with serveral state-of-the-art reranking methods and a modified discrete diffusion method for text generation. 
• PRM [20]: PRM is one of the state-of-the-art models for reranking tasks, which uses self-attention to capture the relations between items. It has been used for reranking tasks in Taobao recommender systems.
• DLCM [1]: DLCM adopts gated recurrent units to model the cross-item relations sequentially. Meanwhile, DLCM is optimized with an attention-based loss function, which also contributes to its predictive accuracy.
• SetRank [19]: SetRank tries to learn a permutation-invariant model for reranking by removing positional encodings and generates the sequence with greedy selection of the items.
• EGRerank [12]: EGRerank consists of a sequence generator and a sequence discriminator. The generator is optimized with reinforcement learning to maximize the expected user feedback under the guidance of the evaluator. It is worth noticing that EGRerank has been used for reranking tasks in AliExpress.
• DiffusionLM-R: We treat items as words and modify DiffusionLM [13] to take the same conditions as DCDR as inputs for controllable generation.
5.1.3 Implementation Details. For different datasets, we use different user feedback as the condition in the reverse process of DCDR. For Avito, we use the click behavior as feedback. For VideoRerank, we set the feedback condition as a binary value indicating whether a video has been completely watched. The settings for hyper-parameters of DCDR can be found in Section 5.2.2. As for the other baselines, we carefully tune the corresponding hyper-parameters to achieve the best performance.
5.1.4 Metrics. We use two widely-adopted metrics, namely AUC and NDCG, to evaluate the performance of different methods in the offline experiments.
5.2 Offline Experiment Results
For all offline experiments, we first pretrain a ranking model with LambdaMART and use the ranked list as the initial sequence for the reranking task. For our DCDR, the variant using the permutation-level operation is denoted as DCDR-P, while the variant using the token-level operation is denoted as DCDR-T.
5.2.1 Performance Comparison (RQ1). The performances of different approaches are listed in Table 1. Notice that DCDR achieves the best performance over the other approaches, which verifies the effectiveness of DCDR in item sequence generation quality. Moreover, the comparison between DCDR and DiffusionLM-R indicates that the generation quality of the discrete diffusion models in DCDR outperforms DiffusionLM-R significantly. This verifies the benefits of discrete diffusion models for discrete data domains.
5.2.2 Analysis of DCDR (RQ2). In this section, we provide a detailed analysis of DCDR from multiple aspects.
Beam size: We alter the beam size in the reverse process for inference, and the results are presented in Fig. 5. Note that the result is best when the beam size is set to 6; this makes sense since a proper beam size improves recommendation robustness without involving too many sequences to evaluate.
Reverse steps: We alter the number of reverse steps from 1 to 5 and list the performances in Fig. 5. As shown in the figure, the performance improves when the number of diffusion steps increases. However, the improvement becomes marginal when the number of steps reaches a certain value. This coincides with the intuition of adopting early stop for efficiency.
Noise scale: We alter the swapping probability from 0.1 to 0.5 during training and present the results in Fig. 5. The result indicates that too much noise may add difficulty to the learning process and a proper amount of noise leads to satisfactory performance.
Online Experiment Results (RQ3) We conduct online A/B experiments on KuaiShou APP. Experiment Setup. In online A/B experiments, the traffic of the whole app is split into ten buckets uniformly.20% of traffic is assigned to the online baseline PRM while another 10% is assigned to DCDR-P.As revealed in [32], Kuaishou serves over 320 million users daily, and the results collected from 10% of traffic for a whole week is very convincing.The initial video sequence comes from the early stage of the recommender system, which greedily ranks the items with point-wise ranking scores.And we directly use the initial sequence as the noisy sequence R for generation.To enable the controllable video sequence generation, we set two conditions for diffusion models: first, we expect the users to finish watching each video in the sequence; second, we expect the positive feedback from users (for example like the video) Experiment Results. The experiments have been launched on the system for a week, and the results are listed in Table 2. The metrics for online experiments include views, likes, follows (subscriptions of authors), collects (adding videos to collections) and downloads.Notice that DCDR-P outperforms the baseline in all the related metrics by a large margin.This verifies the quality of recommended video sequence of DCDR-P.As diffusion models generates the samples in a step-wise manner, the cost of computation and latency become a challenge for the deployment in real-life recommender systems.The computation costs and service latency are listed in Table 3.The step-wise generation causes inevitable latency costs of the recommendation service.However, the cost is acceptable in the system since it does not jeopardize the user experience. CONCLUSION In this paper, we make the first attempt to enhance the reranking stage in recommendation by leveraging diffusion models, which faces many challenges such as discrete data space, user feedback incorporation, and efficiency requirements.To address these challenges, we propose a novel framework called Discrete Conditional Diffusion Reranking (DCDR), involving a discrete forward process with tractable posteriors and a conditional reverse process that incorporates user feedback.Moreover, we propose several optimizations for efficient and robust inference of DCDR, enabling its deployment in a real-life recommender system with over 300 million daily active users.Extensive offline evaluations and online experiments demonstrate the effectiveness of DCDR.The proposed DCDR also serves as a general framework.Various discrete operations and contextual features are flexible to be incorporated to suit different application scenarios.We believe DCDR will provide valuable insights for future investigations on diffusion-based multi-stage recommender systems.In the future, we plan to study the impact of noise schedule and explore methods to enhance the efficiency and controllability of the generation process. APPENDIX Lemma 7.1.Let P be the transition matrix of a Markov chain.If P is a doubly-stochastic matrix, then the Markov chain defined by P has a uniform stationary distribution. 
Proof. Given a transition matrix P, there exists a collection of eigenvalues and eigenvectors. As P is doubly stochastic (every row sums to 1 and every column sums to 1), it is easy to verify that 1 is an eigenvalue of P, with a corresponding eigenvector e/n, where e is the all-ones vector and n is the number of states (1 · e/n = P · e/n = e/n). Since the columns of P also sum to 1, we can take π = e^⊤/n and obtain π = πP, indicating that the uniform distribution is a stationary distribution of the Markov chain. □
Lemma 7.2. A Markov chain is ergodic if and only if the process satisfies 1) connectivity: for all i, j, P^t(i, j) > 0 for some t, and 2) aperiodicity: for all i, gcd{t : P^t(i, i) > 0} = 1. Any ergodic Markov chain has a unique stationary distribution.
Proof. The details can be found in the reference [18]. □
Theorem 7.3. The Markov chain for the discrete forward process with the permutation-level operation or the token-level operation has a unique uniform stationary distribution.
Proof. First, it is easy to verify that both transition matrices are doubly stochastic: every row and every column of Q_t sums to 1, and the same holds for O_t. Therefore, the uniform distribution is a stationary distribution of both processes according to Lemma 7.1. Meanwhile, we can show that both Markov chains are ergodic. For the permutation-level operation, any permutation can be achieved through finitely many swaps. For the token-level operation, it is easy to verify that each item has a chance to appear at each position. Therefore, both forward processes satisfy the connectivity condition. Besides, notice that [Q_t]_{ii} > 0 and [O_t]_{ii} > 0; thus the sets {t : Q^t(i, i) > 0} and {t : O^t(i, i) > 0} contain 1. This indicates that gcd{t : Q^t(i, i) > 0} = 1 and gcd{t : O^t(i, i) > 0} = 1, which satisfies the aperiodicity condition. Therefore, both forward processes are ergodic, and hence only one stationary distribution exists according to Lemma 7.2. Combining the above conclusions, both processes have a unique uniform stationary distribution, which means that the sequences are almost randomly arranged after a sufficient number of steps. □
Figure 2: An illustration of the DCDR framework, which mainly consists of: 1) a discrete forward process, and 2) a conditional reverse process. In the forward process, two discrete operations with tractable posteriors are introduced; in the reverse process, user feedback is introduced as the condition to control generation.
Figure 3: Illustrations of the permutation-level (a) and token-level (b) operations in the discrete forward process.
Figure 4: Illustrative instantiations of the denoising model in the conditional reverse process and the sequence evaluator model. Note that the model architectures provided are not exhaustive and can vary across application scenarios.
Figure 5: The performances of DCDR-P and DCDR-T on the Avito dataset with different hyper-parameter settings, including the beam size in the reverse process for inference, the number of diffusion steps, and the noise scale in the forward process.
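As a quick numerical illustration of Lemma 7.1 and Theorem 7.3, the toy doubly stochastic matrix below (standing in for Q_t or O_t) converges to the uniform distribution from any starting state; the matrix values are arbitrary and chosen only for the demonstration.

```python
import numpy as np

# A toy doubly stochastic transition matrix (rows and columns both sum to 1),
# standing in for the swap matrix Q_t or the substitution matrix O_t.
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
assert np.allclose(P.sum(axis=0), 1.0) and np.allclose(P.sum(axis=1), 1.0)

dist = np.array([1.0, 0.0, 0.0])   # start from a deterministic state
for _ in range(200):
    dist = dist @ P
print(dist)                        # ≈ [1/3, 1/3, 1/3]: the uniform stationary distribution
```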
Notation note: the length of the final recommendation list is usually less than 10 in practice, and the candidate list length can be equal to or larger than it.
Algorithm 2 Inference Algorithm for DCDR
Require: R_T: initial item sequence; c: expected user feedback; p_θ(R_{t-1} | R_t, c): conditional denoising model; T: maximal step; B: number of candidate sequences; S: sequence candidates set; f(R): sequence evaluator model
1: S = {R_T};
2: for t = T, T−1, ..., 1 do
3:   Compute p_θ(R_{t-1} | R_t, c) given each R_t in S and c;
4:   Sample the item sequences with the highest probabilities according to p_θ(R_{t-1} | R_t, c);
5:   Merge the sampled item sequences into S and keep the B item sequences with the overall highest probabilities;
6: end for
7: Select the final sequence with f(R) for recommendation;
Table 2: Online experiment results. All values are the relative improvements of DCDR-P over the baseline PRM. For online A/B tests in Kuaishou, an improvement of over 0.5% in positive interactions (Like, Follow, Collect, Download) and 0.3% in Views is very significant.
Table 3: The additional cost of computation to the system and the additional latency of the reranking service. The computation is measured by CPU time. Avg (Comp)/Max (Comp) measure the average/maximum computation costs during the launch time of the experiments, respectively.
7,641.2
2023-08-14T00:00:00.000
[ "Computer Science", "Mathematics" ]
Estimating the effect of corporate integrity culture on tax avoidance using a text-based approach: A research note Tax avoidance holds immense importance due to its substantial implications for government revenues and the fair allocation of resources. Consequently, understanding the factors that shape tax avoidance is critically important. Exploiting a cutting-edge measure of corporate integrity derived from state-of-the-art machine learning algorithms and textual analysis, we explore the effect of corporate integrity on tax avoidance. Our text-based measure is based on a textual analysis of earnings conference call transcripts. Our findings show that companies with greater corporate integrity are significantly less involved in tax avoidance. Further analysis corroborates the results, i.e., propensity score matching, entropy balancing, and an instrumental variable analysis. Our findings are especially noteworthy as they demonstrate that corporate culture, although intangible in nature, exerts a substantial influence on corporate behavior. I. Introduction Tax avoidance is a critically important issue due to its significant impact on government revenues and the equitable distribution of resources.Addressing this issue is essential for promoting a fair and sustainable economic system.Not surprisingly, tax avoidance has generated an immense volume of research in accounting, finance, economics, and other areas (see [1] for a literature review).We extend the body of knowledge in this area by investigating how corporate tax avoidance is influenced by one of the most important corporate cultural traits, i.e., corporate integrity. Corporate culture can be defined as the collective set of beliefs, values, and preferences shared by employees within a corporation [2,3].Corporate culture is abstract and is difficult to quantify empirically.However, recently, Li et al. [4] employ advanced machine learning algorithms and sophisticated textual analysis of earnings conference calls to identify corporate cultural traits.Using Li et al.'s [4] text-based approach, we explore how a culture of corporate integrity influences corporate tax avoidance. Firms with a strong culture of integrity should be less involved in corporate tax avoidance.Such firms prioritize transparency, compliance with laws, and ethical decision-making, which extends to their tax practices.Corporate tax avoidance often involves exploiting legal loopholes and engaging in aggressive tax planning to minimize tax liabilities, sometimes at the expense of society's broader interests.Companies that prioritize integrity are more likely to adhere to the spirit of the tax laws and contribute their fair share of taxes to the society in which they operate [4].Furthermore, firms with a strong culture of integrity tend to value long-term sustainability and the preservation of their reputation.Engaging in aggressive tax avoidance can be detrimental to a company's image and could lead to reputational damage if such practices are exposed or criticized by the public, shareholders, or other stakeholders.Therefore, we hypothesize that firms with a stronger culture of integrity exhibit less tax avoidance. Using a large sample of U.S. 
firms and a distinctive measure of corporate integrity derived from sophisticated machine learning algorithms and textual analysis, we show that greater corporate integrity results in a significant reduction in tax avoidance, corroborating our hypothesis.To mitigate endogeneity, we perform a variety of robustness checks, i.e., propensity score matching, entropy balancing, and an instrumental variable analysis.All robustness checks validate the results. Our results extend the existing literature in several important ways.First, we extend the literature on corporate culture.Previous research has predominantly examined corporate culture from a theoretical perspective [2][3][4][5][6].However, empirical investigations on corporate culture have been limited.Our study addresses this gap and stands as the first empirical research to document that a culture of integrity influences corporate tax avoidance significantly. In addition to our contribution to the literature on corporate culture, we also enrich the existing research on tax avoidance.Prior studies have primarily focused on accounting or financial factors as determinants of tax avoidance ( [1], provide a comprehensive review).Our study aptly augments this body of knowledge by demonstrating that corporate culture, a nonfinancial and abstract attribute, significantly influences corporate tax avoidance.Finally, we contribute to an emerging area of empirical research that employs textual analysis (see [7], for a literature review).We show that textual analysis can be used to extract abstract quality and create metrics that are empirically relevant and useful. II. Prior research on tax avoidance Our research belongs to a crucial stream of literature that investigates the determinants of tax avoidance.This area holds significant importance and has given rise to an extensive body of research.Early studies primarily concentrated on intrinsic corporate attributes such as company size and operational strategies [1,[8][9][10].By contrast, contemporary tax avoidance research has integrated corporate governance attributes aimed at mitigating agency conflicts [1,11].For instance, according to McGuire et al. [12], firms employing dual class share structures tend to exhibit a reduced inclination towards tax avoidance.This could be because insiders hold control over voting rights, potentially alleviating the pressure on management from external shareholders to partake in tax avoidance practices.Richardson et al. [13] find a decline in tax avoidance when the internal audit committee consists of more outside independent directors.External governance mechanisms, such as media exposure, are also relevant to tax avoidance.As discussed in Tian et al. [14]and Kanagaretnam et al. [15], lower levels of tax avoidance are documented when there is strong media exposure of aggressive tax behaviors. Product market considerations also play a role in tax avoidance.For instance, Kubick et al. [16] highlight that companies leading in the product market tend to employ greater tax avoidance strategies, leveraging their comparative advantage.Conversely, entities possessing valuable brands tend to exhibit lower levels of tax avoidance due to their concerns about maintaining a positive reputation [17].Several other factors also influence the extent of tax avoidance.Wang et al. [1] offer a contemporary and complete literature review of tax avoidance. 
While it is intuitive to assume that corporate integrity is relevant to tax avoidance, there is surprisingly scant research on this issue. One of the reasons is that it is challenging to capture corporate integrity. While corporate integrity is a straightforward and simple concept, it is difficult to operationalize empirically. We address this gap in the literature by exploiting an innovative measure of a corporate culture of integrity based on advanced algorithms and textual analysis. III. Sample selection and data description Our original sample comes from Li et al. [4]. Then, we combine the data with COMPUSTAT to obtain firm-specific characteristics and financial statement data necessary to estimate tax avoidance measures. The final sample comprises 41,138 firm-year observations from 2001 to 2018. The measure of corporate integrity used in our study is derived from Li et al. [4], who employ a sophisticated approach to extract phrases pertaining to corporate integrity from earnings conference call transcripts. The frequency of appearance of these phrases serves as an indicator of the level of integrity possessed by the firm. Li et al. [4] execute a variety of validation tests and conclude that their approach is reliable and useful. More information about the construction of the text-based corporate culture score, of which corporate integrity is an important component, is provided in the S1 Appendix. For tax avoidance, we utilize several alternative measures for robustness, i.e., (1) cash effective tax rate (cash ETR), (2) GAAP effective tax rate (GAAP ETR), (3) book-tax differences (BTD), and (4) permanent book-tax differences (Perm-diff). These measures of tax avoidance are widely used in the literature (De Simone et al., 2020). We multiply the cash ETR and the GAAP ETR by minus one in the regression analysis for ease of interpretation. Therefore, a higher value of each of our measures indicates greater tax avoidance. Additionally, we combine all four measures discussed above into a single index using principal component analysis (PCA). PCA allows researchers to transform their original variables into a new set of uncorrelated variables, known as principal components. These components are ordered by their ability to explain the variance in the data, with the first component capturing the most variance and subsequent components capturing decreasing amounts. By reducing the dimensionality of our data while retaining the most important information, PCA helps identify the underlying structure and relationships among variables. By focusing on what the four different measures share, we can reduce measurement error considerably. We refer to the combined measure, which is the first component resulting from PCA, as the tax avoidance index.
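As a concrete illustration of combining the four tax-avoidance proxies into a single index via PCA, the sketch below uses scikit-learn. The column names are hypothetical, and standardizing the proxies before PCA is an assumption rather than the authors' documented procedure.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def tax_avoidance_index(df: pd.DataFrame) -> pd.Series:
    """First principal component of four tax-avoidance proxies (higher = more avoidance)."""
    # Hypothetical column names; the two ETRs are assumed already multiplied by -1.
    proxies = df[["neg_cash_etr", "neg_gaap_etr", "btd", "perm_diff"]]
    z = StandardScaler().fit_transform(proxies)   # put the proxies on a common scale
    pca = PCA(n_components=1)
    index = pca.fit_transform(z)[:, 0]            # first component captures the shared variance
    return pd.Series(index, index=df.index, name="tax_avoidance_index")
```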
We incorporate several firm-specific attributes that may affect tax avoidance. Specifically, we include controls for firm size (natural logarithm of total assets), profitability (EBIT divided by total assets), leverage (total debt divided by total assets), capital investments (capital expenditures/total assets), cash holdings (cash holdings divided by total assets), intangible assets (research and development (R&D) expenses divided by total assets, and advertising expenses divided by total assets), asset tangibility (fixed assets divided by total assets), dividend payouts (total dividends divided by total assets), and discretionary spending (selling, general, and administrative (SG&A) expenses divided by total assets). Additionally, we include industry and year fixed effects to account for variations across industries and over time. It is difficult to include firm fixed effects in the context of our study because corporate culture changes only slowly over time. Table 1 shows the summary statistics for the variables. Essentially, we run the following OLS regression with industry and year fixed effects: $\mathrm{TaxAvoidance}_{i,t} = \beta_0 + \beta_1\,\mathrm{Integrity}_{i,t} + \sum_{k}\gamma_k\,\mathrm{Control}_{k,i,t} + \mathrm{Industry\ FE} + \mathrm{Year\ FE} + \varepsilon_{i,t}$, where i indexes firms and t indexes years. IV. Results The results are presented in Table 2. Standard errors are clustered by firm. Model 1 has the cash ETR as the dependent variable. The coefficient associated with corporate integrity is significantly positive at the 10% level, suggesting that corporate integrity results in less tax avoidance. The dependent variable in Model 2 is the GAAP ETR, where the coefficient of corporate integrity is insignificant. Model 3 and Model 4 have book-tax differences and permanent book-tax differences as dependent variables, respectively. The coefficients of corporate integrity in Model 3 and Model 4 are significantly negative, implying that greater corporate integrity diminishes tax avoidance. Finally, we use the combined measure of tax avoidance in Model 5 and obtain a significantly negative coefficient for corporate integrity. Overall, our results demonstrate a significant decline in tax avoidance in the presence of higher corporate integrity, consistent with our hypothesis. To minimize endogeneity, we run several robustness checks and show the results in Table 3.
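A minimal sketch of this baseline specification with statsmodels, assuming hypothetical column names; industry and year enter as categorical fixed effects and standard errors are clustered by firm, as described above. This is an illustrative setup, not the authors' code.

```python
import statsmodels.formula.api as smf

def run_baseline(df):
    """OLS of a tax-avoidance proxy on corporate integrity, controls, and fixed effects."""
    formula = (
        "tax_avoid ~ integrity + size + profitability + leverage + capex "
        "+ cash + rnd + advertising + tangibility + dividends + sgna "
        "+ C(industry) + C(year)"          # industry and year fixed effects
    )
    return smf.ols(formula, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["firm_id"]}  # cluster by firm
    )
```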
First, we perform propensity score matching (PSM). We classify firms in the top quartile of corporate integrity as our treatment group. For each observation in the treatment group, we select an observation in the rest of the sample that is most similar using ten firm-specific attributes (the ten control variables in the regression analysis). Hence, our treatment and control groups are indistinguishable in every observable dimension, except for corporate integrity. Model 1 in Table 3 contains the PSM result, showing a significantly negative coefficient for corporate integrity. In Model 2, we execute entropy balancing, where we adjust the weight of each observation such that the means and the variances of the treatment and control groups are comparable. Again, the coefficient of corporate integrity is significantly negative. Furthermore, we implement an instrumental-variable analysis (IV). Our first instrument is the value of corporate integrity in the earliest year for each firm. Since corporate integrity in the earliest year could not have resulted from corporate integrity in any of the subsequent years, reverse causality is mitigated. We conduct the instrumental-variable analysis in two stages. Initially, we regress the corporate integrity score on our instrumental variable and all control variables, subsequently saving the predicted value of the corporate integrity score. In the second stage, we regress tax avoidance on this predicted value and the control variables. This approach aligns with the standard procedure for instrumental variable analyses. Model 3 is the second-stage regression result, where the coefficient of corporate integrity instrumented from the first stage is significantly negative. The Shea partial R² for our first-stage regression stands at 30.89%, and with an F-statistic of 14,907.08, it is clear that our instrument is strong and statistically significant. Finally, we employ another instrumental variable. Due to investors' clientele, local competition, and social interactions, companies located nearby tend to share similar characteristics, including corporate culture [18]. Our second instrumental variable is the average value of corporate integrity of all firms located in the same city. The value at the city level should influence corporate integrity at the firm level but is not directly correlated with firm-specific tax avoidance because there are many firms in the same city. Again, we carry out a two-stage instrumental-variable analysis. First, we regress the corporate integrity score against our instrumental variable and control variables, then save the predicted score. In the second stage, we regress tax avoidance against this predicted score and control variables. Model 4 shows the second-stage regression, where the coefficient of corporate integrity is significantly negative. The first-stage regression has a Shea partial R² of 30.89% and a highly statistically significant F-statistic of 14,907.08, confirming the robustness of our instrument. Overall, there is robust evidence that companies with stronger corporate integrity are significantly less aggressive in avoiding taxes, corroborating our hypothesis. V. Conclusions We hypothesize that companies with a stronger culture of integrity should be less involved in tax avoidance. Based on a unique measure of corporate integrity generated by cutting-edge machine learning algorithms and textual analysis, our findings show that greater corporate integrity brings about a significant reduction in corporate tax avoidance, corroborating our hypothesis. Additional robustness checks validate the results, i.e., propensity score matching, entropy balancing, and an instrumental variable analysis. Our findings are crucially important as they demonstrate the tangible influence of corporate culture on corporate behavior, despite its abstract nature.
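The two-stage instrumental-variable procedure described above can be sketched as follows. Column names are hypothetical, and a full 2SLS estimator would also correct the second-stage standard errors; this is only an illustration of the manual two-stage logic.

```python
import statsmodels.formula.api as smf

CONTROLS = ("size + profitability + leverage + capex + cash + rnd "
            "+ advertising + tangibility + dividends + sgna + C(industry) + C(year)")

def two_stage_iv(df, instrument="integrity_first_year"):
    """Instrument corporate integrity, then regress tax avoidance on the fitted value."""
    # Stage 1: regress integrity on the instrument and all controls, save the predicted value.
    stage1 = smf.ols(f"integrity ~ {instrument} + {CONTROLS}", data=df).fit()
    df = df.assign(integrity_hat=stage1.fittedvalues)
    # Stage 2: regress tax avoidance on the predicted integrity and the controls.
    stage2 = smf.ols(f"tax_avoid ~ integrity_hat + {CONTROLS}", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
    return stage1, stage2
```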
Our research findings carry significant practical implications for diverse stakeholders. First, shareholders and managers gain insights into the significance of corporate culture, despite its abstract nature, as it correlates with corporate behavior. Consequently, promoting corporate integrity becomes imperative. Secondly, regulators and policymakers can utilize our findings to consider the impact of corporate culture while formulating tax-related regulations. This understanding can lead to more effective and targeted policy measures. Thirdly, investors benefit from our research by recognizing the relevance of corporate culture in influencing corporate behavior. This awareness enables them to make more informed and accurate assessments of companies. Lastly, tax authorities can leverage our findings to make well-informed decisions on reducing tax avoidance. In summary, our study provides actionable insights for stakeholders to foster corporate integrity, create effective tax policies, make informed investment decisions, and address tax avoidance challenges. Finally, a couple of limitations of our research can be noted. First, corporate integrity has a broad meaning. We use a text-based measure of corporate integrity culture as a proxy for corporate integrity. While our measure is useful in capturing the extent of corporate integrity, some aspects of corporate integrity may be left out. One way to address this limitation is for future research to utilize other proxies that may reflect other aspects of corporate integrity. Second, while we employ several proxies for tax avoidance for robustness, there are a few other proxies that are not included. Future researchers may extend our study by including additional measures of tax avoidance. For instance, recently, a new measure of tax avoidance called "uncertain tax positions" has been examined in the literature. This new measure is enabled by the Accounting Standard Codification (ASC) 740. It would be interesting for future research to explore the impact of corporate integrity using this new proxy for tax avoidance. Table 3. Robustness checks. https://doi.org/10.1371/journal.pone.0298528.t003
3,458
2024-05-14T00:00:00.000
[ "Business", "Economics", "Law" ]
Modeling, analysis, and code/data validation of DIII-D tokamak divertor experiments on ELM and non-ELM plasma tungsten sputtering erosion We analyzed recent DIII-D tokamak tungsten divertor probe experiments using advanced, coupled, sputter erosion/redeposition, plasma, and surface response code packages. Modeling is done for ELMing H-mode, and L-mode plasmas, impinging on various size tungsten deposits on Divertor Material Evaluation System (DiMES) carbon probes. The simulations compute 3D, full kinetic, sub-gyromotion, impurity sputtering and transport, including changes in tungsten surface composition and response due to mixed deuterium and carbon ions irradiation. Per our analysis, ELM (edge localized mode) plasma sputtering in DIII-D mostly involves free-streaming high energy (∼500–1000 eV) D+ and C+6 ions, with high near-surface plasma density. L-Mode sputtering is due to impurity sputtering (C, W) only, with lower density. All cases show complete redeposition of tungsten on the divertor, with significant redeposition on the tungsten spots themselves, and low self-sputtering. Comparison of ELM plasma gross tungsten erosion simulation results with in-situ spectroscopic data is good, as are code/data comparisons of net erosion using post-exposure Rutherford backscattering (RBS) data for the L-mode probes. The analysis, extrapolated to a full tungsten divertor, implies low net erosion and negligible plasma contamination from sputtering. These results support the use of high-Z plasma facing surfaces in ITER and beyond. Introduction Among the critical issues for future tokamak fusion reactors are sputtering erosion of the divertor plasma facing surface and Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. possible resulting core plasma contamination by high-Z materials. Both issues depend heavily on near-surface/in-plasma transport and redeposition of sputtered particles, for example the net erosion rate depends on the difference between sputtering and redeposition and can be one to two orders of magnitude less than the gross rate [1,2]. It is important to study these processes in present devices and maintain a robust coupling between modeling results and experimental data. There is a long history of experiments and associated modeling using the 5 cm diameter removable DiMES divertor probe in the DIII-D tokamak at General Atomics-to study plasma/material interactions, particularly sputtering erosion, for numerous candidate surface materials, and to predict/validate performance for ITER and future fusion reactors, e.g. [3,4]. The past focus has been for non-ELMing plasmas but understanding of high energy transfer edge localized mode (ELM) sputtering response is important for future tokamak operation. In particular, ELMs could possibly be suppressed in ITER but this is uncertain. Continued understanding of non-ELM and inter-ELM performance is also of obvious major importance. Another issue for ITER is performance of the low-Z-wall/high-Z-divertor mixed-material Be/W system-DIII-D simulations can provide insight into this by virtue of the analogous DiMES C/W system. 
We therefore analyzed a series of DIII-D experiments in which tungsten-on-carbon deposited spots on DiMES divertor probes were exposed to both H-mode ELMing plasmas [5] and L-mode (low confinement, relatively high turbulence) plasmas, with a range of near-surface electron temperatures and densities. The ELM experiments exposed a fully tungsten coated DiMES probe to multiple plasma shots, with different ELM sizes, measured gross erosion via in-situ photon emission, and used analytical models to assess results. The L-mode experiments used the 'big spot' (1.5 cm diameter) and 'small spot' (1 mm diameter) technique of Stangeby et al [6], with multiple repeat-shot exposure and post-exposure lab measurements, to assess net (big spot) and approximate gross (small spot) tungsten erosion. We performed advanced computational simulations for both experimental series and compare with available data. The code/data comparisons are generally good and add to the model's validity. The simulation outputs can thus provide insight into the physical processes of sputter erosion and transport, and aid analysis of subsequent experiments at DIII-D and elsewhere. ELM plasma erosion analysis The DIII-D ELM experiments are described in detail in [5]. Briefly, time-dependent, in-situ spectroscopic measurements were made of gross erosion of a tungsten coated DiMES probe, for well-characterized ELMing H-mode plasma shots. The tungsten coating consisted of a~200 nm magnetron sputter-deposited layer on top of a finely polished ATJ graphite substrate. W-coated DiMES probes were inserted into the divertor plasma just outboard of the outer strike-point (OSP). The spectroscopic emission intensity from the WI 400.9 nm line was monitored via the DIII-D WI filterscope diagnostic with 50 kHz time resolution, cross-referenced against the high-resolution multichordal divertor spectrometer for absolute intensity calibrations [7]. This measurement was converted into absolute gross erosion of tungsten atoms via the ionizations/photon (S/XB) method. The background plasma density, temperature, ion flux, and heat flux to the DiMES location were also measured concurrently via Divertor Thomson Scattering, Langmuir probes, and IR thermography, respectively. In these discharges, the inter-ELM WI filterscope signals were too noisy to make a robust comparison of the relative contribution of the intra-ELM and inter-ELM phases to the total tungsten erosion rate. Previous analysis of a database of W erosion measurements from the DIII-D Metal Rings Campaign demonstrated that the intra-ELM W sputtering is small (~10%) relative to the inter-ELM phase at low ELM frequency (~10 Hz) but the two phases produce comparable amounts of W sputtering at higher frequencies (~50 Hz) [8]. We note that this is different from the JET-ILW result that the JET intra-ELM W sputtering dominates in all ELM regimes [9] because of: (a) the higher physical sputtering yield of C on W, in DIII-D, relative to Be on W, at the low impact energies of the inter-ELM phase; and (b) the higher C impurity content in DIII-D (1%-2%) than Be impurity content in JET (0.5%). We performed simulations for the ELM experiments involving plasma surface interaction with the full 5 cm diameter tungsten coated probe, for the highest erosion, intra-ELM period, peak power loading time. 
Our analysis computes the tungsten sputtering and transport, at the fixed peak time, using the REDEP/WBC [1,2] 3D, full-kinetic, Monte Carlo, sub-gyro-orbit, erosion/redeposition code package, coupled with DiMES mixed-material surface response simulations from the ITMC-DYN (Ion Transport in Materials and Compounds-Dynamics) [10,11] dynamic surface mixing and sputter code. Plasma near-surface parameters (N_e, T_e, T_i, flow velocity, pre-sheath electric field, etc.) and impinging particle fluxes and energies are given by the 'free-streaming model' and related models of [5,12,13] and REDEP/WBC models (e.g. oblique magnetic field incidence sheath structure and potential) as applicable. Following the approach of [5], the plasma conditions at the target during ELMs are inferred via the free-streaming model, which assumes that a flux tube from the plasma pedestal top detaches into the scrape-off layer directly to the divertor. We define a reference ELM plasma model for our analysis with a free-streaming D+ flux of 5.0 × 10^22 m^-2 s^-1 and 2% C+6 flux, both with impinging ion energies of 1000 eV; a 2% flux of recycling-based 250 eV C+2 ions; and near-surface 25 eV plasma temperature and 1.2 × 10^20 m^-3 density, corresponding to peak ELM conditions for the 'Case 1' high temperature pedestal, strongly attached divertor conditions of [5]. The free-streaming model implies significant sputtering of tungsten by the majority plasma deuterium ions, unlike non-ELM cases where D+ energies are typically below the W sputtering threshold, as further explained in [5]. Sputter yields and atom velocities are given by ITMC-DYN, for all incident particle species, for carbon-saturated tungsten. (Such C saturation results from exposure of the W surface prior to the ELM experiments.) ITMC-DYN simulations have been benchmarked against laboratory experiments as well as NSTX experiments [14]. The code uniquely integrates all collisional and near-surface thermal processes to study the effect of impurities, surface segregation, and erosion on hydrogen isotope retention in plasma facing materials under multiple mixed-ion irradiation during steady-state and transient events. The general physics picture of carbon impingement on a tungsten surface, from ITMC-DYN simulations of various DIII-D experimental conditions, shows C implantation to ~15 nm depth, peaking along a ~1-5 nm zone, of order 50% C/W, reached in several seconds. For the present studies the ITMC-DYN simulations compute the self-consistent DiMES surface C/W ratio profile based on incident particle fluxes, and include ion/atom collisional interactions, D diffusion, retention, and desorption, considering trap concentrations during implantation and material heating. As noted in [5], the 2% carbon flux used in ITMC-DYN for the coupled code-package ELM simulations is in good agreement with measurements from the DIII-D edge charge exchange recombination spectroscopy system for discharges with a similar shape to those studied here. Per these inputs, surface equilibrium is found by ITMC-DYN to be reached before the ELM measurements in question, with ITMC-DYN computing the ELM-period W sputtering yields and velocity distributions (energy, elevation angle, azimuth angle) for each incident D, C, and W ion species. These are then used in WBC to launch W atoms from the probe surface, on a particle-by-particle basis using Monte Carlo.
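As a toy illustration of how the free-streaming inputs enter the gross erosion estimate, the snippet below combines the per-species impinging fluxes quoted above with sputter yields. The yield values are placeholders chosen only to roughly reproduce the 50%/40%/10% species split quoted later in the text (in the actual analysis the yields come from ITMC-DYN for a carbon-saturated tungsten surface), so this is a back-of-the-envelope sketch, not the coupled REDEP/WBC-ITMC-DYN calculation.

```python
# Reference ELM plasma model values quoted in the text; yields are illustrative placeholders.
D_FLUX = 5.0e22            # free-streaming D+ flux [m^-2 s^-1] at 1000 eV
SPECIES = {
    #  name              (flux [m^-2 s^-1],  assumed W sputter yield [atoms/ion])
    "D+   (1000 eV)": (D_FLUX,          1.0e-2),
    "C+6  (1000 eV)": (0.02 * D_FLUX,   4.0e-1),
    "C+2  (250 eV)":  (0.02 * D_FLUX,   1.0e-1),
}

def gross_sputtered_flux(species=SPECIES):
    """Gross W sputtering flux = sum over incident species of (flux * yield)."""
    total = 0.0
    for name, (flux, yield_w) in species.items():
        contribution = flux * yield_w
        print(f"{name:>15s}: {contribution:.2e} W atoms m^-2 s^-1")
        total += contribution
    return total

if __name__ == "__main__":
    print(f"Gross W flux (toy estimate): {gross_sputtered_flux():.2e} m^-2 s^-1")
```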
Figure 1 shows typical computed sputtered W neutral and ion trajectories, launched in this figure from the central 1 mm diameter portion of the probe to better show the distances involved. The trajectories involve initial ionization of W atoms, ionization to higher charge states, velocity-changing collisions with the incident plasma, and Lorentz force motion due to magnetic fields and sheath and pre-sheath electric fields. As shown, there is high tungsten redeposition within a range of several millimeters. We also note that the transit time for the redeposition process (<10^-6 s) is much shorter than an ELM duration time or characteristic time for change in plasma parameters. Table 1 summarizes results for the reference case as well as for an experimental case conducted with smaller ELMs, 'Case 3' of [5], resulting in lower free-streaming and recycling energies, with differences in near-surface plasma parameters, and with slightly higher (20%) free-streaming flux (scaling with pedestal density and plasma sound speed). Both simulation cases show a similar qualitative picture. The fractional contributions to tungsten sputtering for the reference ELM case are found to be 50% by free-streaming D+ ions, 40% by C+6 ions, and the remainder by recycling C+2 ions. Such fractional contributions determine the energy distribution of the sputtered tungsten atoms. The sputtered tungsten atom energy, averaged over all sputtered particles, is 24 eV for the reference case and 16 eV for the smaller ELM case. Sputtered flux differs by about a factor of two between the two ELM conditions, obviously due to the different free-streaming energies and resulting sputtering coefficients. There is very high redeposition of sputtered tungsten on the probe itself, approaching unity. This high fraction is largely a consequence of the high near-surface electron density for the DIII-D ELM plasmas. Tungsten ions redeposited on the probe tend to be those that were ionized within the ~1 mm magnetic sheath region. Another finding is that W self-sputtering by redepositing ions is low, ~3%-8%, due to the moderate redeposited ion average energy, primarily from sheath acceleration, of order 100 eV (as shown, however, with high variance). Of major significance is that redeposition on the entire divertor, for all cases, is 100%, i.e., with ~5% W deposition on the non-DiMES part of the DIII-D divertor. No tungsten (for 10^6 histories per case) leaves the (~0-5 cm) near-surface region. A key result is that the computed tungsten gross sputtered fluxes, per table 1, are a reasonable match to the relevant peak rates seen in the time-varying WI spectroscopic data [5]. This and the other simulation outputs tend to provide reasonable validation of the model/assumptions of the Abrams et al analytical-type analysis [5]. Although analysis of peak ELM power/particle loading erosion is the most critically needed and cost-effective modeling activity, we performed some analysis of other portions of the ELM discharges, for the Case 1 experiment. We find the same qualitative features of the erosion/transport process, with some differences in redeposition rates but high in any event. For example, for a near-surface electron density of half the peak value (i.e. for N_e = 0.6 × 10^20 m^-3), occurring at 2 ms after the peak, the probe redeposition fraction is 0.91, down from the peak-conditions 0.95. This would tend to increase the net erosion; however, the particle flux is much lower at this time.
Also, in terms of sensitivity to changes in the plasma near-surface temperature, we found little difference in redeposition fractions for a T_e range of 20-30 eV. Extrapolating the present and past modeling results, e.g. [2,15], to a divertor with complete tungsten coverage, the net erosion rate would be much smaller (up to two orders of magnitude) than the gross rate, and with negligible core plasma contamination by sputtering. As with the present DiMES experiments, this would be due to very short W atom ionization distances, resulting in intense redeposition via electric and magnetic field acceleration, and impurity/plasma collisions. Such high redeposition of tungsten and other high-Z divertor materials has been likewise predicted by various past studies, such as REDEP/WBC code package analysis for tokamaks in general e.g. [1,2], DIII-D e.g. [3], and ITER [15]; ERO code modeling of JET ELM effects [16], and DIII-D [17]; and SOLPS code package modeling of ITER [18]. In view of the critical importance of this issue, the present additional findings for the DIII-D ELM experiments and code/data validation are encouraging. Regarding plasma contamination, however, our findings apply to the studied main plasma/divertor interaction area; there are other potential sources of core contamination, in DIII-D, ITER, etc., such as from sputtered or plasma-transient vaporized W, transported from the divertor to more remote boundary regions, and then re-emitted into a lower density near-surface plasma with significantly longer ionization distances. We also studied the effect, for both the ELM simulations and the L-mode analysis to be described, of some model changes such as in WI ionization rate coefficients (which rates are somewhat uncertain), and magnetic field and sheath related changes in impinging ion azimuthal angle distributions (e.g. isotropic incidence vs. the reference non-isotropic). No significant qualitative change in the results was seen, i.e. the simulation outputs still showed high redeposition, low self-sputtering, and similar values of related parameters. Likewise, substantially increasing the temperature gradients of the near-surface plasma, affecting the so-called thermal force, showed essentially no change in results. These insensitivities are due to the small tungsten transport distances, for both the ELM and L-mode plasmas studied, where strong Lorentz forces and plasma collisions dominate the impurity transport. L-mode plasma analysis Further model benchmarking activity was performed on L-mode DIII-D experiments conducted at high divertor electron density to roughly simulate, in steady state, the divertor plasma conditions that are present transiently during ELMs. The plasma discharge scenarios were similar to those described in [3], but at higher heating power and gas puffing rate to produce high-density L-mode divertor plasmas with sufficiently high electron temperature to cause measurable amounts of tungsten gross and net sputter erosion. The diagnostic setup was effectively identical to the H-mode cases discussed above and in [5]. Figure 2 shows the probe structure and geometry for the L-mode experiments. In addition to the central 1.5 cm diameter big W spot there are two 1 mm diameter small spots each in the toroidal and radial directions. The idea is that the small-spot exposure gives a good indication of gross erosion, i.e. having minimal redeposition, whereas the big spot data will show net erosion [6].
To interpret the data it is then vital to have highly accurate computations of the redeposition rates and the related self-sputtering contribution. We analyzed two DIII-D L-mode experiments that had well-defined, near-constant plasma conditions. These were a higher near-surface plasma temperature case, DiMES 'Cap #3', using 3 discharges with 10.80 s total exposure; and a 'Cap #1' case with lower T_e and higher N_e, with 5 discharges and 13.25 s exposure. Table 2 shows Rutherford backscattering (RBS) erosion data for the two L-mode experiments. Also measured by RBS were toroidal and radial tungsten erosion profiles on the DiMES probe through the 1.5 cm spot center. A key goal for our analysis is to predict/explain the observed central spot net erosion, for code validation purposes and to assess, for example, ITER divertor performance with full tungsten coverage. A secondary goal is to provide insight into the toroidal small-spot erosion. An issue for the latter is the asymmetry in the T1, T2 data. In theory, toroidal symmetry should be obtained in a tokamak. As shown in table 2, however, there is a 31% difference in T1, T2 erosion, oddly enough the same for both cases (although the measurement uncertainty percentage is high for the Cap #1 data). One contributor to asymmetry is different transfers of central spot material to the small spots. As discussed below, we computed this and it explains some of the asymmetry. The general methodology for DiMES experiments and our associated modeling is described in several publications, e.g. [3,4]. We note that the experiment of ref [3] used a 1 cm big spot and a single 1 mm small spot of W on a DiMES probe with Mo substrate; thus the new probe experiments analyzed here have more than twice the central spot area, use four small spots vs. one, with carbon substrate, and with L-mode exposure vs. the previous H-mode plasma. We expect, however, that general sputtered impurity transport features, i.e. the role of plasma collisions, electric and magnetic fields, etc., should be similar, and the new modeling results here should be highly useful to compare with and extend past results and conclusions. The above-described coupled code modeling technique for the ELM analysis is likewise used for the L-mode simulations, with the ITMC-DYN code supplying sputter yields and sputtered velocity distributions to REDEP/WBC. The L-mode plasma background inputs to WBC are from data-calibrated OEDGE/DIVIMP code calculations and various direct DIII-D B-field, geometry, etc., data. OEDGE uses a 1D fluid equation solver along individual flux surfaces (parallel to the field lines) starting from the target surface. The code uses Langmuir probe measurements of the target condition profiles (J_sat, T_e) as boundary conditions for the 1D fluid solver. Divertor Thomson scattering measurements of electron density and temperature along the field lines are used to further constrain the reconstruction of the experimental plasma conditions. Monte Carlo simulations of carbon erosion, transport and deposition, incorporating both physical and chemical sputtering of carbon, are then run using DIVIMP to determine profiles of carbon fluxes/fluences, charge state and average impact energy across DiMES. The DIVIMP computation of the absolute C/D ratio, however, is relatively uncertain, involving large-area, less clear, far-boundary plasma parameters, in contrast to the relative C ion fluxes and energies to the DiMES probe, which involve the better characterized plasma scrape-off layer region near the small DiMES probe.
We therefore use a preferable, data-based method, to be described, using the OEDGE/DIVIMP relative carbon ion state flux profiles and impinging energies, with the absolute C fluence calibrated to the RBS data. The resulting plasma inputs to WBC are the 2D near-surface (~0-5 cm above the divertor) temperature, density, magnetic and pre-sheath electric field, etc. spatial profiles, data-calibrated carbon ion fluences, and incident energy profiles, for each carbon ion charge state, across DiMES. In contrast to the ELM plasma case, for the L-mode plasma all sputtering is by carbon ions and self-sputtering, because D+ energies are below the W sputter threshold. Summarizing some key background plasma inputs to WBC, the plasma electron temperature at the DiMES probe center is 25 eV for the Cap #3 experiment and 11 eV for Cap #1, with corresponding pre-sheath electron densities shown in table 3. Ion temperature is about the same as electron temperature near the surface but is moderately higher elsewhere. Plasma flows to the divertor surface at the sound speed. Plasma parameters are nearly constant in the radial direction, at the divertor along the 1.5 cm central spot. Temperatures and density fall off radially outside this DiMES central region. Impinging carbon-on-tungsten ion charge states vary from +1 to +4, with energies mostly determined by sheath acceleration, e.g. at the Cap #3 center the average C+3 impinging energy is ~300 eV, of which 225 eV is due to sheath acceleration. The major modeling goal is to compute sputtered tungsten transport/redeposition parameters, and resulting net erosion, for the central spot. This requires first determining the gross carbon-on-tungsten sputtering rate, i.e. before any redeposition. One way of doing this is a first-principles computation of sputtering from the input plasma model, but this requires very accurate knowledge of the impinging plasma carbon content, in terms of the C/D fraction. Direct C/D data is unavailable for the L-mode plasma shots in question; however, as mentioned, we use a different procedure based directly on the RBS W erosion data. Namely, we use the average T1 and T2 measured gross tungsten erosion fluence, with a small adjustment, as the data-based carbon sputtering of the central spot, on the basis of assumed constant plasma parameters along the tokamak toroidal direction. The adjustment is made because the small spots actually have some redeposition as well as some self-sputtering. Using this approach, the inferred small spot gross tungsten erosion fluence due to carbon sputtering is obtained from the measured fluence F_RBS (table 2) averaged over the T1 and T2 spots, the small spot redeposition fraction R_S, and the small spot self-sputtering yield Y_ZS, with R_S and Y_ZS computed by WBC (see the relations sketched after this passage). To summarize, the WBC code tungsten atom launch velocities are determined by the ITMC-DYN code, per the incident carbon ion distributions (and a small self-sputtering contribution), and W atom/ion trajectories are computed using the background plasma profiles. Redeposition is computed. The central spot net tungsten erosion fluence is then determined by applying the computed big spot redeposition fraction R_B and average self-sputtering yield Y_ZB to the data-based C-W erosion. This computed F_NET value can then be directly compared to the central spot eroded fluence RBS data.
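The two relations referenced in the preceding passage did not survive extraction. One self-consistent way to write them, assuming the usual balance between carbon-induced sputtering, redeposition of a fraction R of the gross flux, and self-sputtering of the redeposited ions with yield Y, is sketched below; this is a reconstruction, not necessarily the authors' exact expressions.

```latex
% Small spots: infer the carbon-induced gross erosion fluence from the measured (net) RBS fluence,
% correcting for the small-spot redeposition fraction R_S and self-sputtering yield Y_ZS.
F_{\mathrm{C}} \;=\; F_{\mathrm{RBS}}\,\frac{1 - R_S\,Y_{ZS}}{1 - R_S}

% Big spot: predicted net erosion fluence from the data-based carbon-induced erosion,
% using the big-spot redeposition fraction R_B and average self-sputtering yield Y_{ZB}.
F_{\mathrm{NET}} \;=\; F_{\mathrm{C}}\,\frac{1 - R_B}{1 - R_B\,Y_{ZB}}
```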
To further describe the computational technique, the OEDGE/DIVIMP carbon ion flux fractions and energies to the big spot, for each charge state, are used in ITMC-DYN to compute the respective C-W sputter yields, sputtering fluences, and sputtered W velocity distributions. For the Cap #3 case the contribution fractions of carbon-sputtered tungsten are: 0.037 for C+1; 0.451 for C+2; 0.461 for C+3; and 0.051 for C+4. For Cap #1 we have zero sputter fraction for C+1 (impinging energy being below the W sputtering threshold); 0.670 for C+2; 0.305 for C+3; and 0.025 for C+4. WBC launches W atoms from the spots, per these carbon sputter results and W self-sputtering, and then calculates the resulting W ion transport and redeposition rates, using the near-surface OEDGE plasma parameters. Finally, the central spot net tungsten erosion fluence is computed using the WBC results in the relations above. To summarize, net erosion is computed using plasma, sputtering, and transport parameters from the three coupled code packages, with gross erosion calibration to the post-exposure DiMES RBS data. Table 3 and figure 3 show selected L-mode analysis results. For central spot sputtering, tungsten ionization mean free paths and transit times are short, with low resulting redeposited charge states and energies. The critical central spot redeposition fraction is high, as shown, of order 75%. Small spot redeposition is of order 15%, small but not insignificant. As for the ELM analysis, the L-mode predicted redeposition fraction for the probe/divertor as a whole is 100%. The average self-sputtering coefficient for redeposited W ions varies from 6% to 1% for the higher and lower T_e cases respectively, thus low in any event. The erosion fluence from the T1 'downstream' 1 mm spot is reduced by 4%-5%, relative to the T2 'upstream' 1 mm spot, due to transfer of sputtered tungsten from the central spot to the small spot along the toroidal magnetic field. There is negligible transport to the upstream toroidal spot. This can explain at least some (16% for Cap #3; 24% for Cap #1) of the experimentally observed 1 mm spots toroidal erosion asymmetry. Such asymmetrical transport is seen in our past tokamak divertor simulations as well, and is mostly due to tungsten ion collisions with the incident plasma flowing at or near the sound speed along the net magnetic field direction. The last row of table 3 shows the code and RBS data comparisons for the 1.5 cm spot net erosion fluence. The comparisons are good for both experiments. The code values are a sensitive function of the computed redeposition rate and, in general, of the numerous simulated sputtering and transport processes. The close agreement here provides a positive indication of the simulation validity. Also, the sputtered tungsten transport profiles shown in figure 3 for the Cap #3 case, showing differences in W deposition in both the upstream/downstream toroidal and inboard/outboard radial directions, are a reasonable match to the RBS profile data, an example of which is shown in figure 4. The code/data profile trend comparisons are likewise good for Cap #1.
Another modeling result is that the inferred L-mode impinging plasma C/D ratio (based again on the RBS DiMES erosion data calibration method) is about 3% for the Cap #3 experiment and 0.5% for Cap #1; unfortunately there is no carbon concentration data to compare with, but these are in a physically reasonable range. Although outside the main scope of our present study, we expended some effort on code computations and comparisons for the 1 mm radial spots erosion. Based again on calibration to the 1 mm spots T1 and T2 RBS data, and using WBC-ITMC with OEDGE background plasma inputs, we compute net erosion fluences very close (within experimental error) to the measured values (table 2) for the Cap #1 radial spots. However, for Cap #3 the simulation under-predicts the observed radial spots net erosion by about a factor of 2. This may reflect plasma background modeling issues, with the radial spots being ~1.6 cm away from the strike point, and/or experimental issues. Further study of this is planned. Discussion Comparing one key metric for the H- and L-mode plasmas in this study, for the high-power cases, the peak measured gross W erosion rate for the ELM plasma is ~10 times higher than the L-mode rate (the latter derived using the average/adjusted T1 and T2 spots RBS fluence data divided by the 10.8 s exposure time). Considering the different computed redeposition factors, 'R', i.e. 95% vs. 75% respectively, the predicted DIII-D net erosion rates, scaling as (1-R), are within a factor of two. For both plasma regimes, and per our earlier comment regarding extrapolation to future devices, sputtered tungsten redeposition rates for full-divertor high-Z coverage in ITER or DEMO-type reactors, along the high particle/heat loading area, would likely approach unity, with consequent minimal net sputter erosion. There are other factors involved, of course, including effects of higher or lower near-surface plasma densities and temperatures. Conclusions We used advanced, rigorous, coupled code package simulations to analyze DIII-D carbon-containing tungsten DiMES divertor probe sputter response for ELMing H-mode and L-mode plasma experiments. Such ELMing plasmas, in particular, are considered likely in ITER and post-ITER tokamak fusion reactors, with the computation of DIII-D C/W mixing effects likewise applicable to the ITER Be/W system. The simulations are believed to include all known relevant processes for sputtering, material mixing, and sputtered particle transport. They are, however, obviously dependent on accurate inputs of background plasma parameters/profiles and experimental conditions. With that qualification we see a good code/data match for key metrics of gross tungsten sputter erosion flux for the ELM cases, and net erosion and redeposition fluence for the L-mode cases. The simulations predict 100% redeposition of sputtered tungsten on the probe/divertor and no resulting core plasma contamination from the main plasma/divertor interaction region. These are similar to past predictions, for DIII-D and other devices with high-Z plasma facing surfaces, but primarily for non-ELMing plasmas. The DIII-D ELM plasma cases studied do not cause undue sputtering erosion of a tungsten divertor segment, with the predicted net erosion rates being similar to non-ELM cases.
In addition, the DIII-D DiMES modeling gives continued insight into sputtered impurity transport physics, including sub-mm W atom ionization mean free paths; strong redeposition forces due to electric and magnetic fields and impurity/plasma collisions; sub-µs W ion redeposition times; and resulting moderate redepositing energies and low self-sputtering. For both ELMing and non-ELM plasmas the present results, extrapolated to a full tungsten divertor, imply acceptable, very low, net sputter erosion rates. These conclusions, at least for the cases studied, are encouraging for ITER and future tokamak fusion reactors using high-Z plasma-facing divertor surfaces. There are several unresolved issues including a toroidal asymmetry in probe erosion data, and an issue for tungsten sputter response in the radial direction away from the divertor OSP. These are not highly significant in terms of our main conclusions but warrant further analysis. We plan further study of ongoing DIII-D/DiMES experiments, in general, and with particular focus on ELM exposures. laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. Disclaimer This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
6,977.6
2020-10-14T00:00:00.000
[ "Physics", "Engineering" ]
A New Few-Shot Learning Method of Bacterial Colony Counting Based on the Edge Computing Device Simple Summary Here, we proposed a few-shot learning bacterial colony detection method based on edge computing devices, which enables the training of deep learning models with only five raw data through an efficient data augmentation method. Abstract Bacterial colony counting is a time consuming but important task for many fields, such as food quality testing and pathogen detection, which own the high demand for accurate on-site testing. However, bacterial colonies are often overlapped, adherent with each other, and difficult to precisely process by traditional algorithms. The development of deep learning has brought new possibilities for bacterial colony counting, but deep learning networks usually require a large amount of training data and highly configured test equipment. The culture and annotation time of bacteria are costly, and professional deep learning workstations are too expensive and large to meet portable requirements. To solve these problems, we propose a lightweight improved YOLOv3 network based on the few-shot learning strategy, which is able to accomplish high detection accuracy with only five raw images and be deployed on a low-cost edge device. Compared with the traditional methods, our method improved the average accuracy from 64.3% to 97.4% and decreased the False Negative Rate from 32.1% to 1.5%. Our method could greatly improve the detection accuracy, realize the portability for on-site testing, and significantly save the cost of data collection and annotation over 80%, which brings more potential for bacterial colony counting. Introduction Bacterial Colony Counting (BCC) is a time consuming but important task for many fields such as microbiological research, water quality monitoring, food sample testing, and clinical diagnosis [1][2][3]. The predominant need for these applications is accurate quantification, and with the growing problems of microbial contamination, the need for on-site testing is also increasing daily [4][5][6]. Fast and accurate on-site testing can reduce the cost of transporting samples and decrease the risk of leakage and further contamination [7], which is important for the management of contaminants [8]. To achieve accurate counting, image analysis methods play the most important role in BCC, and there are three main types for the quantification now: manual counting, traditional image segmentation algorithms, and deep neural networks [9]. The bacterial colonies are difficult to be recognized when directly cultured on solid agar plates because they have the features of high density, low contrast, adherence, and overlap [10]. So, manual counting is still the gold standard for BCC because of the high precision, but manual counting is quite time consuming and cannot be adapted to high throughput industrial testing [2]. Traditional algorithms such as threshold segmentation, watershed, and wavelet transform provide possibilities for automation recognition, but they face difficulty in processing images with low contrast and a complicated overlap situation [11,12]. On the contrary, deep learning networks based on convolution neural networks (CNN) are good at dealing with complicated problems [13,14]. However, most deep neural networks are designed to be deployed on professional deep learning workstations, which have a high requirement for device configuration [9,15,16]. 
However, professional workstations are expensive and bulky, making it difficult for them to meet the requirement of portability, and for many application scenarios such as remote reservoirs and farms, samples can only be taken back to the laboratory for colony testing, which introduces a large delay and increases the cost of sample transportation [17][18][19]. In addition, the training of deep neural networks usually requires a large amount of data, but there is no large public dataset of bacterial colony images at present, so the cost of data collection and annotation would be high if the traditional deep learning strategy were adopted [20][21][22][23]. To solve these problems, we propose a new few-shot learning method that consists of the improved You Only Look Once (improved YOLOv3) for image detection and the Random Cover Targets Algorithm (RCTA) for data augmentation. The improved YOLOv3 adopts multi-scale features for object detection through the Feature Pyramid Network (FPN), which effectively improves the detection accuracy for small targets. Therefore, our method does not require special equipment such as colored Petri dishes or high-resolution cameras to enhance the contrast, which greatly reduces the detection costs. On the other hand, the training of most neural networks usually requires a large number of training images, but colony images are expensive to collect and time consuming to label. The RCTA proposed in this paper utilizes the prior knowledge of bacterial colony images to solve this dilemma, and can effectively increase the data by more than 300 times after combining with cutting and rotation operations. Thus, our network was able to be successfully trained with only five raw images and achieved a high accuracy of 97.4% on an edge computing device that cost less than USD 100. Our method greatly reduced the cost of data collection and annotation, increased the detection accuracy, and met the portability requirement of on-site testing. In addition, with TensorRT acceleration, our model could perform all detection on edge computing devices locally, eliminating the need to interact with the cloud for data transmission, which significantly reduced the dependence on the network and the associated costs. Data Preparation and Materials The strain used in this experiment was Escherichia coli (ATCC8739) [24][25][26][27], which was purchased from Guangdong Huankai Microbiology Technology Co., Ltd. The Tryptic Soy Broth (TSB) was used as the liquid medium [28], the Plate Count Agar (PCA) was used as the solid medium [29], and 0.9% sterile saline was used as the dilution solution [29,30]. The steps of colony culture are: first, inoculate the strain into 100 mL TSB medium and incubate it at 37 °C and 200 rpm for 20 h to obtain the bacterial solution [24]; second, according to the Chinese National Standard [29], dilute the bacterial solution with saline to 30-300 CFU/mL, take 1 mL of diluted bacterial solution and mix it with 15 mL PCA, then place it at 37 °C for 48 h. Equipment The training of the models was performed on a deep learning workstation with an Intel Core i7-9800X processor and two GeForce RTX 2080 Ti graphics cards. The testing of the models was performed on a Jetson Nano microcomputer, which had a 128-core NVIDIA Maxwell™ GPU and 4 GB of 64-bit LPDDR4 memory. The prototype of our portable BCC device based on the deep learning method is shown in Figure 1. Most detection devices for BCC require professional cameras with high resolution to enhance the contrast and thus improve the algorithm detection accuracy.
Considering the mobility and portability of detection, we choose the smartphone rather than professional cameras to afford photographs. The experimental images in this paper are all taken by the Sony IMX586. The camera of IMX586 has 48 megapixels with a resolution of 8000 * 6000 (width * height) and 0.8 µm per pixel, and its field of view is 88 • . Figure 1. The prototype of bacteria counting based on the edge computing device. The Petri dish contains the bacterial colonies, the smartphone is the photo device, and the data are transmitted to the jetson nano via Bluetooth. The jetson nano takes responsibility for the image detection, and the screen is responsible for visualizing the detection results. Dataset The dataset used in this paper can be divided into three types: train, validation, and test. The train and validation datasets were used to train the model, which contained 864 images and 96 images, respectively. All images of the train and validation dataset were augmented by 5 original pictures, that of 2560 * 2590 (width * height) pixels. During the augmentation procedure, we first used RCTA to randomly cover targets in each image, which expanded the number of images to 60. Secondly, we divided the image into four equal parts, which could magnify the relative proportion of targets and effectively augment the data four times. For the final augmentation, we adopted three rotation operations: 90 degrees, 180 degrees, and 270 degrees. Finally, we obtained 960 images of 1280 * 1290 (width * height) pixels. These pictures were randomly allocated as the ratio of 90% and 10% to form the training dataset and validation dataset. As for the test dataset, the main function of it was to measure the performance of trained models. So, we carried out 60 colony culture experiments to obtain 60 completely new images that were independent of the train and validation dataset. Then, we randomly selected 10 images to compare the performance of four models: simple threshold, comprehensive threshold, tiny YOLOv3, and improved YOLOv3. Method Overview Our method can be mainly divided into two stages as Figure 2 shows: training and prediction. Since there is a high cost attached to data collection and annotation, it is quite expensive to collect a large amount of dataset for BCC. So, we first need to effectively augment the original data by Random Cover Target Algorithm, which is proposed in Section 2.5. By rewriting the pixel values of target points, RCTA could change the structure of images and make the target points decrease regularly as iterations increase. After adopting RCTA, only the source images need to be fully manually annotated. For augmented images, RCTA would copy the annotation file of the source file, and we only need to remove the redundant labels rather than repeatedly mark all targets, which can greatly reduce the annotation cost by over 80%. With this method, the improved YOLOv3 could successfully achieve the few-shot learning with only 5 original training images and accuracy over 95%. Figure 2. Method overview. The input image is first augmented by RCTA, then bacterial colonies need to be manually annotated. The annotation files and the augmented images will be used as the input of improved YOLOv3 for training. The trained model needs to be converted to the intermediate ONNX format first and then converted to the TRT format that can be deployed on jetson nano with high processing speed. The colony detection and analysis will be performed by the converted model. 
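A minimal sketch of the crop-and-rotate part of the augmentation chain described above (the RCTA covering step itself is sketched further below, after the algorithm description). With 5 source images, 12 RCTA variants each, 4 quadrant crops, and the original plus three rotations, the counts reproduce 5 -> 60 -> 240 -> 960. Function names are illustrative.

```python
import cv2
import numpy as np

def quadrants(img: np.ndarray):
    """Split an image into four equal parts (magnifies the relative target size)."""
    h, w = img.shape[:2]
    return [img[:h // 2, :w // 2], img[:h // 2, w // 2:],
            img[h // 2:, :w // 2], img[h // 2:, w // 2:]]

def rotations(img: np.ndarray):
    """Return the image together with its 90/180/270 degree rotations."""
    return [img,
            cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE),
            cv2.rotate(img, cv2.ROTATE_180),
            cv2.rotate(img, cv2.ROTATE_90_COUNTERCLOCKWISE)]

def augment(rcta_variants):
    """rcta_variants: list of RCTA-augmented images derived from the raw photos (e.g. 60)."""
    out = []
    for img in rcta_variants:            # 60 RCTA-augmented images
        for crop in quadrants(img):      # x4 -> 240
            out.extend(rotations(crop))  # x4 (original + 3 rotations) -> 960
    return out
```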
After successfully training the model to a reliable accuracy with the darknet framework, we need to deploy the model on the embedded device to ensure the portability of BCC. However, since the computing power of embedded devices is much lower than that of workstations, we need to optimize the model with TensorRT to guarantee the processing speed. Since TensorRT does not support darknet models directly, we first need to convert the trained model and weights into the .onnx format. Then, TensorRT further converts the model to the trt format by building an inference engine that optimizes the CNN networks of the improved YOLOv3 through precision calibration, interlayer merging, and dynamic memory management. After optimization, due to the reduction in the number of data transfers between layers and the narrowing of the data precision range, the processing speed of the model is greatly improved. In addition, by precisely positioning the detection boxes, we can calculate the length, width, and size of each bacterial colony. The detection boxes of the improved YOLOv3 contain two sets of information: (x1, y1) and (x2, y2), which represent the coordinates of the top-left and bottom-right points of the detection box, respectively. So, the width of the bacterial colony is x2 − x1, the height is y2 − y1, and the area is width * height. With these data, we are able to count the size interval and number distribution of the corresponding bacterial colonies, as shown in the analysis of Figure 2.
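As a small illustration of the box-based measurements just described, the sketch below computes the width, height, and area for each detection box and bins the colonies by area; the bin width is an arbitrary assumed value, not a parameter from the paper.

```python
from collections import Counter

def colony_stats(boxes, bin_size=100):
    """Width/height/area per detection box plus a simple size histogram.

    boxes: list of (x1, y1, x2, y2) tuples in pixels (top-left, bottom-right).
    bin_size: area bin width in px^2 for the size distribution (assumed value).
    """
    sizes = []
    for x1, y1, x2, y2 in boxes:
        w, h = x2 - x1, y2 - y1
        sizes.append({"width": w, "height": h, "area": w * h})
    histogram = Counter(s["area"] // bin_size for s in sizes)
    return sizes, histogram

# Usage with two hypothetical detections
sizes, hist = colony_stats([(10, 12, 34, 35), (100, 90, 160, 150)])
print(sizes[0]["area"], dict(hist))
```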
A New Data Augmentation Method In recent years, deep neural networks have made great improvements in the field of target recognition, solving many complex problems that are difficult for ordinary algorithms to process. However, deep neural networks require a large number of labeled images for training, and the annotation cost is high for targets with complex features or for rare datasets [31]. A few-shot learning strategy can solve the problems of insufficient data and high labeling cost through effective data augmentation methods, and this strategy is becoming an increasingly important branch of deep learning. For bacterial colony images, the targets are relatively small and the number of colonies in a single image usually ranges from hundreds to thousands. Additionally, even under heated catalysis, the culture of bacterial colonies takes many hours. So, it is difficult to obtain a large amount of data, and the manual labeling required by traditional deep learning networks is quite time consuming. Therefore, in this paper we propose a data augmentation method, the Random Cover Targets Algorithm, to achieve effective few-shot learning at lower cost. Functionally, RCTA realizes the data augmentation by accurately changing the pixel values of the target areas to the values of the background, which changes the structure of the images and turns them into new images. To achieve this, RCTA first uses threshold segmentation to roughly separate the background from the effective targets. Then, we used the cv2.findContours() function to identify the contours of the effective targets, in order to obtain the area of each recognized region and set an approximate effective range for the secondary selection. The effective range was usually decided by past experience and manual fine-tuning, and it was (60, 3500) in this experiment. In the secondary selection, only the areas that fell within the effective range were kept. Third, RCTA chooses the central area, excluding the boundary area, as the final coverage area. For the effective points in the coverage area, RCTA stores their (x, y) coordinates and radius in an array and randomly selects one at a time as the parameter of the cv2.circle() function. Finally, in order to blend the target area with the surrounding background as much as possible, we used the average pixel values in the window (x_r to x_r+20, y_r to y_r+20) to calculate R_mean, G_mean, and B_mean. According to the gray value analysis, the pixel values of the background area were usually lower than 20, so we only used pixel values lower than 20 in the calculation. The calculation method is as follows: R_mean = (1/N) Σ R_(x,y), G_mean = (1/N) Σ G_(x,y), and B_mean = (1/N) Σ B_(x,y) (Formulas (1)-(3)). In Formulas (1)-(3), (x_r, y_r) represents the coordinates of the bacterial colony center; R_mean, G_mean, and B_mean represent the mean values of the red, green, and blue channels; R_(x,y), G_(x,y), and B_(x,y) represent the pixel values lower than 20 in the calculation area; and N represents the number of such pixels. In the actual experiments, the brightness of the background is not related to the surrounding environment, since the photography for BCC is usually carried out in a shaded environment to avoid reflected light spots. So, the background brightness for BCC is relatively uniform, and the average value of adjacent pixels achieves a good coverage effect. Formula (4) gives the calculation principle for the rotation operations, where x_o and y_o represent the coordinates in the original image; x_R and y_R represent the coordinates after rotation; h and w represent the height and width of the original image, respectively; H and W represent the height and width of the rotated image; and θ represents the rotation angle. For our experiments, we adopted three rotation angles: θ = 90°, 180°, and 270°. Due to the low brightness and weak contrast of the colonies, the threshold segmentation and cv2.findContours() function can only find a limited number of targets in the image, so they cannot be used as a precise quantitative method. However, they can effectively provide a benchmark for the background transfer procedure in RCTA. An example of an RCTA augmentation result is shown in Figure 3; through effective adjustment of the parameters and iterations, RCTA can amplify one original picture into hundreds, which hugely decreases the demand for original training data. In addition, the annotation time comparison between the traditional annotation method and RCTA is shown in Figure 4. Traditional annotation methods require fully manual annotation for each image, which is time consuming because the colony images usually contain a large number of targets. For this example, the annotation time for a single image using the traditional method is typically over 18 min. With RCTA, we only need to manually annotate the source image, and the subsequent augmented images copy the annotation file of the source image, so that only the redundant boxes left after masking need to be removed, which greatly reduces the average annotation time by over 80%.
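The covering step of RCTA described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' exact implementation; it assumes the candidate points are given as (x, y, radius) tuples in OpenCV's (column, row) convention and that the per-channel background mean is taken over dark pixels in a 20 x 20 window anchored at the colony center, as described in the text.

```python
import random
import cv2
import numpy as np

def rcta_cover_one(image, candidates, bg_threshold=20, win=20):
    """Cover one randomly chosen colony with a background-coloured disc.

    candidates: list of (x, y, radius) from the contour pre-selection step.
    Pixels below bg_threshold (per channel) in a win x win window next to the
    colony are averaged to estimate the local background colour (BGR order).
    """
    x, y, r = random.choice(candidates)
    window = image[y:y + win, x:x + win].reshape(-1, 3).astype(np.float64)
    dark = window[(window < bg_threshold).all(axis=1)]
    if len(dark) == 0:               # fall back to the whole window if nothing is dark
        dark = window
    b_mean, g_mean, r_mean = dark.mean(axis=0)
    # Filled circle (thickness -1) overwrites the colony with the background colour
    cv2.circle(image, (x, y), r, (int(b_mean), int(g_mean), int(r_mean)), -1)
    return image
```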
Training Strategy Since most CNN models are not sensitive to small targets, previous works that do not adopt any pre-processing to increase the contrast between the culture dish and the bacteria have a relatively poor recognition rate for small targets [9]. Although YOLOv3 improves the detection performance for small targets by dividing the grids at three different scales, which aims to predict the contours of targets falling into the grid through different densities and receptive fields, the dotted bacterial colonies are extremely small and are still difficult to recognize directly. To solve this problem, we experimented with a scaling mapping strategy between the cut images and the original images. By reducing the length and width of the image, we could increase the relative size of the objects. In our experiment, we divided the original image into four equal parts, so that the width and height of each part were reduced from 2560 * 2590 pixels to 1280 * 1295 pixels. Cutting the images did not change the absolute length of the bacterial colonies, but due to the decrease in image size, the relative size ratio of the bacterial colonies was increased by two times. The corresponding relationship between the bacterial colonies' diameters before and after cutting is shown in Formula (4), where L_cut and L_original represent the length of the bacterial colony, W_cut and W_original represent the width of the image, and D represents the number of division parts of the image. Structure and Acceleration The structure and backbone of the improved YOLOv3 are shown in Figures 5b and 6. Our test equipment has a low price, costing less than USD 100, so the embedded device has relatively limited processing power compared with workstations. If the network is deployed directly on the embedded device, it is difficult to compute efficiently. Therefore, we need to accelerate the model via TensorRT. First, we need to convert the darknet (.cfg) model into the ONNX (.onnx) model. Then, TensorRT builds the inference engine, which accelerates the CNN network of the improved YOLOv3. TensorRT takes the following steps to improve the inference speed: (1) Precision calibration. Deep neural networks need high-precision data to ensure accuracy during the training step, but the data precision can be moderately reduced during the inference process. So, we improve the inference speed by decreasing the data type from float32 to FP16. (2) Layer fusion. TensorRT fuses the structure of the deep neural network. For example, it fuses the conv, BN, and relu operations into one layer, so that separate calculations are no longer performed for each layer, which can significantly reduce the data transfer time. (3) Multi-stream execution. TensorRT performs parallel computation on different branches with the same input and dynamically optimizes the memory according to the batch size, which effectively reduces the transmission time. With the above optimizations, the data transfer efficiency and computation speed of the model can be greatly improved, so that the model can perform inference at a rate of more than 1 frame per second (FPS) on a Jetson Nano.
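The ONNX-to-TensorRT conversion described above can be sketched roughly as follows. This is only an illustration: the darknet-to-ONNX export is assumed to have been done by a separate script, the file name is a placeholder, and the exact Python API calls differ between TensorRT versions (the sketch follows the TensorRT 7.x-style builder API).

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_fp16_engine(onnx_path="yolov3_bcc.onnx"):
    """Parse an ONNX model and build an FP16 engine (TensorRT 7.x-style API)."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("ONNX parsing failed")
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)   # precision calibration: float32 -> FP16
    # Layer fusion and memory scheduling are applied automatically during the build.
    return builder.build_engine(network, config)
```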
Comparative Methods We choose the simple threshold, comprehensive threshold, and tiny YOLOv3, which are commonly used in the field of BCC, as the comparative methods. Furthermore, we adopt the human count result as the gold standard. Simple threshold segmentation first uses the cv2.cvtColor() function to convert the input image to grayscale mode, then uses the cv2.threshold() function to segment the grayscale image. Since the contrast between target and background is low, the segmentation effect of an automatic threshold value is relatively poor, so we need to manually determine and adjust an appropriate threshold value. After the threshold value is determined, the cv2.threshold() function regards the pixels below the threshold value as background and the pixels above the threshold value as valid targets. Since simple threshold segmentation treats every continuous region above the threshold value as an effective target, it is very susceptible to noise interference. Comprehensive threshold segmentation introduces a size filtering function on the basis of simple threshold segmentation. Using the cv2.findContours() function, the comprehensive threshold calculates the circular contour of the simple threshold segmentation result, from which the radius and area of each region can be derived for classification. Then, we need to adjust and set min_area and max_area manually. Finally, only the targets that fall within the range (min_area, max_area) are regarded as effective. Therefore, comprehensive threshold segmentation can effectively filter out the interference of small noise, but bacterial colony images contain a large number of overlapping targets that are difficult to process by threshold segmentation, so the accuracy rate of the comprehensive threshold is still low. Due to the computing power limitation of edge computing devices, tiny YOLOv3 is one of the few deep neural networks that can be deployed on a Jetson Nano with relatively good performance. The structure of tiny YOLOv3 is shown in Figure 5a. Tiny YOLOv3 removes the residual layers and some feature layers, and it only retains a backbone of 6 conv+max layers with 2 independent prediction branches, with sizes of 13 * 13 and 26 * 26, to extract features and make predictions. Tiny YOLOv3 greatly decreases the network depth and reduces the performance requirements of computing devices, but it correspondingly sacrifices the accuracy of feature extraction, so its accuracy rate is inferior to improved YOLOv3.
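A minimal sketch of the two threshold-based baselines, using the OpenCV calls named above. The threshold value and the (min_area, max_area) bounds are the manually tuned parameters mentioned in the text, and the OpenCV 4.x return signature of cv2.findContours is assumed.

```python
import cv2

def count_colonies_threshold(image_bgr, thresh=20, min_area=None, max_area=None):
    """Simple threshold count; pass min_area/max_area for the comprehensive variant."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4.x signature
    if min_area is None:                        # simple threshold: keep every region
        return len(contours)
    areas = [cv2.contourArea(c) for c in contours]
    return sum(min_area < a < max_area for a in areas)        # comprehensive threshold

# Hypothetical usage with the area bounds reported in the text
# n = count_colonies_threshold(img, thresh=20, min_area=60, max_area=3500)
```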
Table 1 and Figure 7 show the test results of the different algorithms on low-contrast bacterial colony images. The test dataset contains a total of 60 images, and we adopted a sampling survey strategy for verification, randomly taking out ten images and then calculating the accuracy with the human measurement results as the gold standard. In Table 1, True Positive (TP) represents the number of positive targets that are correctly identified as positive; False Positive (FP) represents the number of negative targets that are incorrectly identified as positive; False Negative (FN) represents the number of positive targets that are incorrectly identified as negative; True Negative (TN) represents the number of negative targets that are correctly identified as negative; Average Accuracy (ACC) represents the average percentage of positive and negative targets that are correctly identified; True Positive Rate (TPR) represents the percentage of positive targets correctly identified as positive; False Negative Rate (FNR) represents the percentage of positive targets incorrectly identified as negative [32,33]; Detection Time (DT) represents the average processing time for each image. In our experimental results, since TN represents the number of pixels that belong to the background, which is an unquantifiable and unnecessary parameter, TN is set to 0 by default [34]. The human reference represents the results of the manual counting method for the colony images. Additionally, the formulas used are calculated as follows: FNR = FN/(TP + FN), ACC = (TN + TP)/(TN + TP + FN + FP), TPR = TP/(TP + FN). Since there is no True Negative situation in our samples, TN defaults to 0. Results Comparison Among the three comparative methods, the comprehensive threshold and simple threshold belong to traditional algorithms, and they are the most commonly used methods for bacterial colony counting. However, they face difficulty when dealing with overlapping and edge targets. Therefore, the accuracy of the traditional algorithms is not satisfying. The simple threshold is highly susceptible to small noise interference and generates a large number of false-positive targets, so its accuracy is only 4.4%. The comprehensive threshold was based on the simple threshold with an added size selection function. So, most small noise could be effectively removed, but the accuracy of the comprehensive threshold was still as low as 65% due to the adhesion and blurring of contours between bacterial colonies. It is difficult for traditional algorithms to distinguish these complex targets effectively. For example, if there are multiple adhering or overlapping targets, traditional algorithms usually incorrectly consider them as the same target, which affects the accuracy. Table 1. Performance comparison. This table compares the performance of the five methods for bacterial colony detection. Human reference is the manual count result, which is used as the gold standard; ACC represents the average accuracy; TPR represents the percentage of colonies that are correctly identified; FNR represents the percentage of colonies that are incorrectly identified as background; DT(s) represents the average detection time in seconds for each image. The simple threshold has the most noise and false-positive results due to the small difference in gray value between the colony and background; comprehensive threshold segmentation reduces noise interference but has more false-negative results; tiny YOLOv3 has a large improvement in accuracy but is less effective for small targets; improved YOLOv3 has the optimal results. As for tiny YOLOv3, it is one of the few lightweight deep learning networks that can be deployed on a Jetson Nano directly; it improved the accuracy to 85% compared with the traditional algorithms. However, due to its shallow network depth, its recognition rate for edge targets and overlapping targets is relatively poor when compared with improved YOLOv3. In contrast, the improved YOLOv3 proposed in this paper adopts the FPN structure, as Figure 6 shows, which passes the features extracted by the deeper convolution layers back to the shallower layers, allowing a trade-off between speed and accuracy. For example, there are targets of different sizes in the bacterial colony images, and the areas of dotted bacterial colonies and circular bacterial colonies differ greatly, so the improved YOLOv3 can recognize large circular bacterial colonies through the coarse 13 * 13 feature map and small dotted bacterial colonies through the fine 52 * 52 feature map, which effectively improves the detection efficiency. Improved YOLOv3 not only ensured feasibility on a Jetson Nano but also retained the detection speed, decreasing the FNR to 2% and increasing the accuracy to over 97% at a processing speed of more than 1 FPS. The manual counting method is the gold standard and has the highest accuracy, but it also takes the most time. The improved YOLOv3 has accuracy comparable to the manual counting method, while its detection time is greatly reduced.
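For reference, the evaluation metrics defined above can be computed directly from the confusion counts; the example counts below are hypothetical.

```python
def evaluate(tp, fp, fn, tn=0):
    """ACC, TPR and FNR as defined in the text; TN defaults to 0 for this task."""
    acc = (tn + tp) / (tn + tp + fn + fp)
    tpr = tp / (tp + fn)
    fnr = fn / (tp + fn)
    return {"ACC": acc, "TPR": tpr, "FNR": fnr}

# Hypothetical counts for one test image
print(evaluate(tp=290, fp=5, fn=6))
```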
Discussion The principle of the simple threshold is to divide the image into valid and background parts by a threshold value, where pixels with a gray value below the threshold are set to 0 and the part above the threshold is set to 1. This method is susceptible to noise interference, and it is also difficult for it to handle targets on the edge of the culture dish or overlapping targets. Therefore, it is only applicable to clean pictures without noise and has the lowest accuracy among all methods. The comprehensive threshold is an upgraded version of the simple threshold. It uses the findContours() function to calculate the radius of each target on the basis of the simple threshold segmentation, adding a size-based filtering function that can effectively reduce the interference of small noise. However, the comprehensive threshold is also unable to deal with overlapping and adherent colonies, and therefore its accuracy rate can only reach about 65%. Additionally, tiny YOLOv3 is a lightweight deep neural network that is able to identify more of the overlapping and adhering bacterial colonies and significantly improves the performance compared with the traditional methods. However, since the structure of tiny YOLOv3 is relatively simple, its recognition performance for edge targets and extremely small targets is not as good as that of improved YOLOv3, so its accuracy rate is around 85%. In addition, traditional deep learning algorithms usually require at least hundreds of raw images to train deep neural networks effectively. An adequate amount of training data is one of the most effective ways to avoid the over-fitting problem of deep learning models. Our method can effectively change the image structure through data augmentation, so that the augmented images can be regarded as new training images by the deep neural network, which reduces the original data requirement to fewer than 10 images. Compared with other traditional neural networks that require at least hundreds of training images [20,35,36], our method reduces the data collection cost by more than 90% while maintaining a high accuracy rate. Conclusions BCC plays a vital role in water contamination monitoring, food sample testing, and biological experiments. As application scenarios widen, accurate on-site testing is becoming a daily requirement for BCC, and its three most important challenges are accuracy, data shortage, and portability. Currently, many labs and companies still adopt the manual counting method for BCC, because BCC images often have low contrast and overlapping colonies, making them difficult to count accurately. Commonly used traditional algorithms such as the simple threshold and comprehensive threshold often require special colored Petri dishes or professional photographic devices to enhance the contrast between the bacterial colonies and the background, but these devices increase the cost of BCC. The development of deep learning brings new possibilities for BCC.
However, general deep neural networks usually require a large amount of training data and need to be deployed on professional workstations, which makes it difficult to meet the portability requirement of on-site testing and entails a high cost of data collection. In this paper, we propose a new few-shot learning method that consists of improved YOLOv3 and RCTA to solve the above problems. This method enables us to train a network with a detection accuracy of over 97% on a Jetson Nano using only five raw images. RCTA can effectively augment the original training data more than 300-fold and reduce the annotation time by over 80%, which greatly reduces the cost of data collection and annotation. Improved YOLOv3 can be deployed on a low-cost embedded device while achieving high detection accuracy, which meets the portability and precision requirements of on-site testing. Compared with traditional algorithms, improved YOLOv3 greatly improved the detection accuracy for complex targets such as overlapping and adherent colonies, decreasing the FNR from 32% to 1.5% and increasing the ACC from 64% to 97%. Additionally, compared with tiny YOLOv3, one of the most widely used deep neural networks for embedded devices, our method decreased the FNR by over 6% and increased the average accuracy by over 10%. Moreover, our few-shot learning strategy is able to train the deep learning networks with fewer than ten raw images, an amount of data with which it is difficult to train any traditional neural network effectively. Furthermore, our model can achieve an inference rate of more than 1 FPS on edge computing devices after acceleration, which brings more possibilities for accurate on-site testing in the field of BCC. Author Contributions: B.Z. designed and developed the whole methodology, algorithm, and the testing experiment parts, as well as finished the writing, review and editing of the paper; Z.Z. and W.C. provided the experiment images of bacteria colonies; X.Q. and C.X. were responsible for the design of the mechanical structure of the prototype; W.W. supervised this project. All authors have read and agreed to the published version of the manuscript. Acknowledgments: We would like to express our appreciation to the Project of Hetao Shenzhen-Hong Kong Science and Technology Innovation Cooperation Zone. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript:
7,425.6
2022-01-19T00:00:00.000
[ "Computer Science" ]
Morphology, Molecular Genetics, and Bioacoustics Support Two New Sympatric Xenophrys Toads (Amphibia: Anura: Megophryidae) in Southeast China Given their recent worldwide declines and extinctions, characterization of species-level diversity is of critical importance for large-scale biodiversity assessments and conservation of amphibians. This task is made difficult by the existence of cryptic species complexes, species groups comprising closely related and morphologically analogous species. The combination of morphological, genetic, and bioacoustic analyses permits robust and accurate species identification. Using these methods, we discovered two undescribed Xenophrys species, namely Xenophrys lini sp. nov. and Xenophrys cheni sp. nov., from the middle range of the Luoxiao Mountains, southeast China. These two new species can be reliably distinguished from other known congeners by morphological and morphometric differences, distinctness in male advertisement calls, and substantial genetic distances (>3.6%) based on the mitochondrial 16s and 12s rRNA genes. The two new species, together with X. jinggangensis, are sympatric in the middle range of the Luoxiao Mountains but may be isolated altitudinally and ecologically. Our study provides a first step to help resolve previously unrecognized cryptic biodiversity and provides insights into the understanding of Xenophrys diversification in the mountain complexes of southeast China. Introduction Accurate taxonomic recognition is a prerequisite for preserving amphibian biodiversity, especially in the context of amphibian declines and extinctions worldwide [1]. This fundamental task is challenged by the existence of cryptic species complexes [2], groups consisting of two or more species that are reproductively isolated from each other but virtually identical in morphology [3]. Frogs and other groups of amphibians are known to harbor substantially underestimated cryptic species diversity [4]. Hence, unambiguous species delineation may be difficult in some frog groups based exclusively on morphological characteristics [5], but it is very important to provide a solid basis for conservation management, as well as a deeper understanding of macroevolutionary patterns in amphibians [6]. The horned toads, Megophrys Kuhl & Van Hasselt, 1822 and Xenophrys Günther, 1864, in the family Megophryidae, are an exemplary group with high cryptic species diversity [7][8][9][10], making their systematics and taxonomy poorly understood and considerably debated, even though herpetologists have employed various taxonomic methods [8,[11][12][13][14]. Pending comprehensive phylogenetic and morphological research, we followed the recommendations of Li & Wang [13] and Pyron and Wiens [15] that Xenophrys is distinct from Megophrys and that all previously known Megophrys species in China should be assigned to the genus Xenophrys. Currently, the genus Xenophrys contains 46 species and is distributed in Southeast Asia from the southern and eastern Himalayan regions to Borneo [16]. There are 31 species of Xenophrys recognized from China, among which only five have a body length of less than 50 mm in both males and females in southeast China [10]. Species in this group include X. boettgeri and X. kuatunensis in the Wuyi Mountains, X. huangshanensis in the Huangshan Mountains, and the recently described X. jinggangensis from Mount Jinggang (26°13′-26°52′N, 113°59′-114°18′E), situated on the border between the Jiangxi and Hunan provinces [10].
Notably, three new species of megophryid toads were described from northeast India very recently [9]. Together with X. jinggangensis, these discoveries raise the possibility that further cryptic species might be discovered in Southeast Asia and China. However, due to the morphological similarity of megophryid toads, morphological characters alone may not be sufficient for taxonomic diagnosis. To implement unbiased species delineation in cryptic amphibians, integrating evidence from morphology, DNA sequence data, and behavior may be critically necessary [17]. In particular, molecular genetic approaches enable us to decipher phylogenetic relationships and thus the evolutionary history of a large number of species, thereby solving taxonomic uncertainties [18]. In addition, bioacoustic analysis is a very useful approach in species diagnosis because many male frogs and toads have species-specific advertisement vocalizations during breeding seasons [19]. As in birds, these sounds are courtship signals used for mate choice by females and in male-male competition, and thus present an important behavioral pre-mating reproductive barrier under sexual selection [20]. Indeed, combining multiple lines of evidence from different methods regularly assists in discovering unrecognized amphibian species, and this has substantially expanded known amphibian species diversity across the world, e.g., [21][22][23][24]. To understand the distribution and ecology of the newly described X. jinggangensis, we carried out extensive herpetological surveys during 2011-2013 around the middle range of the Luoxiao Mountains, SE China. Interestingly, we also found two small and unknown megophryid toads. Both of these species are small in body length (<45 mm), and they can be assigned to the genus Xenophrys based on the following characteristics: head broad and depressed, tympanum distinct, tubercles on the outer edge of the upper eyelids short, tubercles on the snout absent, no mid-dorsal fold, no black horny spines on dorsum, hind limbs long, and heels overlapping [13]. Furthermore, we also noticed that the vocalizations and phylogenetic relationships of these two unknown Xenophrys toads seemed to be distinct from those of X. jinggangensis. Thus, we conducted morphological, bioacoustic, and molecular genetic analyses to resolve the taxonomic status and affinities of these two taxa. Based on all this evidence, we describe two new species from southeast China. Ethics Statement Permissions to visit the study sites were issued by the management administrations of the reserves. We obtained permissions for specimen and tissue collection from the Jiangxi Provincial Forestry Bureau. This study did not involve endangered or protected species. All the animal operations were approved by the Institutional Ethical Committee of Animal Experimentation of Sun Yat-sen University and strictly complied with the ethical conditions of the Chinese Animal Welfare Act (20090606). DNA sample collection To reconstruct the phylogenetic relationships among Xenophrys species in southern China, we collected samples of X. jinggangensis, X. cheni sp. nov., and X. lini sp. nov. from the middle range of the Luoxiao Mountains, situated on the border between the Jiangxi and Hunan Provinces, X. brachykolos from Hong Kong, X. boettgeri from Mt. Yangjifeng and Mt. Tongbo, Jiangxi Province, X. kuatunensis from Guadun (Kuatun) Village, Fujian Province, situated in the Wuyi Mountains, X. huangshanensis from Wuyuan County, Jiangxi Province, situated in the Huangshan Mountains, X. mangshanensis from Mt.
Nanling, Guangdong Province, and X. minor from central Sichuan Province (Figure 1). An additional 16s rRNA sequence of X. minor deposited in GenBank was incorporated into our dataset. The data for all voucher specimens of the above species are shown in Table 1. All muscle tissue was preserved in 95% ethanol and stored at −80 °C. DNA extraction and sequencing Genomic DNA was extracted from the muscle tissue using a standard phenol-chloroform extraction protocol [25]. We amplified a fragment of the mitochondrial 16s rRNA gene from Xenophrys species using the primer pair L3975 and H4551 [26], and the mitochondrial 12s rRNA gene using the primer pair Fphe40L and 12S600H [27]. PCR amplifications were performed in a reaction volume of 25 µl containing 100 ng of template DNA, 0.3 µM of each PCR primer, and 10 µl Premix Ex Taq™ (Takara, Dalian, China). The PCR conditions for 16s rRNA were an initial denaturation step at 94 °C for 1.5 min; 33 cycles of denaturation at 94 °C for 45 s, annealing at 55 °C for 45 s, and extension at 72 °C for 90 s; and a final extension step at 72 °C for 10 min. The PCR conditions for 12s rRNA were an initial denaturation step at 96 °C for 2 min; 35 cycles of denaturation at 94 °C for 15 s, annealing at 55 °C for 1 min, and extension at 72 °C for 1 min; and a final extension step at 72 °C for 10 min. PCR products were purified with the GenElute™ PCR clean-up kit (Sigma-Aldrich, Dorset, UK). The purified products were sequenced with both forward and reverse primers using a BigDye Terminator v 3.1 Cycle Sequencing Kit (Applied Biosystems, Carlsbad, CA, USA) according to the manufacturer's instructions. The products were sequenced on an ABI Prism 3730 automated DNA sequencer (Applied Biosystems) at the Beijing Genomics Institute. All sequences have been deposited in GenBank (Table 1). Phylogenetic analyses The resulting sequences were first aligned using the Clustal W algorithm [28] in BioEdit 7.0 [29] with default parameters, and the alignment was checked and manually revised where necessary. The degree of polymorphism of our sequences was assessed using DnaSP 5.10.1 [30]. The General Time-Reversible model [31], assuming a gamma-shaped distribution of rates across sites [32], was selected for both 16s rRNA and 12s rRNA as the best-fit nucleotide substitution model using Akaike's Information Criterion [33] in jModelTest 1.0 [34]. The aligned 16s rRNA and 12s rRNA datasets were analyzed separately, using both maximum likelihood (ML) implemented in PhyML [35] on the ATGC online server (http://www.atgc-montpellier.fr/), and Bayesian inference (BI) using MrBayes 3.12 [36]. For the ML analysis, the bootstrap consensus tree inferred from 1000 replicates was used to estimate nodal support of the inferred relationships on the phylogenetic trees. Branches corresponding to partitions reproduced in less than 50% of the bootstrap replicates were collapsed. For tree searching and optimization, we applied the strategies described in Liang et al. [37]. For the BI analysis, two independent runs, each comprising four Markov Chain Monte Carlo simulations, were performed for two million iterations and sampled every 1000th step. The first 25% of the samples were discarded as burn-in. Convergence of the Markov Chain Monte Carlo simulations was assessed by checking the average standard deviation of split frequencies between the two runs using Tracer v.1.4 (http://tree.bio.ed.ac.uk/software/tracer/). We further applied ML and BI to the joint dataset of 16s and 12s rRNA using the above settings.
For both ML and BI, the homologous sequences of Paramegophrys oshanensis and Megophrys nasuta from GenBank were chosen as outgroups (Table 1). Apart from phylogenetic tree-based methods, we also calculated the 'net between-group mean distance' between Xenophrys species for both genes using the above-mentioned nucleotide substitution models in MEGA 5.2 [38]. This method takes within-group variation (at the individual level) into account when calculating the average distances between taxa. Furthermore, to incorporate both the 16s rRNA and 12s rRNA datasets, we generated Neighbor-Net (NN) networks [39] of the Xenophrys samples using uncorrected p-distances as implemented in SplitsTree 4.10 [40]. Bioacoustics analysis We recorded the advertisement calls of male Xenophrys species (X. jinggangensis, X. cheni sp. nov., X. lini sp. nov., X. boettgeri, X. huangshanensis, and X. kuatunensis) with a SONY ICD-MX20 IC recorder at our sampling localities in southern China during 2011-2012. The calls of these species were frequently heard from June to September; temperatures varied within a small range of 15-18 °C during the sound sampling period. Stereotypical male calls of each species, lasting between 20 s and 2 min, were recorded in SONY MSV format and converted to 16-bit mono PCM format with resampling at 22 kHz. The spectrograms of the male calls were generated in the Avisoft-SASLab Lite software. We defined a continuous vocalization with pauses of less than 1 second as a call, and the smallest non-split syllable as a note. The duration and frequency parameters of the vocalizations, such as call duration, notes per call and per second, note duration, inter-note interval, high and low frequency, and frequency bandwidth, were measured from the spectrograms. Differences between these Xenophrys species were tested with one-way ANOVA and Kruskal-Wallis tests in IBM SPSS Statistics 21. Morphological analysis Specimens for morphometric analysis were fixed in 10% buffered formalin and then transferred to 70% ethanol for long-term preservation in The Museum of Biology, Sun Yat-sen University (SYS), Guangzhou, Guangdong Province, China. Measurements were made with digital calipers (Neiko 01407A Stainless Steel 6-Inch Digital Caliper, USA) to the nearest 0.1 mm. Abbreviations used are SVL = snout-vent length; HDL = head length, from the tip of the snout to the articulation of the jaw; HDW = head width, between the left and right articulations of the quadratojugal and maxilla; SNT = snout length, from the tip of the snout to the anterior corner of the eye; EYE = eye diameter, from the anterior to the posterior corner of the eye; IND = internasal distance; IOD = interorbital distance; TMP = tympanum diameter; TEY = tympanum-eye distance, from the anterior edge of the tympanum to the posterior corner of the eye; HND = hand length, from the distal end of the radioulna to the tip of the distal phalanx of III; RAD = radioulna length; FTL = foot length, from the distal end of the tibia to the tip of the distal phalanx of III; TIB = tibia length; TaL = tail length in tadpoles, from the tip of the tail fin to the vent. Differences in these parameters between the two new species (males only) were further analyzed with the Mann-Whitney U test in IBM SPSS Statistics 21.
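The group comparisons above were run in IBM SPSS Statistics 21; purely as an illustration of the same tests, the sketch below uses scipy.stats instead, with placeholder measurement values that are not the paper's data.

```python
from scipy import stats

# Hypothetical notes-per-second values for three species (placeholder numbers)
calls_lini = [6.1, 5.8, 6.4, 6.0, 5.9, 6.2]
calls_cheni = [2.1, 2.4, 2.0]
calls_jinggangensis = [5.0, 5.3, 4.8, 5.1]

# Kruskal-Wallis test across the three sympatric species
h_stat, p_kw = stats.kruskal(calls_lini, calls_cheni, calls_jinggangensis)

# Mann-Whitney U test on a morphometric parameter between the two new species
svl_lini = [34.1, 36.2, 37.5, 39.1, 39.7]
svl_cheni = [26.2, 27.2, 28.0, 29.5]
u_stat, p_mwu = stats.mannwhitneyu(svl_lini, svl_cheni, alternative="two-sided")

print(f"Kruskal-Wallis p = {p_kw:.4f}, Mann-Whitney p = {p_mwu:.4f}")
```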
Nomenclatural Acts The electronic edition of this article conforms to the requirements of the amended International Code of Zoological Nomenclature, and hence the new names contained herein are available. Molecular phylogenetic analyses We obtained a 422 bp mitochondrial 16s rRNA sequence alignment from 52 Xenophrys samples, and a 498 bp mitochondrial 12s rRNA sequence alignment from a smaller number of samples (N = 25) due to the lack of DNA or amplification difficulties (Table 1). The 16s rRNA alignment yielded 115 variable sites, of which 98 were parsimony-informative, with five insertions. The 12s rRNA alignment yielded 84 variable sites, of which 74 were parsimony-informative, with four insertions. Indels were removed before the phylogenetic analyses. For 16s rRNA, the ML and BI phylogenetic approaches resulted in virtually identical topologies, and all terminal clades had relatively high support values. Both bootstrap support and posterior probabilities for the clades representing X. cheni sp. nov. and X. lini sp. nov. were high (>80% for bootstrap proportions and 1.0 for Bayesian posterior probability, respectively; Figure 2). X. cheni sp. nov., X. lini sp. nov., X. jinggangensis, and X. brachykolos might form a clade. The large-bodied X. mangshanensis was basal to the remaining eight small-sized congeners on the phylogenetic tree and showed large net average distances to the others (12%-16%, Table 2). Phylogenetic analyses based on 12s rRNA also revealed good support for the clades representing the two new species (Figure S1), as did the results inferred by ML and BI based on the joint dataset of the two mitochondrial loci (Figure S2). The net average genetic distance between the two new species was 3.6% (16s rRNA) and 5.7% (12s rRNA), respectively (Table 2). This differentiation was comparable to the divergence between X. jinggangensis and X. brachykolos. Interestingly, the net average genetic distance between X. boettgeri and X. huangshanensis was only 0.005, in contrast to the values among the remaining comparisons (Table 2). Furthermore, consistent with the phylogenetic gene trees, the inferred multilocus network based on the concatenated sequences of 16s and 12s rRNA (910 bp) strongly supported the genetic distinctness of X. cheni sp. nov. and X. lini sp. nov. (Figure 3).
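The alignment summaries and distances reported above come from DnaSP, MEGA, and SplitsTree; as an illustration of what those quantities are, here is a minimal sketch that counts variable and parsimony-informative sites and computes the uncorrected p-distance on a toy, ungapped alignment (placeholder sequences, not the Xenophrys data).

```python
from itertools import combinations

def site_summary(alignment):
    """Count variable and parsimony-informative sites in an ungapped alignment."""
    variable = informative = 0
    for column in zip(*alignment):
        counts = {base: column.count(base) for base in set(column)}
        if len(counts) > 1:
            variable += 1
            # parsimony-informative: at least two states each occur at least twice
            if sum(c >= 2 for c in counts.values()) >= 2:
                informative += 1
    return variable, informative

def p_distance(seq1, seq2):
    """Uncorrected p-distance: proportion of sites that differ."""
    return sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)

# Toy alignment of three equal-length sequences
aln = ["ACGTACGT", "ACGAACGT", "ACTAACGA"]
print(site_summary(aln))
print(max(p_distance(a, b) for a, b in combinations(aln, 2)))
```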
Acoustic analysis of advertisement calls We recorded the male calls of 13 individuals from the three sympatric Xenophrys species in Mt. Jinggang: X. jinggangensis, X. cheni sp. nov., and X. lini sp. nov. (four, three, and six individuals, respectively). We also recorded the vocalizations of X. kuatunensis (N = 2), X. huangshanensis (N = 3), and X. boettgeri (N = 3), permitting additional comparisons (the typical calls of the six analyzed Xenophrys species are available in Appendix S1). The male call of X. cheni sp. nov. has a much slower pace than those of X. jinggangensis and X. lini sp. nov. (notes per second, see Table 3 and Figure 4). Compared with X. jinggangensis and X. cheni sp. nov., X. lini sp. nov. has a unique call pattern: in each single call, the inter-note intervals gradually increase as the call comes to an end (Table 3 and Figure 4). The Kruskal-Wallis test of the three individual-independent vocalization parameters (call duration, notes per call, and notes per second) showed significant differences both among the three sympatric species in the Luoxiao Mountains and across all six Xenophrys species studied (p<0.05 for each parameter and group). The test on the five call-independent vocalization parameters (note duration, inter-note interval, high frequency, low frequency, and frequency bandwidth) also revealed significant differences within the three sympatric species and across the six Xenophrys species recorded (Kruskal-Wallis test, p<0.05 for each parameter and group). Morphological comparisons Specimens examined for morphometric analysis are listed in Table 4, and other specimens used for morphological comparisons are listed in Appendix S2. We compared the two new Xenophrys species with their 46 known congeners. Of these 46 species, 22 (body length >45 mm) were larger than the two new species (body length <40 mm). Of the remaining 23 species, the two new species can be distinguished from 11 of them (X. ancrae, X. jinggangensis, X. daweimontis, X. oropedion, X. pachyproctus, X. palpebralespinosa, X. parallela, X. parva, X. serchhipii, X. zhangi, and X. zunhebotoensis) by the absence of vomerine teeth; the two new species differ from nine of them (horn-like tubercle absent in X. binchuanensis, X. pachyproctus, X. wawuensis, X. wuliangshanensis, X. wushanensis and X. zhangi; large horn-like tubercle present in X. jinggangensis, X. palpebralespinosa and X. parallela) because they have a small horn-like tubercle at the edge of the upper eyelid; of the remaining eight species, the two new species differ from six of them by having wide lateral fringes. Diagnosis: Xenophrys lini sp. nov. is characterized by the combination of the following characters: (1) a small-sized species with 34.1-39.7 mm SVL in adult males and 37.0-39.9 mm SVL in adult females; (2) head length approximately equal to head width (HDL/HDW ratio 1); (3) snout short, obtusely pointed in dorsal view, almost truncate and sloping backward to the mouth in profile, protruding well beyond the margin of the lower jaw; (4) vomerine teeth absent; (5) margin of the tongue smooth, not notched behind; (6) hind limbs elongated, heels overlapping and tibio-tarsal articulation reaching the anterior corner of the eye; (7) relative finger length II ≤ I < IV < III; (8) lateral fringes on the digits wide, toes with rudimentary webbing at their bases; (9) subarticular tubercle on each digit distinct; (10) dorsal skin smooth with scattered granules, usually a few curved weak ridges on the back, several tubercles on the flanks; (11) ventral surface smooth; (12) a small horn-like tubercle at the edge of the eyelid; (13) supratympanic fold narrow, light colored, curving from the posterior corner of the eye to a level above the insertion of the arm; (14) light brown or olive above, with a dark interorbital triangular marking and an X-shaped dorsal marking bordered with a light edge; (15) scattered, tiny, black nuptial spines covering the middle of the dorsal surface of the first finger; (16) single vocal sac in males; (17) gravid females bear pure yellowish eggs. Holotype description: Adult male, SVL 39.1 mm.
Head length approximately equal to head width (HDL/HDW ratio 1.0); snout short (SNT/HDL ratio 0.4, SNT/SVL ratio 0.1), obtusely pointed in dorsal view, almost truncate and sloping backward to the mouth in profile, protruding well beyond the margin of the lower jaw; loreal region vertical, not concave; canthus rostralis well-developed; top of head flat; eye large and convex, eye diameter 35% of head length; pupil vertical; nostril an oblique oval with a low flap of skin laterally; internasal distance larger than interorbital distance; tympanum distinct, TMP/EYE ratio 0.53; tympanum-eye distance great, TEY 2.3 mm, TEY/TMP ratio 0.96; choanae large, ovoid, partly concealed by the maxillary shelves; two weak, oblique vomerine ridges posteromedial to the choanae, no vomerine teeth; margin of tongue smooth, not notched behind. Forelimbs moderately slender; radioulna length 23% of SVL; hands without webbing, moderately long, 23% of SVL; fingers slender, relative finger length II ≤ I < IV < III; tips of digits round, slightly dilated; subarticular tubercle distinct at the base of each finger; slight lateral fringes from the base of each finger to the terminal phalanx; two metacarpal tubercles, substantially enlarged. Hind limbs relatively elongated and moderately robust; heels overlapping when the flexed legs are held at right angles to the body axis; tibio-tarsal articulation reaching the anterior corner of the eye when the leg is stretched along the side of the body; tibia length 49% of SVL; foot length 66% of SVL; relative toe lengths I < II < V < III < IV; tips of toes round, slightly dilated; subarticular tubercle distinct at the base of each toe; toes with rudimentary webbing at their bases, lateral fringes wide; tarsal fold absent, but an outer lateral fringe present on toe V from the heel to the terminal phalanx; inner metatarsal tubercle ovoid; no outer metatarsal tubercle. Skin of all upper surfaces and flanks smooth with scattered granules; back with a few curved, weak, discontinuous ridges; several tubercles on the flanks and dorsal surfaces of the thighs and tibias; a curved ridge on the upper eyelid, anteriorly starting near the canthus rostralis (not in contact), extending backward to the middle of the upper eyelid and bending inward, where there is a slightly large, horn-like tubercle; supratympanic fold distinct, narrow, curving posteroventrally from the posterior corner of the eye to a level above the insertion of the arm; ventral surface smooth; pectoral gland large, round, prominently elevated relative to the ventral surface, closer to the axilla than to the mid-ventral line; single larger femoral gland on the rear of the thigh; distinct granules on Table 2. Net average genetic distances between Xenophrys species in southeast China.
Live holotype coloration: Olive-brown above with distinct dark brown markings bordered with a light edge and obscure markings; a distinct dark triangular marking between the eyes, with the apex of the triangle over the occiput; a distinct X-shaped dark marking on the back; a small longitudinal dark stripe on the dorsum of the snout; dorsal surfaces of the limbs and digits with dark brown transverse bands; side of head with dark brown vertical bars, one from the tip of the snout to behind the nares, one under the eye, and one along the supratympanic fold, covering the tympanum; supratympanic fold light colored; lower lip black with six vertical white spots; lateral surface of trunk and anterior surface of thighs pinkish near the groin; ventral surface reddish brown, with an obscure longitudinal black streak down the center of the throat and several white blotches on the belly; ventral surfaces of the limbs reddish brown with pale gray wormlike marks; palms and soles uniform reddish brown, tips of digits pale white; inner metatarsal tubercle and two metacarpal tubercles pinkish; pectoral and femoral glands white; pupils black; iris dark grey. Preserved holotype coloration: Blackish green above with a black triangle and an X-shaped marking bordered with a distinct light edge line; dorsal surfaces of limbs and digits with black transverse bands; ventral surface darker brown with white blotches; creamy white replaces the pinkish color on the anterior surface of the thighs and lateral surface of the trunk. Tadpole description: Body slender, oval, flattened above; tail depth slightly greater than body depth; dorsal fin arising behind the origin of the tail, maximum depth near mid-length, tapering gradually to a narrow, pointed tip; tail 2.3-2.5 times as long as body length, tail depth 18-20% of tail length in the 28th-34th stages; maximum body width 37% of body length in the 34th stage, 35% in the 33rd stage, and 30-33% in the 28th-31st stages; body depth 30% of body length in the 33rd and 34th stages, 29% in the 32nd stage, and 24-25% in the 28th-31st stages; eyes large, lateral; nostril dorsolateral, slightly closer to the eyes than to the umbelliform oral disk, rim raised; internasal wider than interorbital; spiracle on left side of the body, closer to the eye than to the end of the body; anal tube extends backward above the ventral fin, opening medial; oral disk terminal, lips expanded and directed upward into a typical Xenophrys umbelliform oral disk; transverse width of the expanded funnel 38-42% of body length in the 28th-34th stages. Coloration in preservative: All upper and lateral surfaces brown grey with black marks; ventral surface of head red-brown, belly black with pale grey marks, tail and hind limbs creamy white. Variation: Measurements and body proportions of the type series are given in Table 4. Color patterns in the paratypes are similar to those of the holotype, but SYS a001419, 001423, 001424, 002369, 002373, 002375, 002379, 002380, 002382, 002383 and 002385 had a dark triangular marking with a light center between the eyes; five female paratypes were light brown above; lower lip black with eight white bands; ventral surface with pinkish, brown, white, and black markings; one longitudinal black streak down the center of the throat, surface of the posterior abdomen near the groin white; several large black spots on the ventral surfaces of the hind limb, forearm, and wrist.
Secondary sexual characteristics: Single vocal sac; scattered, tiny, black nuptial spines cover a circular area at the middle of the dorsal surface of the first finger in two male paratypes and the holotype; five gravid female paratypes bear pure yellowish eggs in the oviducts. Distribution and biological ecology. Currently, X. lini sp. nov. is known only from Bamianshan, Jingzhushan, Nanfengmian Nature Reserve and Dabali, within the range of Mt. Jinggang, Jiangxi Province, and from the adjacent Taoyuandong Nature Reserve, Hunan Province, which are located in the middle of the Luoxiao Mountains, running along the border between the Jiangxi and Hunan Provinces, China. All individuals were found in rushing mountain streams surrounded by moist subtropical evergreen broadleaved forests at elevations of 1100-1610 m (Figure 1-VIII: b, c, e, g and i). All adult specimens were collected on 13th and 19th September 2011 and 5th-6th October 2013; males were heard calling day and night during the survey. The male paratype SYS a001421 has mature spermaries in the abdominal cavity, measuring 4.9 × 2.1 mm in the major and minor axes, respectively. All female paratypes bear pure yellowish mature eggs and atrophic ovarian fat. According to the tadpole stages defined by Gosner [41], individuals in the 28th-34th stages were found under rocks in the stream on 5th December 2011. Juveniles were collected on 21st May 2013. Thus, we assume the breeding season of this species likely begins in September-October. Xenophrys cheni Wang and Liu sp. nov. Diagnosis: Xenophrys cheni sp. nov. is characterized by the combination of the following characteristics: (1) a small-sized species with 31.8-34.1 mm SVL in adult females and 26.2-29.5 mm SVL in adult males; (2) head length approximately equal to head width (HDL/HDW ratio 1.00-1.06); (3) snout short, obtusely rounded in dorsal view, almost truncate, and sloping backward to the mouth in profile, protruding well beyond the margin of the lower jaw; (4) vomerine teeth absent; (5) margin of tongue notched behind; (6) tympanum distinct or indistinct, usually with its upper part hidden under the supratympanic fold; (7) hind limbs elongated, the heels more overlapping and the tibio-tarsal articulation reaching the region between the nostril and the tip of the snout; (8) relative finger length I < II < IV < III. Table 3. Vocalization parameters of six Xenophrys species in southeast China. Holotype description: Adult male, SVL 27.2 mm. Head length approximately equal to head width (HDL/HDW ratio 1.0); snout short (SNT/HDL ratio 0.44, SNT/SVL ratio 0.14), obtusely rounded in dorsal view, almost truncate and sloping backward to the mouth in profile, protruding well beyond the margin of the lower jaw; loreal region vertical, not concave; canthus rostralis well-developed; top of head flattened; eye large and convex, eye diameter 38% of head length; pupil vertical; nostril an oblique oval with a low flap of skin laterally; internasal distance larger than the interorbital distance; tympanum (TMP) distinctly rounded, TMP/EYE ratio 0.42; tympanum-eye distance (TEY) 1.5 mm, TEY/TMP ratio 0.94; choanae large, ovoid, partly concealed by the maxillary shelves; two weak, oblique vomerine ridges posteromedial to the choanae, no vomerine teeth; margin of tongue notched behind.
Forelimbs moderately slender; radioulna length 25% of SVL; hands without webbing, moderately elongated, 25% of SVL; fingers slender, relative finger length I < II < IV < III; tips of fingers round, slightly dilated, narrower than the width of the terminal phalanges; subarticular tubercles indistinct; markedly enlarged lateral fringes from the bases of the fingers to the terminal phalanges; two metacarpal tubercles. Hind limbs relatively elongated and moderately robust; heels overlapping when the flexed legs are held at right angles to the body axis; tibio-tarsal articulation reaching the region between the nostril and the tip of the snout when the leg is stretched along the side of the body; tibia length 52% of SVL; foot length 71% of SVL; relative toe lengths I < II < V < III < IV; tips of toes round, slightly dilated, narrower than the width of the terminal phalanges; toes with fleshy webs at their bases; subarticular tubercle indistinct; lateral fringes wide; tarsal fold absent; inner metatarsal tubercle ovoid; outer metatarsal tubercle absent. Skin of all upper surfaces and flanks smooth with tubercles, usually forming a dorsolateral tubercle row parallel to the contralateral row, and an X-shaped weak ridge on the dorsum of the body; five large transverse tubercle rows on the dorsal surface of the shanks; ventral surface smooth; a weak horn-like tubercle at the edge of the upper eyelid; supratympanic fold swollen, curving posteroventrally from the posterior corner of the eye to a level above the insertion of the arm; pectoral gland small, closer to the Live holotype coloration: Olive-brown above with dark reticular markings, including a triangular marking bordered with a light edge between the eyes, with the apex of the triangle over the occiput, an X-shaped marking bordered with a light edge on the dorsum of the body, five dark transverse bands on the dorsal surface of the thigh, and four dark transverse bands on the dorsal surface of the shank; side of head with dark brown vertical bars, one from the tip of the snout to behind the nares, one under the eye, and one along the supratympanic fold, covering the tympanum; supratympanic fold light colored, bordered by a black lower edge; lower lip black with six white bars; lateral surface of the trunk and anterior surface of the thighs near the groin pinkish; ventral surface of body olive with pinkish and white spots; an obscure longitudinal darker streak down the center of the throat; several white blotches on the belly; the ventrolateral regions covered with olive-green-bordered black zigzag lines; ventral surfaces of the limbs darker brown with white spots; palms and soles uniform darker brown, tips of digits pale grey; inner metatarsal tubercle and two metacarpal tubercles pinkish; pectoral and femoral glands white; pupils black; iris dark brown. Preserved holotype coloration: Dorsal surface sallow with darker brown reticular markings; triangular and X-shaped markings on the dorsum and transverse bands on the limbs and digits distinct; ventral surface yellowish with grey spots; black zigzag lines distinct; creamy white replaces the pinkish color on the anterior surface of the thighs and lateral surface of the trunk. Variation: Measurements and body proportions of the type series are given in Table 4. Color patterns in the 14 male paratypes are similar to those of the holotype.
In the male paratypes SYS a002124, 002126, 002140, 002141 and 002145, the tympanum or its edge is indistinct; in SYS a002126 and 002127, the upper part of the tympanum is hidden under the supratympanic fold; the female paratypes are red-brown above with brown reticular markings; lower lip black with white spots; gular region and chest black with red and white blotches; posteriorly the black fades and becomes marbled with light and dark. All specimens were collected from April to September, during which all male individuals were calling; the specimens collected in July and September bore dilated spermaries but lacked nuptial spines on the dorsal surface of the first finger; mature ovaries bearing pure yellow eggs and dilated oviducts were found in the female paratype SYS a001429. Thus, the breeding season of this species is likely from April to September, but no tadpoles were found. Discussion Most cryptic congeners in the genus Xenophrys are difficult to distinguish from each other due to superficial similarities in morphology: drab colorations, complicated markings, and even changeable colorations and skin markings of the same individual under different environmental conditions [9][10][11]. These result in considerable challenges in field identification, which in turn cause ambiguities in taxonomy and distributions [11]. For example, in the Luoxiao Mountains covered by our surveys, small-sized Xenophrys toads that occurred there were misidentified as either X. boettgeri, X. kuatunensis or X. minor and were documented as such in the local amphibian checklists [42,43]. This issue seems ubiquitous throughout the geographical range of Xenophrys toads [9]. To solve these problems, extensive sampling with careful and robust diagnosis is necessary in order to unveil the cryptic species diversity of Xenophrys toads. In this study, we characterized the cryptic diversity of Xenophrys toads with intensive surveys in the middle range of the Luoxiao Mountains, covering an area of about 250 km² in southeast China. We discovered two undescribed Xenophrys species, namely Xenophrys lini sp. nov. and Xenophrys cheni sp. nov., using morphology, molecular genetics, and bioacoustics. The two new species, together with X. jinggangensis, are sympatric in the Luoxiao Mountains but altitudinally and perhaps ecologically isolated (see discussion below). Morphologically, these two new species can be reliably distinguished from other known congeners. Although the present genetic analyses are based on only two mitochondrial genes, the genetic differences between the two new species are of a magnitude comparable to those between other known Xenophrys species in southeast China (Table 2). Interestingly, the close phylogenetic relationships between the three sympatric species in the Luoxiao Mountains and other known species in southeast China may indicate signs of an evolutionary radiation in the region. However, our phylogenies in this study were only partial, not including Xenophrys species from western China and the Himalayas. This encourages further comprehensive phylogenetic analyses with extensive taxon coverage in order to resolve the systematics of Xenophrys. Furthermore, although our bioacoustic analysis shows that the advertisement calls of male Xenophrys are very similar, consisting of several rather short and repeated notes, the call styles, especially the note frequency ranges, note durations, and inter-note intervals, show significant differences among all three sympatric species in the Luoxiao Mountains and the other three compared congeners.
Based on present knowledge, the two new species, as well as X. jinggangensis, are endemic to several sites in the Luoxiao Mountains between Jiangxi and Hunan provinces, China. The Luoxiao Mountains are situated in the middle of the southeast Chinese subtropical mountain ranges, with complex topography and biogeography [44]. The range is connected to the Nanling Mountains in the south, a stronghold for X. mangshanensis [7], indicating potential parapatry. It is further paralleled by the Huangshan-Tianmu and Yandang-Wuyi-Daiyun Mountains in the east, which harbor X. kuatunensis, X. huangshanensis and X. boettgeri [7], indicating potential allopatry. At a finer scale, we found that the three sympatric Xenophrys species in Mt. Jinggang were distributed in different microhabitats, where the altitudes and the characters of the water bodies varied subtly. X. jinggangensis was found in slow-moving streams between 700-850 m a.s.l. [10] and X. lini sp. nov. in rushing streams between 1100-1610 m a.s.l. In contrast, X. cheni sp. nov. was restricted to swamps at forest edges around 1200-1530 m a.s.l. (Figure 1). Whether this pattern of spatial and ecological segregation of Xenophrys toads in the Luoxiao Mountains is associated with local adaptation to divergent environments requires further investigation of the diversification mechanisms using ecological [45] and genetic (genomic) approaches [46]. This study further provides a few fresh insights into the taxonomy of Xenophrys toads. Most importantly, the identification of two new species may indicate previously underestimated diversity and endemism of Xenophrys in southeast China [47]. However, the potential spatial or ecological limits of these species are still poorly known. Moreover, the closely related X. huangshanensis and X. boettgeri show moderate differences in morphology and vocalization but a small genetic distance (0.005) in contrast to the other species studied. The species validity of X. huangshanensis and X. boettgeri therefore needs to be revisited. Finally, although we used only the mitochondrial 16S and 12S rRNA genes as genetic markers, for their universal application in molecular analyses of amphibian systematics [48], the integrative design of this study, using multiple lines of evidence, supports robust species delineation. Regardless, our results provide a first step in the right direction to help resolve previously unrecognized amphibian biodiversity. More spatially extensive sampling, ideally combined with habitat characteristics and bioacoustic recordings, will be necessary to further our understanding of the cryptic diversity and diversification of Xenophrys toads in the mountain complexes of southeast China.

In conclusion, we show that the two sympatric Xenophrys species, X. lini sp. nov. and X. cheni sp. nov., have congruent differences in morphology, bioacoustics, genetics and habitats. It is therefore reasonable to treat them as separate species based on the 'biological species concept' [49,50]. Nevertheless, more studies are needed on the distributions, ecology and life history of these locally endemic species, as well as on their conservation status. These efforts are very important given the ongoing declines of amphibians both regionally and globally [47,51,52].

Supporting Information
Appendix S1. Recordings of typical male advertisement calls of six Xenophrys species.
8,486.8
2014-04-08T00:00:00.000
[ "Biology", "Environmental Science" ]
A cost analysis of systematic vitamin D supplementation in the elderly versus supplementation based on assessed requirements Hypovitaminosis D is common among older people and treatment with vitamin D is associated with reduced risk of falls and fractures. This paper provides a cost analysis of assessing the vitamin D status of and providing the pharmaceuticals for elderly citizens in Kalmar County, Sweden (population approximately 230,000). Four hypothetical interventions were analyzed: (a) systematic vitamin D/calcium supplementation to all elderly (≥75 years), (b) assessment of vitamin D status in elderly and supplementation to those with insufficient levels, (c) systematic vitamin D/calcium supplementation to all nursing-home residents, and (d) assessment of vitamin D status in nursing-home residents and supplementation to those with insufficient levels. The calculations were based on an estimated reduction in overall costs due to the assessed number of hip fractures after vitamin D/calcium supplementation. The annual net economic benefit of vitamin D/calcium supplementation was estimated at (a) €304,000, (b) €860,000, (c) €755,000, and (d) €740,000. The provision of systematic vitamin D supplementation to nursing-home residents would provide a substantial net economic benefit to society and assessment of the vitamin D status before starting supplementation does not seem to be necessary. Although assessment of all elderly citizens would be more comprehensive, the true proportion with insufficient vitamin D levels in the general population is uncertain and to reaching consensus on the most advantageous daily vitamin D intake, vitamin D blood levels are necessary. Also, systematic supplementation to all elderly would result in other outcomes that could be worth the cost, but that remains to be evaluated. DOI : 10.14302/issn.2474-7785.jarh-17-1724 Corresponding author: Pär Wanby, MD, PhD, Section of Endocrinology, Department of Internal Medicine, Kalmar County Hospital, SE-391 85 Kalmar, Sweden, E-mail<EMAIL_ADDRESS>Phone: +46(0)480-448204 Introduction Vitamin D is essential for skeletal metabolism, muscle function, calcium homeostasis, and the immune system (1). It has also been presented as a preventive factor for chronic diseases such as cardiovascular disease, type 2 diabetes, autoimmune diseases, and various cancers (2)(3)(4), and for non-vertebral and hip fractures in older patients (2,3,5). Furthermore, low vitamin D levels are reported to be associated with increased mortality among the elderly in Sweden (6,7). The main source of vitamin D is from sensible sun exposure, and other sources are food and dietary supplements (1,8). People at risk of insufficient vitamin D levels include the elderly and individuals with limited sun exposure, such as those in nursing homes (9). Moreover, the elderly often avoid direct sunlight and also have a reduced capacity to synthesize vitamin D in their skin (10). Consequently, there are numerous guidelines/recommendations on the management of vitamin D status (1,3,4,8,(11)(12)(13)(14). Most of the recent guidelines/recommendations suggest that S-25(OH)D levels ≥50 nmol/L reflect sufficient vitamin D (1,4,11,13,14). However, in fragile older adults with an elevated risk of falls and fractures, it has been suggested that the minimum S-25(OH)D level should be ≥75 nmol/L (8,13). S-25(OH)D levels were recently reported to be <50 nmol/L in >65% of elderly patients (aged ≥75 years) with hip fractures (15). Falls and fractures are common among the elderly. 
Some studies have found that vitamin D supplementation reduces the incidence of falls and fractures (2). Others have found that vitamin D alone does not seem to prevent fractures (16,17), whereas supplements of vitamin D plus calcium reduce the risk of falls (18) and hip fractures in the elderly (16 All costs were taken from published literature and official databases. Study population The data were from a Swedish study (15) Control group The control group in the main study (n = 169) Study design The cost analysis was based on the following We hypothesized that vitamin D supplementation, with or without calcium, could reduce the number of hip fractures by 30%, according to a published analysis on fracture prevention (19). Resource units and costs All costs were estimated in Swedish Crowns (SEK) using 2014 prices and were then converted to euros (€1.00 = SEK9.097); where applicable, the consumer price index was applied as the conversion factor. The hip fracture cost was based on two previous Swedish studies (32,33) ( Table 1). The cost of transportation to and from Swedish healthcare units was obtained from a previous study (36). The cost of transportation to and from Swedish nursing homes was calculated as the average wage during an estimated one-hour trip with additional travel expenses (€0.203 per km) as per the Swedish public reimbursement policies ( Table 1). Assessment of vitamin D status We Nursing homes The healthcare personnel would arrange Approximately 40% of the total hip fracture cost stemmed from nursing-home residents ( Table 2). Discussion The provision of systematic vitamin D supplementation to nursing-home residents would provide a substantial net economic benefit to society, and assessment of the vitamin D status before starting supplementation does not seem to be necessary in this high-risk group for vitamin D deficiency. The intervention with the largest net economic benefit ( Table 2) Among the nursing-home residents, as many as 76% had a S-25(OH)D level <50 nmol/L (15). A high prevalence of insufficient vitamin D in nursing-home residents has been reported previously (38). In addition, a Swedish study examining the vitamin D status of the elderly in nursing homes found that this highly prevalent vitamin D deficiency was associated with increased mortality (7). Our decision to use 50 nmol/L as the cut-off point for insufficient S-25(OH)D levels (4,11,13,14) is years, with similar reductions in mortality (22). Another study showed that treatment of the elderly population (aged ≥60 years) with vitamin D3 800 IU daily was associated with reductions in mortality and substantial cost-savings through fall prevention (21). Supplementation with vitamin D and calcium has been shown to reduce the risk of hip fractures (16). However, the elderly are often already taking many medications and the addition of yet another pharmaceutical to be taken daily could meet resistance among both physicians and patients. None of the participants in the control group were prescribed vitamin D supplements whereas 14% of the nursing home residents were given supplements (15). However, half of these residents were given a low dose (400 IU) vitamin D. Furthermore, there was no information regarding intake of food sources rich in vitamin D or vitamins not prescribed by a physician, or about exposure to sunlight in either groups (15), which could be important aspects influencing the vitamin D status. The study was based on only one county in Sweden with a population of approximately 230,000 citizens. 
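To make the accounting concrete, the net-benefit calculation used in the methods above amounts to: expected hip fractures, times the assumed 30% reduction, times the cost per fracture, minus the cost of assessment and supplementation, converted from SEK to euros. The following is a minimal sketch of that arithmetic; every numeric input is an illustrative placeholder rather than a figure from the study.

```python
# Minimal sketch of the net-benefit calculation described above.
# All numeric inputs below are illustrative placeholders, NOT the study's
# actual figures (those come from Tables 1-2 of the paper).

SEK_PER_EUR = 9.097        # 2014 conversion rate used in the paper
FRACTURE_REDUCTION = 0.30  # assumed relative reduction in hip fractures (19)

def net_benefit_eur(n_people, annual_fracture_risk, fracture_cost_sek,
                    supplement_cost_sek, assessment_cost_sek=0.0,
                    fraction_treated=1.0):
    """Annual net economic benefit (EUR) of a supplementation strategy.

    fraction_treated < 1.0 models strategies (b)/(d), where only people with
    assessed insufficiency are supplemented; fracture risk is assumed
    homogeneous across the population for simplicity.
    """
    avoided = (n_people * fraction_treated * annual_fracture_risk
               * FRACTURE_REDUCTION * fracture_cost_sek)
    spent = n_people * (assessment_cost_sek + fraction_treated * supplement_cost_sek)
    return (avoided - spent) / SEK_PER_EUR

# Hypothetical example: 3,000 nursing-home residents, 5% annual hip-fracture
# risk, SEK 200,000 per fracture, SEK 1,000 per person-year of vitamin D/calcium.
print(f"net benefit: EUR {net_benefit_eur(3000, 0.05, 200_000, 1_000):,.0f}/year")
```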
Conclusion
The provision of systematic vitamin D supplementation to nursing-home residents would provide a substantial net economic benefit to society, and assessment of the vitamin D status before starting supplementation does not seem to be necessary in this high-risk group for vitamin D deficiency. This advice is in accordance with recommendations offering vitamin D supplementation. Although assessment of all elderly citizens would be more comprehensive, the true proportion of the general population with insufficient vitamin D levels is still uncertain, and vitamin D blood levels would be needed to reach consensus on the most advantageous daily vitamin D intake. Also, systematic supplementation of all elderly people would result in other outcomes that could be worth the cost, but that remains to be evaluated.
1,671.6
2017-09-08T00:00:00.000
[ "Medicine", "Economics" ]
Exosomal miR-543 Inhibits the Proliferation of Ovarian Cancer by Targeting IGF2 Objective Ovarian cancer (OvCa) is the most lethal gynaecological malignancy worldwide. We aimed to illustrate the potential function and molecular mechanism of exosomal microRNA-543 (miR-543) in the oncogenesis and development of OvCa. Methods Differentially expressed microRNAs in exosomes derived from OvCa cell lines were identified by bioinformatic analysis and verified by RT-PCR. Cell proliferation ability was estimated by clonogenic and 5-ethynyl-2′-deoxyuridine assays in vitro and in vivo. Potential involved pathways and targets of exosomal miRNAs were analysed using DIANA and verified by pyrosequencing, glucose quantification, dual-luciferase reporter experiments, and functional rescue assays. Results Bioinformatic analysis identified miR-543 and its potential target genes involved in the cancer-associated proteoglycan pathway. The expression of miR-543 was significantly decreased in exosomes derived from OvCa cell lines, patient serum, and OvCa tissues, while the mRNA levels of insulin-like growth factor 2 (IGF2) were increased. Furthermore, the overexpression of miR-543 resulted in the suppression of OvCa cell proliferation in vitro and in vivo. Moreover, miR-543 was significantly negatively correlated with IGF2 in OvCa tissues in comparison with paracarcinoma tissues. Notably, upregulation of miR-543 led to increased cell supernatant glucose levels and suppressed cell growth, which was rescued by overexpression of IGF2. Conclusions Exosomal miR-543 participates in the proteoglycan pathway to suppress cell proliferation by targeting IGF2 in OvCa. Introduction Ovarian cancer (OvCa) is the most lethal gynaecological malignancy globally, with an unimproved 5-year survival rate of less than 45% and a 10-year survival rate of less than 30% during the last 30 years [1][2][3]. Maintenance therapy, which has been developed from targeted treatment and is a newly implemented but important approach following debulking surgery and chemotherapy, is an essential and promising component of the whole-course management of OvCa, especially at the late stage (FIGO IIB-IV) [4]. Currently, antiangiogenic drugs [5,6] and poly (ADP-ribose) polymerase inhibitors (PAPRi) [7] are the two main strategies of maintenance therapies. Eligible OvCa patients undergoing the appropriate maintenance therapy have a significantly improved prognosis [7,8]. However, the heterogeneity of OvCa and resistance to maintenance therapies acquired in advanced disease pose major obstacles to the universal use of this therapeutic strategy. Therefore, a better understanding of OvCa pathophysiology and the exploration of new potential diagnostic and therapeutic targets for overcoming the current issues are required. Herein, increasing evidence reveals the significance of exosomes in OvCa pathogenesis and progression. Epithelial ovarian cancer (EOC) is the most common type of OvCa (accounting for approximately 80% of cases) and has the highest mortality among all types of OvCa [9,10]. Exosomes are endosome-packaged, 30-150 nm lipid bilayer extracellular vesicles that are produced by most cells, including cancer cells and immune cells [11]. In addition, exosomes function as important regulatory signaling transporters between parental invasive cancer cells and target cells and are involved in cellular energy metabolism [12], angiogenesis [13,14], protumorigenic signaling pathways [15], immune escape [16], proliferation, and metastasis [17]. 
Specifically, exosomes are enriched in extracellular RNA (miRNAs and mRNAs) and proteins and express exosomal-specific markers (CD9, CD63, and TSG101) but lack glycolytic enzymes, extracellular DNA, and cytoskeletal components [18]. Therefore, exosomes likely regulate tumour energy metabolism by delivering extracellular RNAs from tumour cells to target cells. However, the mechanisms linking OvCa metabolic dysregulation and exosomal miRNAs are incompletely understood. This manuscript focuses on revealing correlations between exosomal miRNAs and energy metabolism in OvCa and investigates possible molecules as tumour diagnostic and therapeutic targets. miRNA Microarray Data. To study miRNAs in OvCaderived exosomes, we used the keyword "ovarian cancer exosome miRNA" to search the Gene Expression Omnibus (GEO) database [19] and found one miRNA microarray dataset, GSE76449 [20]. One normal human ovarian surface epithelial cell line (HIO180), six different invasive OvCa cell lines, namely, HEYA8_MDR (multidrug-resistant), A2780_ CP20 (cisplatin-resistant), and SKOV3_TR (Taxol-resistant), and the chemosensitive OvCa cell lines HEYA8, A2780_PAR, and SKOV3_ipl and their exosomal samples were analysed by 4.0 miRNA Affymetrix chips. Two biological repeats were employed in each sample. The series matrix and platform files were downloaded as TXT files. (DE-miRNAs). Data were investigated by using GEO2R (http:// www.ncbi.nlm.nlh.gov/geo/geo2r/). Significant miRNAs with the thresholds of |log fold change ðlogFCÞ | >0:58 and P value < 0.05 were subjected to cluster analysis. To further determine the reliability of the bioinformatic analysis, the overlapping miRNAs were shown using a Venn diagram. The DE-miRNAs were further selected by their differential expression in both cancer cell-isolated exosomes vs. normal cell-isolated exosomes and OvCa cells vs. normal cells. Heatmaps and volcano plots of DE-miRNAs were generated using R software. Prediction of Key Targeted Genes by DE-miRNAs. The online analysis tool DIANA (http://diana.imis.athenainnovation.gr/DianaTools/) predicted DE-miRNA-targeted genes and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. DIANA-MicroT-CDS was used to predict target genes of DE-miRNAs, and mirPath v.3 was used for the DE-miRNA pathway analysis. To further investigate the targeted genes in the glucose-related metabolic pathway, we annotated, visualized, and integrated them by using the STRING database (http://string-db.org) to construct a protein-protein interaction (PPI) network. The DAVID online database was applied to analyse key genes in terms of KEGG pathways and Gene Ontology (GO) terms, which included the biological process (BP), molecular function (MF), and cellular component (CC) ontologies. 2.4. Association of Targeted Genes, Patient Prognosis, and Cancer Stages. GEPIA2 (http://gepia2.cancer-pku.cn/#index) is an online survival analysis tool that was used to identify genes associated with the age, histological grade, stage, treatment, and overall survival (OS) of OvCa. Kaplan-Meier survival curves were constructed for the high-and low-expression groups. 2.5. Cell Culture and Human Tissues. Human OvCa cell lines (SKOV3, COCI, CAOV3, OVCAR3, SW626, OV90, and HEY) and a human normal ovarian cell line (IOSE80) were obtained from the Shanghai Cancer Institute. OV90 cells were cultured in MCDB105/medium 199 complete medium (ScienCell, Shanghai, China). 
SKOV3 cells were cultured in McCoy's 5A medium (HyClone, Logan, USA), and the other seven OvCa cell lines were cultured in DMEM (HyClone, Logan, USA). All the media were supplemented with 10% (volume/volume) foetal bovine serum (FBS) (Gibco, Invitrogen, USA) and 1% (volume/volume) penicillin/streptomycin (P/S). All cell lines were incubated in a 37°C humidified incubator with 5% CO 2 . We enrolled 30 patients pathologically diagnosed with EOC and 30 normal controls at Fujian Provincial Maternal and Children Hospital from September 2016 to September 2020. Samples of cancer-adjacent tissues and OvCa lesions were collected for methylation analysis. All experiments involving human tissue samples in this study were approved by the Ethics Committee of Fujian Provincial Maternal and Children Health Hospital, and written informed consent was obtained. 2.6. Cell Transfection. A lentiviral vector carrying miR-543 (3 ′ -GTCCGGACTCAGATCTCGAGCTTGACGGTTG CCCGGTGCGCATCAG GACCCATGTGCTCTCAG-5 ′ ) was transfected into SKOV3 and HEY cells in the logarithmic growth phase following the manufacturer's protocol (Invitrogen, California, USA). The expression of miR-543 was assessed by using quantitative reverse transcription PCR (qRT-PCR) to verify the transfection efficiency. Exosome Purification and Identification. Serum samples were obtained from all participants after fasting for 8 hours. Exosomes were extracted from patient serum using a total exosome isolation kit following the manufacturer's protocol (Invitrogen, California, USA). Additionally, the culture medium of transfected cells was collected and centrifuged to remove cell debris and other impurities. Then, the exosomes were extracted and purified according to the instructions. Finally, a Zetaview instrument was utilized for transmission electron microscopy (TEM) and nanoparticle tracking analysis (NTA) to verify the exosomes. For TEM, exosomes were fixed successively with 4% glutaraldehyde and 1% osmium 2 Journal of Immunology Research Journal of Immunology Research was then subjected to RT with the microRNA RT kit (Promega, Wisconsin, USA) using a two-step process according to the instructions from the manufacturer. qRT-PCR was performed with GoTaq Green Master Mix (Promega, Wisconsin, USA). U6 (Sangon Biotech, China) was used as an internal reference to standardize miRNA concentrations (forward primer sequence: 5 ′ -CTCGCTTCGGCAGCACA-3 ′ , reverse primer sequence: 5′-AACGCTTCACGAATTTGCGT-3′). The forward primer sequence of miR-543 was 5′-CGAAACATTCG CGGTGCA-3′, and the reverse primer sequence was 5′-AGTGCAGGGTCCGAGGTATT-3 ′ . The miRNA expression value was determined using an ABI7500 instrument purchased from Applied Biosystems. The 2 −ΔΔCt method was used to calculate the expression of target miRNAs and genes to generate relevant standard curves. 2.10. Colony Formation Assay. The transfected cells were resuspended, diluted, and inoculated into a 6-well plate at a density of 1 × 10 3 /ml in serum-free medium. After 14 days, the colonies were fixed with 4% paraformaldehyde, stained with 0.5% crystal violet, imaged, and counted. EdU Proliferation Assay. To measure ovarian cancer cell proliferation, a 5-ethynyl-2 ′ -deoxyuridine (EdU) assay was performed following the manufacturer's protocol (US EVERBRIGHT, Suzhou, China). SKOV3 and HEY cells transfected with NC or miR-543 were plated in 6-well plates at a density of 1 × 10 6 cells/well and then treated with 10 nM docetaxel for 48 hours. 
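As an aside on the qRT-PCR quantification described above, the 2^-ΔΔCt method normalises the target Ct to the U6 reference and then to the control sample. A minimal sketch follows; the Ct values are invented for illustration and are not measurements from this study.

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """2^-ddCt relative expression; here the reference gene would be U6."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # dCt in the sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt in the control
    return 2 ** -(d_ct_sample - d_ct_control)            # 2^-(ddCt)

# Invented Ct values for illustration only: miR-543 vs. U6 in a tumour sample
# and in a normal control.
fold = relative_expression(28.0, 18.0, 26.0, 18.5)
print(f"miR-543 relative expression: {fold:.2f}")  # < 1 indicates downregulation
```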
Harvested cells were washed twice with PBS and incubated in 10 μmol/L EdU (US EVERB-RIGHT, Suzhou, China) diluted with serum-free DMEM for 2 hours. Then, the cells were fixed, subjected to DNA staining, and imaged using fluorescence microscopy, and five random fields were calculated. The intensity of the bands was measured and analysed quantitatively with GAPDH as the control. 2.14. Subcutaneously Implanted Tumour Model. BALB/c female nude mice aged approximately 5 weeks were purchased from Charles River Laboratories (Zhejiang, China) and housed in pathogen-free cages. Then, miR-543-up and negative control HEY cells resuspended in 0.1 ml PBS were injected subcutaneously into the right flanks of nude mice at a density of 1 × 10 6 /ml suspended in 200 μl PBS. The tumour volume was estimated and recorded once a week by the formula volume = π/6 × length × width × height. The tumours were excised from the mice and imaged after 8 weeks. The procedure was approved by the Ethics Committee for Animal Experiments of Fujian Province Maternal and Children Hospital. 2.15. Dual-Luciferase Reporter Assay. The amplified 3 ′ -UTR of IGF2 was cloned upstream of the firefly luciferase gene (Promega, Wisconsin, USA) to construct the wild-type luciferase reporter plasmid. Meanwhile, the mutant plasmid of the IGF2 3 ′ -UTR was constructed by mutating the predicted miR-543 binding site using a mutagenesis kit (Gene, Shanghai, Key Targeted Genes in the Proteoglycans in the Cancer Pathway. GSE71449 was downloaded and processed from the GEO database. Significantly DE-miRNAs were identified and are shown in a heatmap and volcano plots, respectively (Figures 1(a)-1(d)). A Venn diagram indicated that of the 24 Metabolism is vital in the progression of OvCa [21]. To identify metabolism-related pathways, we used DIANA tools and found that the DE-miRNAs mainly regulated 12 pathways (miRNA-4876-3p was excluded due to a lack of annotation in the database) (Figure 1(f)). The DE-miRNA-enriched metabolism-related pathways included biosynthesis of unsaturated fatty acids, proteoglycans in cancer, glycosphingolipid biosynthesis-lacto and neolacto series, and other pathways (Figure 1(g)). The results demonstrated that miR-543 was involved in the highest number of pathways and proteoglycans in OvCa and was selected for further study for its crucial role in tumour proliferation and angiogenesis [22]. Overall, a total of 26 genes were included because they were the predicted miR-543 targeted genes that regulated proteoglycans in OvCa. To elucidate the unknown genes unique to EOC involved in glucose metabolism, we constructed an interaction network of the 26 predicted miR-543 target genes by applying the STRING online database. Consequently, the interlinked network between genes from the predicted genes closely related to the proteoglycans in the cancer pathway is illustrated in Figure 2(a). Furthermore, GO functional enrichment was performed for these genes. All results were ranked by statistically enriched score [−log ðP valueÞ], and the top hits of each category are displayed in Figure 2(b). In terms of biological processes, the top 3 enriched terms were response to growth factor, cellular response to growth factor stimulus, and tissue In addition, the Edu (e, f) proliferative abilities were all rescued by upregulating the expression of IGF2 in lv-HEY-miR-543-up cells. IGF2 was significantly downregulated after being cocultivated with OvCa-derived exosomes than that with controls. 
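The xenograft volumes above rely on the ellipsoid approximation V = π/6 × length × width × height, which is a one-line calculation; the caliper readings in the sketch below are hypothetical, not data from the study.

```python
import math

def tumour_volume_mm3(length_mm, width_mm, height_mm):
    """V = pi/6 * length * width * height, the ellipsoid formula quoted above."""
    return math.pi / 6.0 * length_mm * width_mm * height_mm

# Hypothetical weekly caliper readings (mm).
for week, dims in enumerate([(5.0, 4.5, 4.0), (7.5, 6.0, 5.5), (9.0, 8.0, 7.0)], 1):
    print(f"week {week}: {tumour_volume_mm3(*dims):.0f} mm^3")
```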
Meanwhile, the glucose secretion was significantly increased, and the proliferation ability in OvCa cells was significantly decreased after the treatment of exosome-derived miR-543 in tumour cells (g-l). Abbreviation: lv: lentivirus; Edu: 5-ethynyl-2′-deoxyuridine. * P < 0:05 vs. control (unpaired t-test), * * P < 0:01 vs. control (unpaired t-test). 13 Journal of Immunology Research development. In addition, cellular response to fibroblast growth factor stimulus, protein binding, and receptor binding were the top 3 enriched terms in the cellular component analysis, while membrane raft, cytosol, and caveola were the top enriched terms in the molecular function analysis. Apart from proteoglycans in cancer, pathways in cancer and focal adhesion were ranked in the top three pathways in the KEGG analysis (Table 1). Correlation between Key Genes, Patient Clinicopathological Factors, and Survival. To determine the correlation between the patient prognosis and stage in patients with EOC, Kaplan-Meier survival curves and stage plots comparing the expression of the 24 predicted target genes of miR-543 and patient prognosis and stage in TCGA cohort were generated. IGF2 (P = 0:042) was identified as a strong indicator of the clinical survival time of EOC patients (Figure 2(c)). Besides, ITGB1, HGF, TWIST1, IGF-1, PPP1R12A, and BRAF exhibited no significant correlations with overall survival time (Figures 2(d)-2(i)). However, none of the genes manifested statistically significant differences in the tumour stage, patient age, or tumour grade. Overexpression of miR-543 Inhibits the Proliferation of EOC. Tumour invasion and colony formation are vital and final malignant behaviours during EOC progression. To test whether miR-543 is required for cell invasion and proliferation, we examined the expression of miR-543 in wild-type normal ovarian cells and OvCa cells (Figure 3(a)). Based on the results, we overexpressed miR-543 in SKOV3 and HEY cells because they had relatively low expression of miR-543 among the tested OvCa cells. Further experiments confirmed obvious overexpression of miR-543 in these two cell lines by stable transfection (Figure 3(b)). Furthermore, overexpression of miR-543 significantly decreased the proliferation rates of SKOV3 and HEY cells in colony formation assays (Figures 3(c) and 3(e)) and the suppressed proliferative function was explored in the Edu assay (Figures 3(d) and 3(f)). We further investigated whether miR-543 also plays a proliferative suppressor role in EOC in vivo. Similar to the in vitro results, we observed that nude female mice injected with HEY cells overexpressing miR-543 presented obviously smaller tumours than those injected with control cells (Figures 3(g)-3 (h)). The in vivo assays provided additional evidence that miR-543 plays a tumour suppressive role during EOC 14 Journal of Immunology Research progression. Ki67, which represents the proliferation ability in vivo, was expressed at significantly lower levels by IHC after overexpressing miR-543 in a subcutaneous xenograft mouse model in comparison with that of controls (Figures 3(i) and 3 (j)). 3.4. Exosomal miR-543 Is a Strong Indicator of OvCa. Dataset (GSE71449) analysis showed that the level of miR-543 expression was significantly lower in exosomes derived from OvCa cells than in exosomes derived from normal ovarian cells. 
To confirm the differential expression in clinical samples, we next investigated the expression of miR-543 in exosomes derived from patients with EOC, EOC tissues, and the corresponding controls. The expression of miR-543 was significantly lower in EOC tissues (n = 60) than in normal ovarian tissues (n = 60) (P = 0:026, Figure 4(a)). Exosomes extracted from EOC patient serum and controls were tested by TEM and NTA. Figure 4(b) shows that the exosomes were confirmed to be typical round-plate structures with sizes of 30-150 nm. Consistent with the findings in tissues, the exosomal level of miR-543 was significantly lower in EOC patients than in the normal ovary controls (P = 0:0047, Figure 4(c)). Regulatory Network of miR-543 in Inhibiting Proliferation in EOC. Methylation of miRNAs is closely associated with tumour proliferation and is known to be increased in gastrointestinal cancer [23]. Therefore, we assessed the methylation frequency of miR-543 in EOC tissues and paired normal tissues. As shown in Figure 4(d), the methylation frequency of miR-543 was considerably higher in EOC tissues (96% ± 3%) than in adjacent tissues (93% ± 3%). Therefore, methylation was shown to downregulate miR-543 in EOC progression. Bioinformatic analysis allowed us to identify potential targets of miR-543 that are associated with the proteoglycans in cancer pathway. After stable overexpression of miR-543 in SKOV3 cells, the mRNA levels of IGF2 (P = 0:0076), IGF1 (P = 0:022), and TWIST1 (P = 0:019) were notably decreased in comparison to those in the control cells, as demonstrated using RT-PCR assays (Figure 4(e)). IGF2 is an essential glucose regulatory factor that promotes the proliferation of several cancers [24,25]. Because according to RT-PCR, the reduction degree of IGF2 was the most significant, we next tested the concentration of glucose in the supernatant of miR-543-overexpressing cells. As expected, the level of glucose was significantly higher in miR-543overexpressing cells than in control cells (Figure 4(f)). We further performed the WB assay to demonstrate that miR-543 reduced IGF2 in SKOV3 cells at the protein level (Figure 4(g)). Figures 4(h)-4(l) show that IGF2 was relatively conversely expressed in OvCa cell lines and transplanted tumor in the mouse model at the protein and mRNA levels, respectively, compared with the expression of miR-543. 3.6. IGF2 Is Responsible for the miR-543-Mediated Suppression of Proliferation in EOC. Only one predicted binding site of miR-543 in the 3 ′ -UTR of IGF2 mRNA (5736-5742 nt) was found. Subsequently, to confirm that IGF2 is a direct target of miR-543, we conducted a luciferase reporter assay and found a 56.3% reduction in luciferase activity when SKOV3 cells were cotransfected with miR-543 compared with the control cells, suggesting that miR-543 directly targets IGF2 (Figure 4(m)). Moreover, we quantified IGF2 and miR-543 mRNA levels in ovarian cancer and paracancerous tissues. These results showed that miR-543 was significantly negatively correlated with IGF2 (Figure 4(n)). To prove that downregulation of IGF2 is essential for miR-543-mediated suppression of proliferation in EOC, we next performed functional rescue assays. HEY cells overexpressing miR-543 (miR-543-up cells) were cotransfected with an IGF2-overexpressing plasmid ( Figure 5(a)). The difference in IGF2 expression at the protein level was analysed in Figures 5(b) and 5(c). 
Conversely, the concentration of glucose in the cellular supernatant was significantly lower in the cotransfected cells than in the control cells ( Figure 5 (d)). The glucose supply is very important for tumour proliferation. Consequently, upregulation of IGF2 in miR-543-up HEY cells reversed the suppressive effect of miR-543 in the EdU assay (Figures 5(e)-5(f)) in vitro. Besides, 50 μg/ml of exosomes extracted from CAOV3 cells (expressing the most miR-543 in Figure 3(a)) was added into HEY cell lines and cocultured for 24 hours. At mRNA and protein levels, Figures 5(h)-5(i) show that the expression of IGF2 was significantly downregulated after being cocultivated with OvCa-derived exosomes compared with controls. Meanwhile, the glucose secretion was significantly increased and the proliferation ability in OvCa cells was significantly decreased after the treatment with exosome-derived miR-543 in tumour cells (Figures 5(g)-5(l)). These functional rescue results indicated that IGF2 is a bona fide target of miR-543 in the suppression of OvCa proliferation, and the associated mechanism is shown in Figure 6. Discussion OvCa is a highly heterogeneous cancer with a poor 5-year survival rate of less than 45% [3]. To improve the unsatisfactory clinical outcome of OvCa, there is a pressing need to identify more effective drug targets and cancer-associated molecular mechanisms. To date, studies have revealed that exosomal miRNAs show a range of cancer-regulating properties, including the control of cancer growth. Exosomal miRNAs that are differentially expressed in cancer result in abnormal proteoglycan pathways, thus leading to tumour growth and metastasis [26]. In the current study, database analysis revealed that the expression of miR-543 was significantly lower in exosomes derived from OvCa cells than in those derived from normal ovarian cells. Furthermore, predicted miR-543 targets were enriched in the proteoglycans in the cancer pathway. Both in vitro and in vivo functional assays indicated that the miR-543 mimic significantly suppressed the invasive and proliferative abilities of OvCa cells. We also observed that methylation reduced miR-543 in OvCa tissues compared to adjacent normal tissues. Importantly, IGF2, which is involved in the proteoglycan pathway, was identified as a direct target of miR-543 and rescued 15 Journal of Immunology Research miR-543-related suppression of proliferation in OvCa cells. These findings provide additional evidence of a suppressive role of miR-543 and indicate the diagnostic and therapeutic value of miR-543 for OvCa progression. The basic and terminal hallmark of tumour development is proliferation, in which reprogramming of the energy metabolism pathway occurs [27]. Reprogramming of energy metabolism is a complex process that includes metabolismrelated enzymes and membrane transporters. Exosomes carry molecular cargo to transfer signals from tumour cells into the tumour and tumour microenvironment, thus regulating metabolism and consequently proliferation. Recent experimental assays have shown that exosomes released by tumour cells into the tumour microenvironment are an important source of functional RNAs and proteins but lack "free circulating" DNA, glycolytic enzymes, and cytoskeletal components [18]. Therefore, "free circulating" RNAs in tumour-derived exosomes likely regulate glycolytic enzymes and cytoskeletal constituents by targeting metabolic and cytoskeletal genes. 
Lactate derived from glucose or glycogen breakdown is an important energy supplement for tumour proliferation [28]. Most published studies have investigated exosomal miRNAs derived from cultured tumour cells, which may not be consistent with those derived from OvCa patients. In the current study, we initially provided evidence that miR-543 secreted by OvCa patient exosomes was downregulated and promoted proliferation by regulating the target IGF2 to participate in proteoglycan pathways, which regulate metabolism and cytoskeletal synthesis [29]. Similar to our results, miR-543 has been identified as a tumour suppressor in pancreatic cancer, colorectal cancer [30], breast cancer [31], glioma [32], and cervical cancer [33]. However, other studies have indicated an oncogenic role of miR-543 in digestive and urinary system cancers [34]. These controversial findings indicate that miR-543 is involved in a number of pathways in different cancer diseases. Yu et al. reported that miR-543 was downregulated at the cellular and tissue levels in OvCa [35,36]. Moreover, mechanistic analysis showed that lncRNA PVT1 and placental growth factor significantly reduced miR-543 expression, and SERPINI1 and TWIST1 were the target genes. Currently, there is no experimental evidence that shows the role of methylation and target genes of miR-543 involved in metabolism in OvCa. In the current study, we demonstrated that methylation downregulated miR-543 in OvCa tissues and exosomes, and IGF2 is a critical direct downstream metabolic target involved in tumour proliferation [19,37]. Epigenetic alterations, such as changes in miRNAmediated processes and RNA methylation, are involved in proliferation in various types of invasive cancers [38]. This is the first study to show the high level of miR-543 methylation in OvCa, and its delivery by exosomes leads to IGF2 dysfunction. IGF2 binding activates IGF1R and IGF2R and is associated with aberrant glucose metabolism and proteoglycan dysregulation, which is responsible for cancer development [39]. Proteoglycans are important molecules that participate in cytoskeletal processes, such as the synthesis of the extracellular matrix and cell membrane. Furthermore, high expression of IGF2 was correlated with poor clinical outcome, chemoresistance, and increased proliferation and migration of OvCa [40][41][42]. Drugs that block IGF2 and decrease glucose levels, such as metformin, have become a promising approach to prevent and treat cancer [43,44]. Similar results were observed in the current study, whereby miR-543 overexpression in OvCa cells significantly increased the concentration of glucose in the medium and suppressed proliferation, while rescuing IGF2 expression in miR-543 mimic-transfected OvCa cells resulted in decreased glucose in the medium and increased cell growth. In conclusion, our findings provide evidence for OvCasecreted exosomes that downregulate miR-543 by methylation and thus rescue the inhibitory effect on IGF2 to promote proliferation. These findings improve our understanding of the involvement of miR-543 in metabolism and cytoskeletal biology and identify miR-543 as a candidate for clarifying OvCa development and a crucial therapeutic and diagnostic biomarker. Data Availability The datasets used and/or analyzed during the current study are available from the corresponding authors on reasonable request.
5,562.2
2022-03-29T00:00:00.000
[ "Biology" ]
Noninvasive Estimation of Tumor Interstitial Fluid Pressure from Subharmonic Scattering of Ultrasound Contrast Microbubbles The noninvasive estimation of interstitial fluid pressure (IFP) using ultrasound contrast agent (UCA) microbubbles as pressure sensors will provide tumor treatments and efficacy assessments with a promising tool. This study aimed to verify the efficacy of the optimal acoustic pressure in vitro in the prediction of tumor IFPs based on UCA microbubbles’ subharmonic scattering. A customized ultrasound scanner was used to generate subharmonic signals from microbubbles’ nonlinear oscillations, and the optimal acoustic pressure was determined in vitro when the subharmonic amplitude reached the most sensitive to hydrostatic pressure changes. This optimal acoustic pressure was then applied to predict IFPs in tumor-bearing mouse models, which were further compared with the reference IFPs measured using a standard tissue fluid pressure monitor. An inverse linear relationship and good correlation (r = −0.853, p < 0.001) existed between the subharmonic amplitude and tumor IFPs at the optimal acoustic pressure of 555 kPa, and pressure sensitivity was 1.019 dB/mmHg. No statistical differences were found between the pressures measured by the standard device and those estimated via the subharmonic amplitude, as confirmed by cross-validation (mean absolute errors from 2.00 to 3.09 mmHg, p > 0.05). Our findings demonstrated that in vitro optimized acoustic parameters for UCA microbubbles’ subharmonic scattering can be applied for the noninvasive estimation of tumor IFPs. Introduction The interstitial fluid pressure (IFP) is an important characteristic of the solid tumor microenvironment. Several studies reported a high IFP in various types of tumors, including cervical cancer, breast carcinoma, and malignant melanoma [1]. The major driving force of tumor IFPs is the microvascular pressure (MVP), which is affected by the balance of fluid exchange between the microcirculation and interstitium [2]. In tumor tissues, blood vessel irregularity and leakiness, limited lymphatic drainage, interstitial fibrosis, and a contraction of the interstitial space mediated by stromal fibroblasts contribute to an increased IFP. Moreover, tumors with a high IFP tend to be correlated with high distant metastasis and recurrence rates [3,4]. Thus, monitoring IFP during the treatment process would be beneficial to identify therapy resistance and explore individual treatment schemes [5]. The IFP can be measured by the wick-in-needle technique or its modified methods, all of which involve inserting a needle into the tumor [6]. These invasive methods that may result in complications, including hemorrhage and needle tract implantation metastasis, are not suitable for widespread application. In the past decades, many noninvasive approaches have been explored to achieve a real-time assessment of IFP. Although dynamic contrast enhanced (DCE) MRI has been proven to evaluate the IFP level and heterogeneity distribution in a tumor by evaluating the infusion of a contrast agent in the tumor tissue [7], the inconvenience, time-consuming nature, and high cost limit its real-time application. Ultrasound imaging, as the most convenient and portable examination, has been widely used for disease screening, diagnosis, and as a treatment guide. 
Contrast-enhanced ultrasonography (CEUS) utilizes blood pool ultrasound contrast agent (UCA) microbubbles to enhance the intravascular backscatter and harmonic imaging technique to achieve a high contrast-to-tissue ratio (CTR) and visualization of vascular structures [8]. The clinical UCAs such as SonoVue, Sonazoid, and Definity usually consist of compressible phospholipidshelled bubbles filled with a high molecular weight but low diffusion gas. When subjected to an incident acoustic wave, the compressibility of the microbubble makes it subject to nonlinear oscillation and generates various harmonics (f 0 , 2f 0 , 3f 0 , etc.), subharmonics (1/2f 0 ) with the frequency at half the transmission frequency (f 0 ), and ultraharmonics (3/2f 0 ) [9,10]. Since tissues can also generate the second, third harmonic signals, etc., tissue signals by extracting higher harmonics cannot be completely suppressed [11]. While in the tissue it is usually difficult to generate a subharmonic scattering, subharmonic imaging and three-dimensional subharmonic imaging with a higher CTR have been developed and applied to multiple organs and tumor imaging [12][13][14][15]. Besides the ultrasound contrast subharmonic imaging, an excellent linear relationship was observed between the microbubble's subharmonic scattering amplitude and the ambient pressure in the surrounding fluid medium: when the ambient pressure increased from 0 to 186 mmHg, the amplitude of the subharmonic signal was decreased by 10-20 dB in vitro for several UCAs including Levovist, Optison, Definity, Sonazoid, and SonoVue [16][17][18][19][20][21]. Shi et al. demonstrated that the subharmonic amplitude grows with the increase in acoustic pressure and the generation of the subharmonic can be divided into three stages: occurrence, growth, and saturation. During the growth stage, the subharmonic component is sensitive to pressure changes [16]. For SonoVue, Xu et al. found the existence of the second growth stage with a higher pressure sensitivity than that of the first growth stage before the saturation stage [22]. The above results suggest that the subharmonic is an ideal indicator of pressure variation. Compared to the conventional B-mode image, pressure-dependent subharmonic imaging has the advantages of both enhancing CTR to improve image quality and providing information on ambient pressure distribution. Consequently, the subharmonic aided pressure estimation (SHAPE) technique to determine the pressure changes was proposed [16]. This technique has been extensively studied in in vitro experiments and demonstrated by a variety of in vivo models involving intracranial pressure, portal pressure, and intracardiac pressure [17,21,[23][24][25][26]. All the subharmonic signals collected from microbubbles in the large vessels or the heart cavity suggested an inverse relationship between the subharmonic amplitude and blood pressure, and had a good performance in clinical pressure stratification [27][28][29][30]. Tumor IFPs can be evaluated by detecting the subharmonic signals of microbubbles in tumor microvessels at a higher driving frequency compared with those used in large vessels (2)(3)(4). Previously, Definity has been used to verify the efficacy of SHAPE at the transmitting frequency of 6.7 MHz, 10 MHz, and 8 MHz in in vitro and in vivo melanoma or breast cancer animal models [31,32]. However, Definity is not approved in Asia. 
Both SonoVue and Sonazoid microbubbles are approved for clinical use in China, and Sonazoid was selected in the current study because of the imaging resolution benefit from its resonance frequency (~4 MHz), which was higher than that of SonoVue (~2 MHz). Although noninvasive portal vein pressure and hepatic venous pressure gradient monitoring based on subharmonics using Sonazoid have reached their ideal progress in animal experiments and clinical studies [25,27,28], the IFP estimation of superficial tumors based on Sonazoid has not been reported yet. In this investigation, we will first use Sonazoid to explore the value and stability of high-frequency excited subharmonics for IFP evaluation in tumor models.

Acoustic Attenuation Measurement
The experimental setup of the acoustic attenuation measurement is shown in Figure 1 to obtain the resonance frequency of UCA Sonazoid microbubbles. Two single element flat transducers (V382-SU, Olympus, Waltham, MA, USA) were placed opposite to one another as the transmitter and receiver of the acoustic wave, respectively. The diameter of the transducer was 1.3 cm, and the center frequency was 3.5 MHz with a −6 dB bandwidth of 82.35%. The transducers were driven by an ultrasonic pulser-receiver (DPR300, JSR Ultrasonics, Pittsford, NY, USA). The microbubbles' sample chamber was located at the center of the two transducers. The side walls of the sample chamber were made of 6 µm mylar which could allow the passing acoustic wave with minimal attenuation. After adding 200 mL of saline to the sample chamber for reference signal acquisition, 50 µL of Sonazoid (GE Healthcare, Oslo, Norway) suspension was injected into the sample chamber for sample signal acquisition. A magnetic stirrer with a low rotation speed was used to keep the microbubble suspension uniform.
The acoustic attenuation coefficient α(f) can be calculated by the following formula:

α(f) = (8.686 / z) [ln S_ref(f) − ln S_sample(f)]

The acoustic path length containing UCA microbubbles in the sample chamber was z = 5 cm. The power spectra S_ref(f) and S_sample(f) for the reference and sample were obtained using the fast Fourier transform (FFT) of the average of 50 received signals. Furthermore, the resonance frequency of the UCA microbubbles was defined as the frequency of the maximum attenuation in the spectrum.
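As an illustration of the processing just described, the attenuation spectrum follows from the FFTs of the averaged reference and sample traces, with the log ratio scaled by 8.686/z. The sketch below uses purely synthetic signals; the sampling rate, pulse shape and the 4.2 MHz attenuation peak are assumptions made for the demonstration, not measured data.

```python
import numpy as np

def attenuation_spectrum(ref_traces, sample_traces, fs, z_cm=5.0):
    """alpha(f) = (8.686 / z) * (ln S_ref(f) - ln S_sample(f))  in dB/cm.

    ref_traces / sample_traces: arrays of shape (n_traces, n_samples); the
    received signals are averaged before the FFT, as described above.
    """
    s_ref = np.abs(np.fft.rfft(ref_traces.mean(axis=0))) + 1e-12   # floor avoids log(0)
    s_sam = np.abs(np.fft.rfft(sample_traces.mean(axis=0))) + 1e-12
    freqs = np.fft.rfftfreq(ref_traces.shape[1], d=1.0 / fs)
    return freqs, (8.686 / z_cm) * (np.log(s_ref) - np.log(s_sam))

# Synthetic demonstration only: a broadband reference pulse and a "sample"
# built by imposing a made-up attenuation peak at 4.2 MHz.
fs, n = 50e6, 4096
t = np.arange(n) / fs
ref = np.sin(2 * np.pi * 3.5e6 * t) * np.exp(-((t - 5e-6) / 0.5e-6) ** 2)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
alpha_true = 2.0 * np.exp(-((freqs - 4.2e6) / 0.8e6) ** 2)          # dB/cm
sample = np.fft.irfft(np.fft.rfft(ref) * 10 ** (-alpha_true * 5.0 / 20), n)

f_est, a_est = attenuation_spectrum(np.tile(ref, (50, 1)), np.tile(sample, (50, 1)), fs)
print(f"estimated resonance ~ {f_est[np.argmax(a_est)] / 1e6:.1f} MHz")
```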
Subharmonic Scattering Acquisition under Different Acoustic Pressure Conditions
The experiment setup for the acoustic scattering measurement of UCA microbubbles included a flow circulation system described in our previous study [22] and a customized ultrasound system (iNSIGHT-37CT, Saset (Chengdu) Inc., Chengdu, China) to obtain raw radio frequency (RF) data. The flow circulation system (Figure 2) contained three parts: the microbubbles' water tank and flow pump, vessel phantom, and self-developed air pump for auto pressure regulation with a resolution of 0.1 mmHg and pressure monitor BIOPAC (MP160, BIOPAC Systems, Inc., Goleta, CA, USA). The linear array transducer with 128 elements had a center frequency of 8 MHz and a −6 dB bandwidth of 4 MHz to 12 MHz. According to previous theoretical and experimental studies, the optimal transmit frequency of the subharmonic scattering was twice the resonance frequency of the UCA microbubbles [33,34]. The transmitted frequency of the experiment was selected according to the Sonazoid's resonance frequency from the acoustic attenuation measurement. The transmitted tone burst had 16 cycles because the microbubble needs a long pulse duration to generate stable subharmonic scattering. The acoustic pressures of the transmitted wave from the linear array were calibrated at the beam focus of 2.0 cm through a 0.5 mm needle hydrophone (Precision Acoustics, Dorset, UK) combined with a three-dimensional acoustic field measurement system (BRC8090, Shenzhen Boray Technology Co. Ltd., Shenzhen, China). During the experiment, the transducer was positioned above the vascular phantom to ensure the focal point (focus depth 2.0 cm) in the vessel lumen based on the guide of ultrasound imaging. A piece of sound-absorbing material was located on the bottom of the water box to restrain the echo from the bottom. A bolus of 0.1 mL Sonazoid solution was injected into the water tank that contained 400 mL of 0.9% NaCl solution (4 × 10^5 microbubbles/mL) and circulated through the system. The concentration used in the in vitro experiment was 2 × 10^−3 µL/mL. The mean and median sizes of microbubbles were 1.46 µm and 1.29 µm. Furthermore, 95% of the microbubbles were smaller than 5 µm. Given that the IFP of a tumor is usually less than 40 mmHg, we set the ambient pressure in the range of 10-40 mmHg with a step of 10 mmHg. Beamformed RF data of 200 frames were collected and repeated three times in each acoustic pressure and ambient pressure. The microbubble solution was replenished in the dynamic flow system after each data acquisition.

Tumor Models
This study was approved by the Institutional Animal Care and Use Committee. Female BALB/C nude mice (aged 5 weeks) were obtained from Zhuhai BesTest Bio-Tech Co., Zhuhai, China, and 4T1 (Mus musculus breast cancer) cells were cultured in an RPMI-1640 medium supplemented with 10% fetal bovine serum, and 1% penicillin-streptomycin. The tumor cells were injected by approximately concentrating 2 × 10^6 cells in 0.1 mL of phosphate-buffered saline. Orthotopic breast cancer models were established by injecting 4T1 cells in the mammary fat pads of nude mice. The tumor volume (V) was measured by ultrasound B-mode imaging with digital calipers calculated as V = π/6 × a × b × c (a, b, and c refer to width, axial length, and vertical length of tumor). When the tumors' volumes reached the value of 200 mm³, experimental data were collected (Figure 3).
Tumors' Subharmonic Scattering Acquisition
The customized ultrasound system was used to acquire data in the in vivo experiment. The mice were anesthetized with inhaled isoflurane (concentration 1.5%, flow 800 mL/min) while lying on a heating pad. Degassed coupling gel was applied on the tumor to minimize the acoustic impedance mismatch between the transducer and tissue to maximize the acoustic transmission. Then, the transducer with a focal depth of 2.0 cm was fixed at the largest transverse cross-section of the tumor. A bolus of 40 µL diluted Sonazoid suspension (dilution 1:20) was injected through the tail vein three times and was followed by a 0.1 mL saline flush. The concentration was 1.6 × 10^−2 µL/mL. The optimal acoustic pressure was selected on the basis of the in vitro experiment results. The contrast harmonic imaging (HI) mode was chosen to acquire the RF data for five seconds. The interval between each injection was at least 10 min to ensure contrast agent elimination.

IFP Measurement
The tumor IFP was measured using a tissue fluid pressure monitor (Mode 295-1, Stryker, Kalamazoo, MI, USA) [11] after the ultrasound scan. The device consisted of a needle with a side hole of 3 mm connected to a pressure sensor. The needle filled with saline needed to be inserted into the tissue vertically to make the side hole completely covered by tissue. The indicator of the device should be corrected to zero before measurement. Due to the heterogeneity distribution of IFP in tumors and decreasing IFP in the margin of tumors [4,33], IFP measurements were acquired from the central area of the tumor. Each tumor was measured 3 times in different positions of the central area.

Data Processing
The RF data were extracted and processed offline on a PC using Matlab (Mathworks Inc., version R2017b, Natick, MA, USA). The region-of-interest (ROI) was selected from the reconstructed B-mode image at the central area of the vascular phantom or the tumor. After adding a Hanning window function on the RF data in the ROI, the amplitude spectrum was further obtained by the fast Fourier transform. The subharmonic amplitude was extracted at the corresponding subharmonic frequency with a bandwidth of 1 MHz on the spectrum (Figure 4). The average amplitude of 20 frames of RF data was recorded.
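A minimal sketch of this subharmonic-amplitude extraction (Hanning window, FFT, mean spectral amplitude in a 1 MHz band around half the 8.5 MHz transmit frequency) is given below; the synthetic RF data and parameter names are assumptions, not the scanner's actual output.

```python
import numpy as np

def subharmonic_amplitude_db(rf_roi, fs, f_tx=8.5e6, band_hz=1e6):
    """Mean spectral amplitude (dB) in a 1 MHz band centred on f_tx / 2.

    rf_roi: 2-D array (n_lines_or_frames, n_samples) of beamformed RF data
    from the ROI; a Hanning window is applied along the depth axis before
    the FFT, mirroring the processing described above.
    """
    win = np.hanning(rf_roi.shape[1])
    spec = np.abs(np.fft.rfft(rf_roi * win, axis=1)).mean(axis=0)
    freqs = np.fft.rfftfreq(rf_roi.shape[1], d=1.0 / fs)
    f_sub = f_tx / 2.0
    in_band = (freqs >= f_sub - band_hz / 2) & (freqs <= f_sub + band_hz / 2)
    return 20.0 * np.log10(spec[in_band].mean())

# Synthetic RF with a weak 4.25 MHz component standing in for the subharmonic
# (illustration only; real input would be the scanner's ROI data, 20 frames).
fs, n = 40e6, 1024
t = np.arange(n) / fs
rf = np.sin(2 * np.pi * 8.5e6 * t) + 0.1 * np.sin(2 * np.pi * 4.25e6 * t)
rf_roi = np.tile(rf, (20, 1))
print(f"subharmonic amplitude: {subharmonic_amplitude_db(rf_roi, fs):.1f} dB")
```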
The region-of-interest (ROI) was selected from the reconstructed B-mode image at the central area of the vascular phantom or the tumor. After applying a Hanning window function to the RF data in the ROI, the amplitude spectrum was obtained by the fast Fourier transform. The subharmonic amplitude was extracted at the corresponding subharmonic frequency within a bandwidth of 1 MHz on the spectrum (Figure 4). The average amplitude over 20 frames of RF data was recorded. Statistical Analysis The subharmonic amplitude and IFP of the various tumor types were presented as the mean ± standard deviation (SD). A linear regression analysis was conducted to assess the relationship between the subharmonic amplitude and the IFPs. Cross-validation was performed five times to investigate the stability of the subharmonic-estimated pressure across tumors. All 30 tumors were randomly divided into a model group (n = 20) and a validation group (n = 10). Calibration equations were generated from the data of the model group to predict the IFPs of the validation group. A paired t-test was used to compare the differences between the pressures measured by the pressure monitor and the pressures predicted from the subharmonic amplitude in the cross-validation. All data and figures were analyzed and presented using SPSS (IBM SPSS, version 25.0, IBM Corporation, Armonk, NY, USA) and GraphPad Prism (GraphPad Software, version 8, Boston, MA, USA). A p value < 0.05 was considered statistically significant. Resonance Frequency of Sonazoid In order to determine the resonance frequency of Sonazoid, and thereby the optimal driving frequency for generating subharmonic scattering, the acoustic attenuation measurement was carried out first. Figure 5 shows the acoustic attenuation spectra at different times after injecting the Sonazoid suspension. The frequency corresponding to the maximum attenuation coefficient is the resonance frequency. The acoustic attenuation spectra of Sonazoid showed that the maximum value of the attenuation coefficient occurred around 4.2 MHz at 1 min after injection. The resonance frequency first decreased and then increased with injection time, remaining in the range of 4-4.5 MHz, while the maximum value of the attenuation coefficient decreased continuously with time. Because all the data acquisition was completed within 1 min after the injection of the microbubbles, the optimal driving frequency was determined as 8.5 MHz, approximately equal to twice the resonance frequency at that moment.
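As a concrete illustration of the offline processing described in the Data Processing subsection (windowing, FFT, and extraction of the subharmonic band), a minimal Python/NumPy sketch is given below. The original analysis was performed in Matlab; the array shape, the sampling frequency, and the use of the in-band mean amplitude are assumptions of this sketch rather than details taken from the paper.

import numpy as np

def subharmonic_amplitude_db(rf_roi, fs_hz, f_sub_hz, band_hz=1.0e6):
    # rf_roi: (n_frames, n_samples) beamformed RF data inside the ROI
    # f_sub_hz: subharmonic frequency, i.e. half the transmit frequency (4.25 MHz here)
    n_frames, n_samples = rf_roi.shape
    window = np.hanning(n_samples)                    # Hanning window, as in the paper
    spectrum = np.abs(np.fft.rfft(rf_roi * window, axis=1))
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs_hz)
    in_band = np.abs(freqs - f_sub_hz) <= band_hz / 2.0
    band_amp = spectrum[:, in_band].mean(axis=1)      # amplitude in the 1 MHz subharmonic band
    return 20.0 * np.log10(band_amp)

# Hypothetical usage: average 20 frames into one subharmonic amplitude estimate.
# amps = subharmonic_amplitude_db(rf_roi, fs_hz=40e6, f_sub_hz=4.25e6)
# mean_subharmonic_db = amps[:20].mean()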
Relationship between Subharmonic Amplitude and Ambient Pressure The acoustic pressure at the focus generated by the linear array was calibrated in the range of 0.29-1.0 MPa (peak negative acoustic pressure), corresponding to a mechanical index of 0.10-0.34 (MI = P_A/√f, with P_A the peak negative pressure in MPa and f the frequency in MHz). According to the results of the acoustic attenuation experiment, the resonance frequency of Sonazoid was in the range of 4-4.5 MHz, and the transmitted frequency of 8.5 MHz was applied in the subsequent experiments. The acoustic pressures evaluated in the in vitro experiment were 292 kPa, 407 kPa, 555 kPa, 663 kPa, 746 kPa, and 816 kPa. Figure 6a shows that the amplitude of the subharmonic signals increased with increasing acoustic pressure at ambient pressures of 10 mmHg and 40 mmHg under the above acoustic pressures. The range of acoustic pressure for exciting subharmonic scattering in this study corresponded to the growth stage (0.3-0.6 MPa) and the saturation stage (above 0.6 MPa), consistent with Shi's findings [16]. As shown in Figure 6a, the subharmonic amplitude at 40 mmHg was always lower than that at 10 mmHg under the same incident acoustic pressure. In addition, the greatest decrease in the subharmonic amplitude reached 4.71 dB at 555 kPa, which indicated the maximum available pressure sensitivity of the subharmonic amplitude. In order to investigate the influence of acoustic pressure on the correlation between the subharmonic amplitude and ambient pressure, the subharmonic amplitude vs. ambient pressure curves were further measured at 10 mmHg, 20 mmHg, 30 mmHg, and 40 mmHg under the different acoustic pressures. Figure 6b shows the inverse linear relationships between the subharmonic amplitude and ambient pressure at acoustic pressures of 407 kPa, 555 kPa, and 663 kPa. As depicted in Table 1, the magnitudes of Pearson's correlation coefficients between the subharmonic amplitude and ambient pressure were higher than 0.9 when the acoustic pressures were 555 kPa, 663 kPa, and 746 kPa, although the result at 663 kPa was not statistically significant (p = 0.076). The highest correlation occurred at 555 kPa, with a correlation coefficient of −0.966, which also corresponded to the maximum ambient pressure sensitivity of 0.15 dB/mmHg. When the acoustic pressure was higher than 555 kPa, the difference between the subharmonic amplitudes at 10 mmHg and 40 mmHg decreased with increasing acoustic pressure. As a result, the sensitivity of the subharmonic amplitude to the ambient pressure decreased from 0.15 dB/mmHg to 0.068 dB/mmHg.
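The linear calibration between subharmonic amplitude and ambient pressure summarised in Table 1 can be sketched as follows. The numerical values in the arrays below are illustrative placeholders, not the measured data of this study.

import numpy as np
from scipy import stats

# Illustrative placeholders: mean subharmonic amplitude (dB) at each ambient
# pressure (mmHg) for one acoustic pressure.
pressure_mmHg = np.array([10.0, 20.0, 30.0, 40.0])
subharmonic_dB = np.array([-20.1, -21.6, -23.2, -24.6])

slope, intercept, r_value, p_value, std_err = stats.linregress(pressure_mmHg, subharmonic_dB)
print(f"sensitivity = {slope:.3f} dB/mmHg, Pearson r = {r_value:.3f}, p = {p_value:.3f}")

# Inverting the calibration line gives a pressure estimate from a measured amplitude:
estimated_mmHg = (subharmonic_dB - intercept) / slope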
Correlation between Subharmonic Amplitudes and IFPs Thirty 4T1 tumor IFP models were established, and the IFPs of all tumors ranged from 3 to 16 mmHg. This range is consistent with the results of a previous large-sample study showing that the IFPs of different subcutaneously transplanted tumors in mice ranged from 4.4 to 15.2 mmHg [4]. Because the in vitro experiment demonstrated that the maximum ambient pressure sensitivity occurred at an acoustic pressure of 555 kPa, the in vivo experiment was also carried out at 555 kPa. The subharmonic amplitude of the tumors decreased with increasing IFP. The correlation coefficient between the subharmonic amplitudes and IFPs was −0.848 (p < 0.01) (Figure 7). To verify the stability of the subharmonic IFP estimation, cross-validation was performed to obtain the estimated IFPs. As shown in Table 2, the ranges of the mean absolute errors and the RMSE between the IFP measured by the standard pressure monitor and the IFP estimated from the subharmonic amplitude in each random group were 1.83 to 2.95 mmHg and 2.25 to 3.60 mmHg, respectively. Paired t-tests demonstrated no statistical differences between the pressures measured by the pressure monitor and the estimated values (p > 0.05), which indicated the stability and applicability of IFP estimation using subharmonic scattering from UCA microbubbles. Discussion In our investigation, the ambient pressure sensitivity of Sonazoid's subharmonic amplitude exceeded that reported in the literature, primarily due to the selection of the driving frequency. The choice of the driving frequency is usually based on the specific tissue targeted in the in vivo application. In clinical practice, low-frequency ultrasound of 1-5 MHz is commonly used for the liver and heart, as they typically require a greater depth of penetration. In order to estimate the portal vein pressure and intracardiac pressure, Forsberg et al. utilized 2.5 MHz ultrasound waves to excite Sonazoid microbubbles and generate subharmonic scattering signals with a frequency of 1.25 MHz [24,25]. In their prior study on Sonazoid, the greatest pressure sensitivity of 0.08 dB/mmHg was attained for the subharmonic amplitude at a driving frequency of 2.5 MHz and an acoustic pressure of 0.35 MPa [17]. In contrast, superficial tumors such as those of the breast and thyroid require higher ultrasound frequencies for superior imaging resolution. Therefore, we employed acoustic waves with a frequency larger than 7.5 MHz for estimating tumor interstitial fluid pressure. Since Sonazoid has a resonance frequency of 4.4 MHz [31] and the optimal driving frequency for subharmonic excitation is, in theory, double the resonance frequency, the ideal driving frequency to generate subharmonic scattering for assessing the IFP of superficial tumors would be 8.8 MHz. Consequently, our current study achieved a high pressure sensitivity of 0.15 dB/mmHg at a driving frequency of 8.5 MHz and an acoustic pressure of 0.55 MPa. In addition, the pressure sensitivities (1.015 dB/mmHg) in the tumor models were significantly higher than those in the vascular phantom.
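Before discussing these results further, the random-split validation summarised in Table 2 can be sketched as follows. The regression direction (IFP predicted directly from amplitude), the group sizes, and the array names are assumptions of this sketch.

import numpy as np
from scipy import stats

def validate_once(subharmonic_dB, ifp_mmHg, rng, n_model=20):
    # One random model/validation split, as in the five-repeat cross-validation.
    idx = rng.permutation(len(ifp_mmHg))
    model, valid = idx[:n_model], idx[n_model:]
    # Calibration line fitted on the model group only
    slope, intercept, *_ = stats.linregress(subharmonic_dB[model], ifp_mmHg[model])
    predicted = slope * subharmonic_dB[valid] + intercept
    err = predicted - ifp_mmHg[valid]
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    t_stat, p_val = stats.ttest_rel(predicted, ifp_mmHg[valid])   # paired t-test
    return mae, rmse, p_val

# Hypothetical usage with the 30 tumour measurements:
# rng = np.random.default_rng(0)
# results = [validate_once(subharmonic_dB, ifp_mmHg, rng) for _ in range(5)]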
This difference between the in vitro and in vivo experiments was also observed in previous studies of Definity, Sonazoid, and SonoVue [25,31,34]. The main reason may be that the microbubbles exhibit more violent oscillation, more destruction, and more nonlinear scattering at body temperature (37 °C) than at the lower temperature of the in vitro experiments [35]. Tumor heterogeneity makes it possible for the same tumor type to exhibit different biological behaviors. The 4T1 orthotopic breast cancer models in this study exhibited an IFP range of 3-16 mmHg, which will influence the perfusion of the tumors. Whether the subharmonic-based IFP estimation depends on individual differences is important for the promotion of the technique. In our study, the optimal acoustic pressure of 555 kPa from the in vitro experiment was applied to the tumor IFP models. Excellent correlation and high pressure sensitivity between the tumor IFP and the subharmonic amplitudes extracted from tumor tissues were demonstrated (r = −0.848, p < 0.05, pressure sensitivity: −1.015 dB/mmHg). Moreover, a five-time cross-validation study showed relatively small errors (mean absolute error less than 2.95 mmHg, RMSE less than 3.60 mmHg). These results suggest the stability and universality of subharmonic-amplitude-based IFP estimation in tumor models. There are some limitations to this study. The optimal acoustic pressure, at which the subharmonic amplitudes were most sensitive to the ambient pressure, had to be determined by off-line processing in every experimental setting. The clinical application of this technique will require commercial transducers and ultrasound machines equipped with a real-time data processor. An acoustic pressure optimization program should also be developed to compensate for the attenuation caused by acoustic wave propagation in tissue. In the mouse models, the skin thickness was about 1 mm, so the attenuation of the transmitted 8.5 MHz acoustic wave in the skin was negligible before it reached the subcutaneously transplanted tumor; the depths of superficial tumors in humans, however, vary considerably. The attenuation caused by acoustic wave propagation in tissue is affected by tissue type and depth. The attenuation coefficients of fat and breast tissue are 0.48 dB/cm/MHz and 0.75 dB/cm/MHz, respectively [36], and the depth of breast lesions ranges from less than 1 cm to more than 5 cm. Patients with fatty breast composition and a lesion depth greater than 5 cm will experience more than 20 dB of attenuation at 8.5 MHz (for example, 0.48 dB/cm/MHz × 5 cm × 8.5 MHz ≈ 20.4 dB). Therefore, the output acoustic pressure should be compensated according to the depth of the lesion to ensure that the acoustic pressure applied to the lesion remains optimal. Another limitation is that the range of IFPs in the mouse models was limited (3-16 mmHg), which is lower than the reported IFPs of human tumors. Although the effectiveness of subharmonic amplitudes in estimating ambient pressure in the range of 10-40 mmHg has been demonstrated in vitro, a study with a wider IFP range and a larger animal sample needs to be performed to accumulate evidence and data for improving this technology. In addition, subsequent clinical studies will enroll breast cancer patients to evaluate the relationship between subharmonics and tumor grade, treatment outcome, and prognosis. Conclusions In this study, we obtained the optimal acoustic pressure and driving frequency in vitro for the ambient-pressure-dependent subharmonic scattering of UCA Sonazoid microbubbles and verified the excellent interstitial fluid pressure correlation and sensitivity obtained with these parameters in tumor models.
The results of the further cross-validation demonstrated the robustness of subharmonic-amplitude-based tumors' IFP estimation. Further in-depth studies will be conducted to improve the ultrasonic system for monitoring the efficacy of chemotherapy. Overall, this study suggested that subharmonic-amplitude-based IFP estimation provided a promising biomarker for the noninvasive and accurate evaluation of tumor microenvironments.
7,519.4
2023-05-01T00:00:00.000
[ "Physics" ]
Efficient atomistic simulations of radiation damage in W and W-Mo using machine-learning potentials The Gaussian approximation potential (GAP) is an accurate machine-learning interatomic potential that was recently extended to include the description of radiation effects. In this study, we seek to validate a faster version of GAP, known as tabulated GAP (tabGAP), by modelling primary radiation damage in 50-50 W-Mo alloys and pure W using classical molecular dynamics. We find that W-Mo exhibits a similar number of surviving defects as in pure W. We also observe W-Mo to possess both more efficient recombination of defects produced during the initial phase of the cascades, and in some cases, unlike pure W, recombination of all defects after the cascades cooled down. Furthermore, we observe that the tabGAP is two orders of magnitude faster than GAP, but produces a comparable number of surviving defects and cluster sizes. A small difference is noted in the fraction of interstitials that are bound into clusters. I. INTRODUCTION Nuclear energy is an integral part of modern society; nuclear fuels are millions of times more energy-dense than chemical ones, such as oil. Moreover, they release no greenhouse gases. The materials in nuclear reactors are exposed to intense irradiation, and the understanding of the consequences of this process on the durability and reliability of the materials is vital not only for existing power plants but more so for future fusion and next-generation fission reactors [1]. This motivates the search for new radiation-tolerant materials. Tungstenbased high-entropy alloys (HEA) are a class of materials that show promising resilience to radiation [2], making them particularly interesting in the field of nuclear energy applications. Molecular dynamics [3] (MD) is a widely used method to study how materials respond to radiation and gives insight into atomic-scale phenomena and their underlying mechanisms that are inaccessible by experimental means [4]. Considering specifically W-based alloys, Qiu et al. [5] found, by running collision-cascade simulations, that alloying Ta with W can decrease the size of dislocation loops, whilst retaining comparable defect production to W. Moreover, cascade simulations have shown Mo-based complex concentrated alloys to fare well under radiation [6]. However, the effects of collision cascades in W-based alloys are still fairly poorly understood. Interatomic potentials that describe the nature of atom interactions within the modelled material are essential for the validity and accuracy of simulation results. However, analytical potentials (potentials that have a fixed mathematical form, comprising only a few parameters) struggle to accurately describe more than a handful of phenomena, fundamentally restricting the use of their applications. Recently, a new approach to the development of interatomic potentials based on machine-learning (ML) algorithms was proposed [7,8]. Since the training database is generated from consistent density functional theory (DFT) calculations, some of the ML potentials excel at describing a multitude of different phenomena, giving more accurate results than their analytical counterparts [7][8][9]. The Gaussian approximation potential (GAP) [7] is a popular machine-learning potential, which has been proven to give results that are on par with quantummechanical simulation methods, and is capable of successfully describing a diverse range of phenomena [10,11]. 
GAP also reaps the benefits of classical potentials, being capable of simulating systems that are at least thousands of times larger than in quantum-mechanical methods. Despite this, GAP is still excruciatingly slow when put up against its traditional, analytical counterparts, such as the embedded atom method (EAM) potentials. In an attempt to retain the excellent array of properties of GAP, whilst making it faster to compute, the tabulated GAP (tabGAP) formalism was created [12,13]. The key feature of tabGAP is using only low-dimensional descriptor terms, omitting terms like the Smooth Overlap of Atomic Positions (SOAP) term [14], which is a vector in a space of hundreds or even thousands of dimensions for multi-component materials. The low-dimensional terms enable tabGAP to circumvent the expensive machine-learning prediction of GAP when computing atomic energies by using tabulation. Tabulation involves pre-computing the GAP energy predictions and mapping them onto low-dimensional grids. After tabulation, the resulting data grid can be used in conventional spline interpolation methods during simulations, which makes tabGAP faster. Perhaps even more importantly, the low-dimensional terms of tabGAP make it easier to develop for many-element materials like HEAs, because they need less training data than terms like SOAP [13]. Therefore, tabGAP could act as a gateway to efficient and accurate studies of exotic multi-component materials. In the present study, we test the tabGAP of Ref. [12], which was developed for a W-based HEA, namely molybdenum-niobium-tantalum-vanadium-tungsten (Mo-Nb-Ta-V-W), by modelling radiation effects. To compare the performance of tabGAP to other types of interatomic potentials in MD simulations, we choose to model 50-50 W-Mo alloys. We note that the high activation of Mo under neutron irradiation limits the use of this particular alloy for fusion applications; however, it could be used in small amounts, e.g. in fusion reactor diagnostics, and in non-fusion applications where neutron activation is not an issue. Our choice is motivated by the existence of both a GAP and an EAM for W-Mo [15,16]. Additionally, the results of this study give general insight into how 50-50 W-based refractory alloys behave. Radiation damage in both 50-50 W-Mo alloys and pure W is modelled by means of MD collision-cascade simulations using tabGAP, a SOAP-equipped GAP, and EAM. The simulation results are analysed for the number of surviving defects (point defects and their clusters). A. Software and potentials The simulations were run using the classical MD code, Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [17] (https://www.lammps.org/). The QUIP code [7] (https://github.com/libAtoms/QUIP) was used to enable the use of GAP with LAMMPS. The Open Visualization Tool (OVITO) [18] was used both for visualising simulation results and for defect analysis using the Wigner-Seitz method. Dislocations were analysed using the Dislocation Extraction Algorithm [19]. The Python library Matplotlib [20] was used for plotting simulation data. Cascades were run using four potentials: the EAM potential developed for W-Mo in Ref. [16] (hereafter referred to as W-Mo-EAM), the Ackland-Thetford-Zhong-Nordlund (AT-ZN) EAM potential developed for W in Refs. [21,22], the GAP developed in Ref. [15], and the tabGAP developed in Ref. [12].
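To make the tabulation idea described above concrete, the following Python sketch pre-computes a stand-in pair-energy term on a dense grid and then replaces its evaluation by spline interpolation. The Lennard-Jones placeholder is purely illustrative and is not the actual GAP two-body term; in tabGAP the true GAP prediction would be tabulated instead.

import numpy as np
from scipy.interpolate import CubicSpline

def expensive_pair_energy(r):
    # Placeholder for a costly ML prediction of a two-body energy E(r)
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

# 1. Tabulate once on a dense 1D grid, 2. interpolate cheaply during the MD run.
r_grid = np.linspace(0.8, 5.0, 2000)
table = expensive_pair_energy(r_grid)
spline = CubicSpline(r_grid, table)

r_query = np.array([1.1, 2.3, 4.7])
energies = spline(r_query)          # fast lookup, no ML evaluation at run time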
We chose the AT-ZN potential for pure W, for it is the most widely used potential for radiation damage simulations in W [23,24]. For example, it has shown good agreement with experiments and GAP at high doses [24], which makes a comparison to the other potentials useful. All four potentials were developed to be applicable for the simulation of radiation effects, i.e. joined with corresponding repulsive potentials, such as the ZBL potential in EAM [25] and DMol [26] in GAP and tabGAP, to enable a reasonable description of cascade development. It is worth noting that the present tabGAP is fitted to a HEA dataset, whereas the GAP is fitted to a W-Mo dataset. In the HEA set, there are less data for the W-Mo system, which makes a direct comparison between GAP and tabGAP difficult. For more details about the development of the GAP and tabGAP, see Refs. [12] and [15]. B. Selection of the primary knock-on atom Following the practice in [26], cascades were initiated by giving one atom, the primary knock-on atom (PKA), a recoil of a given energy towards the centre of the simulation cell. The PKAs were selected as follows. Firstly, we generate a random direction in three-dimensional space. Then, we define a point at a specific distance from the centre of the cell, in the aforementioned direction. Finally, the atom closest to this point is given the recoil in the aforementioned direction, towards the cell centre, to initiate the cascade. Higher recoil energies trigger more extensive cascades, hence the distance at which a PKA was selected, as well as the total number of atoms in the simulation cell, scale up with the recoil energy. These parameters are given in Table I. In LAMMPS, the atoms within a simulation cell are labelled by identifiers (identification numbers). Since the same atomic structure for a given material was used for all potentials, for consistency, in the simulations with different potentials, we selected as a recoil the atom with the same identifier. We assigned it with the same velocity in the same direction. Although the cells relaxed in different potentials may slightly deviate from one another, these differences are sufficiently small for a statistically averaged quantitative comparison of defect formation in different potentials. It is worth noting that because the PKAs were selected in random directions, they may move in channelling directions (which offer the least resistance to movement), and a few cascades overlapped with the periodic boundaries, in spite of the sufficient size of the simulation cells. These simulations were discarded and the simulations were re-run with new PKAs. The aim of the PKA selection method is to minimise the direction-related bias in the results. Regardless, the present results are not completely free of directional bias, since the channelling directions were excluded from the analysis. However, the main purpose of the current paper, which is to compare the results of different interaction models, is unaffected by this, since the probability of crossing the boundaries is the same for all interaction models. In fact, the number of failed simulations (where atoms enter the thermostatted border with at least 10-eV kinetic energy) was around five out of the 40 1-and 2-keV simulations, but only around two simulations for the rest of the energies (these energies gave rise to thermal spikes). C. Simulation setup Collision cascade simulations were run for 50-50 W-Mo alloys, and pure W, both with the body-centred cubic (BCC) lattice structure. 
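As an illustration of the PKA selection procedure described in Sect. B above, a minimal NumPy sketch might look as follows. The positions array, the cell centre, and the unit conventions are assumptions of this sketch rather than details taken from the actual LAMMPS scripts.

import numpy as np

def select_pka(positions, cell_centre, distance, rng):
    # Random direction, point at the chosen distance from the cell centre along it,
    # then the atom closest to that point is the PKA; the recoil points back at the centre.
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)
    target = cell_centre + distance * direction
    pka_index = int(np.argmin(np.linalg.norm(positions - target, axis=1)))
    return pka_index, -direction

def recoil_velocity(energy_eV, mass_amu, direction):
    # Velocity (Å/fs) giving the PKA the requested recoil energy: v = sqrt(2E/m).
    # 1 eV = 1.602e-19 J, 1 amu = 1.66054e-27 kg; 1 m/s = 1e-5 Å/fs.
    speed_ms = np.sqrt(2.0 * energy_eV * 1.602e-19 / (mass_amu * 1.66054e-27))
    return speed_ms * 1e-5 * direction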
The atoms in the W-Mo alloys are randomly ordered. Periodic boundary conditions were used in every simulation. In W-Mo alloys, the cascades initiated by PKA with energies from 1 to 20 keV were run using the EAM and tabGAP potentials, but only 1 to 5-keV cascades were run using GAP, due to its much higher computational cost (GAP is two orders of magnitude slower than the tabGAP we used and four orders of magnitude slower than the EAMs; see Tab. II). In pure W, simulations were run using the AT-ZN EAM, the W-part of the W-Mo-EAM potential and the tabGAP to study stable defects and their clusters with PKA energies of 1 to 10 keV. Only 1-keV cascade simulations were run in pure W with the GAP. For each PKA energy, statistics were collected over 40 simulations with different initial seeds for random-number generation, except for GAP 5 keV in W-Mo. In the latter case, only 25 simulations were run, again due to the prohibitively high computational cost of these simulations. Even the case of 25 simulations should be sufficient, as has been studied in Ref. [27]. For consistency, in all applied potentials, we used cells of the same composition. Therefore, we relaxed the simulation cells with the corresponding potential before cascade simulations. The relaxation was done by imposing a Nosé-Hoover thermostat and barostat to the cells [28,29], and waiting for the pressure and volume of the cells to become stable. Cascade simulations started out at a temperature of 300 K, and had a Nosé-Hoover thermostat applied to a 6-Å thick shell at the boundary of the simulation cells, to cool the cell down to its initial temperature, which mimics the much larger bulk material surrounding the cascade region. During the cascade simulations, no pressure control was used. The simulation time was chosen such that the final temperature is sufficiently close to the initial 300 K and the cascadeinduced defect evolution has stopped. For each W-Mo simulation, it was 100 ps, with the exception of 5-keV GAP simulations, where the shortest simulation managed to run for about 71 ps. The shorter run-time was deemed a non-issue, as will be discussed in more detail in section III A. For pure W, a shorter simulation time of 60 ps was sufficient. Due to the nature of the cascade simulations, the initially-high kinetic energies of atoms (high velocities) decrease over time. For simulation efficiency, an adaptive time-step [30] was used. The magnitude of the adaptive time-step changes dynamically in response to atomic velocities, starting out small and ultimately reaching a fixed maximum value, which was chosen to be 3 fs. In the MD simulations, electrons are not explicitly modelled, however, they do have a substantial role in energy dissipation for the collision energies involved in the cascades of this study [31]. To emulate the energy loss due to electronic excitations of high-energy atoms, electronic stopping data were used to determine the magnitude of the electronic stopping power that the atoms experience at a given kinetic energy. A cut-off kineticenergy threshold of 10 eV was used and the electronic stopping was applied to all atoms with kinetic energy higher than this. The stopping power for the W-Mo alloys was generated using the SRIM-2013 code [32,33], while the stopping power for the pure W was the same as in the earlier work [23], generated with the ZBL-96 code [25]. 
In the energy range of interest for the current study (≤ 20 keV, well below the maximum in the electronic stopping power), the stopping power in both codes is based on the Lindhard stopping model [34]. Hence, the possible difference in the stopping powers generated by the two methods will have a negligible effect on defect formation. In addition to the cascade simulations, the mobility of interstitials was determined using tabGAP in both pure W and 50-50 W-Mo cells. Simulation cells of perfect BCC lattices of 2 000 atoms with 5-6 manually added split-interstitials in random positions were modelled for 1 ns of simulated time using a 3-fs time-step. A single W simulation was run at 600 K, and one W-Mo simulation at both 600 K and 1200 K. A thermostat and barostat were applied to these cells, making them NPT ensembles. The purpose of these simulations was to obtain a qualitative understanding of the differences in the clustering of interstitials between W-Mo and pure W during the post-cascade evolution of defects in these materials. Lastly, we studied the binding energies of first-nearest-neighbour (1NN) divacancies in pure W and various compositions of W-Mo at 0 K, in lattices that, when devoid of vacancies, consisted of 432 atoms. The binding energy of a divacancy was defined as E_b = E_form,1 + E_form,2 − E_form,divac, where E_form,1 and E_form,2 are the formation energies of the two constituent vacancies (obtained from lattices with only one of these vacancies), and E_form,divac is the formation energy of the divacancy. The formation energies for single vacancies are given by E_form = E_dist − (N_dist/N_undist) E_undist, where E denotes the total potential energy and N the total number of particles of the system specified by the subscripts; the subscript dist (disturbed) denotes the system with the vacancy, and undist (undisturbed) the defect-free system. The divacancy formation energy is given by the same expression, E_form,divac = E_dist − (N_dist/N_undist) E_undist, where the subscript dist now refers to the system with the 1NN divacancy. For every composition of the W-Mo alloys, we inserted a 1NN divacancy into 15 randomly-generated lattices (30 lattices for the W-Mo-EAM). As the binding energy of a 1NN divacancy depends on the chemical composition of its surroundings, this analysis does not provide a definitive answer to the binding energies of a random W-Mo alloy. Rather, the analysis is done to ascertain what effect the addition of Mo to W has on the stability of divacancies. For comparison, we also computed the divacancy binding energy in DFT for the 50-50 W-Mo composition. For computational reasons, we used a smaller lattice (128 atoms) and computed the average over 5 different randomly generated lattices. We used the VASP DFT code [35,36] with projector augmented-wave potentials [37] (the _sv variants in VASP), the PBE generalized gradient approximation exchange-correlation functional [38], a 500 eV cutoff energy for the plane-wave basis, a 0.15 Å^-1 maximum k-point spacing on Monkhorst-Pack grids [39], and 0.1 eV Methfessel-Paxton smearing [40]. D. Cluster analysis After a cascade, any given two defects in the simulation cell were considered to belong to the same cluster if they were separated by less than a chosen cut-off distance. The definitions of the cut-off radii for interstitial and vacancy clusters are the same as in Ref. [41]; for interstitial clusters, the cut-off radius is (r_3NN + r_4NN)/2, and for vacancy clusters (r_2NN + r_3NN)/2, where r_kNN is the distance to the kth nearest neighbour.
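A minimal Python sketch of the cluster analysis just described (defects grouped using the stated cut-off radii) could look as follows. Periodic images are ignored for brevity, and the lattice constant is the W-Mo value quoted in the next paragraph; both simplifications are assumptions of this sketch.

import numpy as np
from scipy.spatial import cKDTree

def cluster_sizes(defect_positions, cutoff):
    # Two defects belong to the same cluster if they lie within the cutoff;
    # clusters are built with a simple union-find over the neighbour pairs.
    n = len(defect_positions)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    tree = cKDTree(defect_positions)
    for i, j in tree.query_pairs(r=cutoff):
        parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    _, counts = np.unique(roots, return_counts=True)
    return counts

# BCC neighbour distances from the lattice constant a0 (W-Mo value used in the analysis):
a0 = 3.1738
r = {1: a0 * np.sqrt(3) / 2, 2: a0, 3: a0 * np.sqrt(2), 4: a0 * np.sqrt(11) / 2}
sia_cutoff = (r[3] + r[4]) / 2      # interstitial clusters
vac_cutoff = (r[2] + r[3]) / 2      # vacancy clusters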
The cut-off radii depend on the lattice constant of the cell, which for W-Mo was set to 3.1738 Å, as the lattice constants yielded by all three potentials differed from this by less than 1%. The lattice constant for equiatomic W-Mo at 300 K as predicted by tabGAP is 3.1800 Å, by GAP 3.179 Å, and by W-Mo-EAM 3.160 Å. The experimental lattice constant for the 50-50 W-Mo system is roughly 3.16 Å [42]. The good agreement of the EAM lattice constant with experiment is due to the explicit fitting of the potential to experimental values, whereas the present GAP-based potentials use the PBE exchange-correlation functional in DFT, which is known to overestimate lattice constants [43]. For pure W, the lattice constant at 300 K given by tabGAP is 3.1892 Å, by W-Mo-EAM 3.1714 Å, and by AT-ZN EAM 3.1659 Å. A. Defect formation and mobility The interstitials produced in a 10-keV (tabGAP) cascade simulation in W-Mo are shown in Fig. 1. One can see that single split-interstitials are oriented along different ⟨111⟩ directions, while in the SIA cluster (centre of the snapshot) the interstitials are aligned along the [111] direction parallel to one another, which is consistent with the shape of the clusters observed earlier in tungsten [44]. The mean number of Frenkel pairs as a function of the PKA energy is presented in Fig. 2 for both materials. It should be noted that the results of all simulations were included when evaluating averages and standard errors related to the number of defects, even those that ended with no defects. Information on how the results of individual simulations are distributed around the mean is illustrated in Fig. 3. As shown in Fig. 2, tabGAP and GAP produce a comparable number of defects. At 5 keV in W-Mo, however, tabGAP produces slightly more defects, though, given the standard error, the difference can be as low as about 1 to 2 defects. The W-Mo-EAM, on the other hand, produces significantly more defects across the board, in both W-Mo and W. This is likely due to the threshold displacement energies reported in Ref. [16] being too low for the present W-Mo-EAM, although results were only reported for pure Mo. We also observe that the predictions made by the AT-ZN EAM and tabGAP for the mean number of surviving defects are similar, although the numbers predicted by tabGAP are slightly higher. The violin plots (Figs. 3a, 3c, and 3e) show that, at lower PKA energies in W-Mo, some individual cascades end with recombination of all defects; even in one W-Mo-EAM 1-keV simulation, the cell completely recovered from the damage after the cascade had cooled down. In W, defect recombination was not observed at any of the tested PKA energies, though looking at Fig. 2, tabGAP and GAP describe W as producing roughly the same number of defects as W-Mo (given the standard errors), whereas W-Mo-EAM predicts a greater mean number of defects in W than in W-Mo. In Fig. 4, one can discern the temporal evolution of temperature and defect formation in 5-keV W-Mo and W cascade simulations. We note that the temperature during the highly non-equilibrium peak of the cascade is not a conventional equilibrium temperature, but a measure of the average kinetic energy E_kin of the system transformed to a temperature T using E_kin = (3/2) N k_B T. The absolute value of the temperature is not meaningful, as it depends on the number of atoms N in the simulation cell. However, the time dependence of T is a good illustration of the duration of the non-equilibrium phase of a collision cascade. From Fig.
4, it is apparent that defects stop being produced shortly after the initial spike in temperature caused by the development of the cascade. W-Mo demonstrates a more efficient recombination of the defects produced during the initial phase of the cascades than W; W-Mo has an initial spike of around 130 defects, whereas W has around 100 defects, yet both materials end up with roughly the same mean number of defects. Furthermore, the temperature is removed from the W-Mo cell more efficiently by the W-Mo-EAM potential than by GAP and tabGAP, both of which gave similar predictions. This is apparent from the comparison of the temperature evolution in the simulation cell after the active cascade phase under the same boundary conditions in all three potentials. This discrepancy may be explained not only by different lattice thermal conductivities but also by cascade size and shape. The analysis of the interstitial-mobility simulations revealed that, at a given temperature, interstitials are far less mobile in W-Mo than in W; in W-Mo, the interstitials showed effectively no movement even at 600 K. At a temperature of 1200 K, the mobility of W-Mo interstitials rivalled that of pure-W interstitials at 600 K. The interstitials were observed to migrate mainly along a crowdion ⟨111⟩ direction in both W-Mo and W. Considering that interstitials in W-Mo at 600 K are practically immobile on the MD time scale, and that the temperature even at 5 keV drops far below 600 K during the first few picoseconds, the shorter run-time of the GAP 5-keV simulations (the shortest was 71 ps) most likely had no effect on defect formation and clustering. In pure W, the temperature was observed to decrease faster than in W-Mo, having reached 300 K long before 60 ps had transpired in the 5-keV simulations, as indicated in Fig. 4. This indicates that the lattice thermal conductivity is significantly higher in pure W than in random W-Mo alloys. B. Defect clustering The Mo concentrations in interstitial clusters of 5-keV simulations are depicted in Fig. 5, wherein Mo-Mo is shown to be the predominant type of split-interstitial. Moreover, tabGAP clusters have a slightly larger fraction of W than GAP. We note that the present tabGAP was trained for Mo-Nb-Ta-V-W, which means that a smaller fraction of its training data describes W-Mo interactions than for the GAP, which was trained directly for W-Mo alloys. Nevertheless, all three potentials agree that Mo atoms are predominant in interstitial clusters in W-Mo. Statistical distributions of vacancy and interstitial clusters are shown in Fig. 7. Further distributions for the remaining tested energies are given in the Supplementary material. Given the standard errors, the comparison between the different potentials is satisfactory. Some of the clusters are seen in some potentials but not in others. Overall, the GAP predicts smaller cluster sizes than the W-Mo-EAM potential and tabGAP. Fig. 7 shows that interstitial clusters are more prevalent in pure W than in W-Mo, which is reasonable given the increased mobility that interstitials in W have over those in W-Mo.
Differences between W-Mo and W in the clustering of interstitials at PKA energies lower than 5 keV are less consistent. This is due to the overall low probability of the formation of large clusters at these energies, which makes the data noisier and less statistically reliable. The interstitial clustering in W is similar in both tabGAP and the AT-ZN EAM, taking into account the margins of error. However, the W-Mo-EAM predicts that vacancies cluster more in pure W than in W-Mo, whereas tabGAP predicts the opposite. Moreover, the AT-ZN EAM predicts a higher number of vacancy clusters (size > 1) than tabGAP. We note that tabGAP predicts more efficient clustering of vacancies in W-Mo compared to W, which is in agreement with the divacancy binding energies in Fig. 6. However, DFT predicts that divacancies in W-Mo alloys are roughly as unstable as in pure W, suggesting that alloying may not affect vacancy clustering. Fig. 6 shows that none of the potentials (not even GAP) reproduce the DFT trend for divacancy stability, although the divacancy binding energies predicted by GAP and tabGAP are much closer to DFT than those of the AT-ZN EAM (for pure W) and the W-Mo-EAM. We note that the small magnitudes (≈ 0.1 eV) of the binding energies (including the negative binding energies) are much smaller than the kinetic energies in the collision cascades (> 100 eV), and hence they are not expected to have a strong effect on the results of the present study. Moreover, it has been shown that, despite their negative binding energy, divacancies are fairly stable in W because of high dissociation energies (≈ 1.7 eV) [45]. For that reason, even in the long-term evolution of defects, this inaccuracy in the binding energy is not expected to markedly affect vacancy clustering in these materials, since the energy barriers for vacancy migration are usually over 1.5 eV [45]. However, for a more accurate description of cluster dynamics in cascades, we recommend re-training the tabGAP and specifically including the defects of interest, to ensure that the machine-learning algorithm sees the corresponding configurations during training. The clustered fraction of defects is a quantity that allows us to analyse the clustering efficiency of the formed defects in a given potential. It is evaluated as f_cl = N_c / N_tot, where N_c is the number of defects, vacancies or interstitials, bound into clusters with a size greater than 1, and N_tot is the total number of defects of the corresponding type. This quantity is shown for W-Mo and W in Fig. 8. The cases with zero defects are excluded from this analysis because the clustered fraction is not defined in such cases; doing so does not affect the analysed quantity. We see that the clustered fraction in tabGAP follows similar behaviour to that obtained with both EAM potentials. However, the clustered fraction for interstitials in W-Mo in tabGAP is somewhat lower compared to the W-Mo-EAM potential. In pure W, the interstitial clustered fraction is quite similar for the EAMs and tabGAP, given the standard errors, whilst GAP resulted in more efficient clustering of interstitials. In the case of vacancies, tabGAP predicted clustering similar to that of GAP in both W and W-Mo, with the only noticeable difference between the results being at 5 keV. In general, we note that the tabGAP prediction of the interstitial clustering is less consistent with that of GAP, at least within the statistical uncertainty available in the present work.
This can be explained by the smaller training dataset for the W-Mo pair within the 5-element tabGAP potential. The results of tabGAP imply that interstitials in W have a substantially higher tendency to form clusters than in W-Mo. Surprisingly, both W-Mo-EAM and GAP predict a rather similar tendency for clustering, although, in all three potentials, we see that the interstitials in W cluster more efficiently than in W-Mo. This is reasonable, given that interstitials are more mobile in W and can therefore form clusters more swiftly than in W-Mo. In the case of vacancies, only tabGAP and GAP reliably predict that vacancies are less clustered in W, as discussed above. C. Dislocation loops The energetically most stable dislocation loops in W are those with Burgers vectors of 1/2 ⟨111⟩ [47,48]. In all W-Mo cascades, there were only three cases of dislocations identified by the DXA algorithm in OVITO, whereas pure W had only one case, in an AT-ZN EAM simulation. These dislocations were small loops of the interstitial type, formed in the 10- and 20-keV cascades (10 keV in the case of W). The observed dislocations were all 1/2 ⟨111⟩, as shown in Fig. 9. D. Performance It is imperative to discuss the difference in performance between the potentials, since it was the motivation for developing tabGAP. For example, 100-ps, 5-keV tabGAP simulations using 12 processing cores were completed in less than a day, whereas GAP required a run-time of three days to attain 70 ps of simulated time using 1 000 cores. It is worth noting that the tabGAP framework has been further developed after the present simulations using tabGAP had been performed. The new version developed in Ref. [13] has optimised code and cut-off radii, and includes an EAM-like energy contribution, which makes it both more accurate and faster than the tabGAP used in this study. In light of this, the performance of the newer tabGAP, called here enhanced tabGAP (e-tabGAP), was tested in addition to the four potentials used in this study. For more details and benchmarks of the e-tabGAP, we refer to Ref. [13]. The performance of the potentials was tested by running NPT simulations in 31 250-atom cells. These simulations were run for 2 000 time-steps of 3 fs, using 30 central processing unit cores. The results are provided in Table II. From Table II, it is evident how slow GAP is compared to the other potentials. The tabGAP used in this work is roughly two orders of magnitude faster than GAP, and two orders slower than the EAMs. With the newer version, e-tabGAP, the speed-up relative to GAP is three orders of magnitude, and it is only one order of magnitude slower than the EAMs. The primary source of the discrepancy between the EAM performances is the larger cut-off radii used in the W-Mo-EAM as opposed to the AT-ZN variant. To put the difference in the performances of GAP and e-tabGAP into perspective, consider the following example: given the same computational resources and the same task, a job that would take e-tabGAP three days would take GAP closer to 14 years.
The number of surviving Frenkel pairs in tab-GAP was found to be close to GAP, albeit always slightly higher, within the uncertainties given by the standard error of the mean. 3. TabGAP and GAP produced similar defectclustering, within the standard error bars, although there is some difference in the number of specific cluster sizes between the two potentials. 4. We also found that, overall, the fraction of interstitial atoms bound into clusters was smaller in tab-GAP than in GAP. The cause for this discrepancy may lie in the smaller training data for tabGAP. The differences between 50-50 W-Mo alloy and pure W in the primary radiation damage can be summarised as: 1. Interstitials at a given temperature in W-Mo were found to be substantially less mobile than in W. 7. However, we noticed slightly more efficient recombination of defects in the 50-50 W-Mo alloy, since there were several cases where the defects created in cascades fully recombined. This behaviour was not observed in pure W. Additionally, W-Mo was observed to recombine a greater fraction of defects produced during the early phase of the cascades. V. CONCLUSIONS The aim of this study was to analyse the benefits and possible drawbacks of a more efficient version of the machine-learning potential GAP, the so-called tab-GAP. In this study, we report the differences and similarities between pure W, and W-Mo (50:50) alloy with respect to the primary radiation damage as predicted by three potentials: tabGAP, GAP, and EAM. In W-Mo, the main difference between EAM and (tab)GAP is the number of surviving defects, which is significantly higher in the EAM potential. However, in pure W, the well-established AT-ZN EAM potential produces similar numbers of defects and clustering statistics to tabGAP, which are also fairly similar to the available predictions made by GAP and much lower than the values predicted by the W-Mo-EAM potential. We conclude that, overall, tabGAP produces similar results to GAP in cascade simulations in a random binary alloy, while being two orders of magnitude faster. This makes tabGAP a promising machine-learned potential for accurate modelling of low-and high-dose radiation damage in multicomponent alloys. VI. ACKNOWLEDGEMENTS We are grateful for funding from the Academy of Finland project HEADFORE (grant no. 333225). This work has been partly carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 -EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. The authors wish to thank the Finnish Grid and Cloud Infrastructure (FGCI) (persistent identifier urn:nbn:fi:research-infras-2016072533) for supporting this project with computational and data storage resources. Here we compare the time-integration error between the three potentials. To test this, we ran test simulations, using the velocity Verlet algorithm, in cells comprising 1 024 atoms, that were not connected to thermostats or barostats, making them N V E ensembles; ensembles where the total energy should stay constant. In Fig. 10, one can see the results from simulations for all of the potentials for varying values of time-step, using the aforementioned cell at a temperature of 500 K; the flatter the line, the better. 
Fluctuations of total energy in an NVE ensemble are due to time-integration error, caused by having a non-zero time-step. Interestingly, tabGAP shows erratic variation in total energy (Fig. 10a), whereas EAM and GAP show more consistency in the pattern of the variation. The erratic variation of tabGAP could be caused by interpolation error. Even so, the largest fluctuation per atom (5-fs time-step) is only ≈ 0.15 meV, whereas for GAP and EAM, respectively, these are ≈ 0.06 meV and ≈ 0.08 meV. The average kinetic energy of an atom in these simulations is (3/2) k_B × 500 K ≈ 65 meV. Therefore, changes in the energy of an atom caused by tabGAP are completely masked by thermal vibrations and are thus insignificant.
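The per-atom comparison made above can be reproduced from a logged total-energy time series with the short Python sketch below; the file name is hypothetical, and the per-atom thermal energy is the (3/2) k_B T value quoted in the text.

import numpy as np

K_B = 8.617333e-5                  # Boltzmann constant, eV/K

def energy_fluctuation_per_atom(total_energy_eV, n_atoms):
    # Largest total-energy excursion per atom (meV) in an NVE time series
    drift = np.max(total_energy_eV) - np.min(total_energy_eV)
    return 1000.0 * drift / n_atoms

# energies = np.loadtxt("nve_total_energy.dat")       # hypothetical log of the 1 024-atom cell
# fluct_meV = energy_fluctuation_per_atom(energies, 1024)
thermal_meV = 1000.0 * 1.5 * K_B * 500.0              # ≈ 65 meV per atom at 500 K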
8,029
2022-08-01T00:00:00.000
[ "Materials Science", "Physics" ]
Double sink energy hole avoidance strategy for wireless sensor network To solve the energy hole problem in wireless sensor networks with double sinks, a double sink energy hole avoidance strategy is proposed. The main idea is that two data sinks are set at fixed positions on both sides of the rectangular network to collect nodes data in the corresponding area of the network. In the network, sensor nodes are organized in non-uniform clusters. Clusters close to sink have a smaller cluster radius, and clusters far from sink have a larger cluster radius. According to the results of threshold training, monitoring area of double sink is dynamically adjusted based on the difference of energy consumption and load of nodes in the double sink monitoring area, so that the energy consumption load of nodes in the double sink monitoring area tends to be the same, so as to avoid the premature occurrence of energy hole phenomenon in the area with large load, leading to network failure. The experimental results demonstrate that the strategy proposed in this paper can efficiently balance the energy dissipation of double sink and prolong the network energy utilization efficiency and network lifetime. a large number of energy resources of undead nodes in the network. The simulation results in [3] showed that when there was an energy hole in the network, up to 90% of the remaining energy in the network was wasted when the network life was over. Therefore, avoiding or delaying the formation of energy holes is an effective and reliable way to prolong the lifetime of the whole network. One method of dividing the network into concentric circles and each of these circles owning a layer was proposed in [4], which considered the limitation of energy resources in these networks. A deployment strategy with using the least possible nodes was presented in [5], which prolonged network lifetime by avoiding energy holes and also ensured full sensing and communication coverage. The effective of the proposed method was verified analytically and validated through NS-2-based simulation experiments. A balanced energy consuming and hole alleviating, and energy-aware balanced energy consuming and hole alleviating algorithms were proposed in [6], which adopts data forwarding and routing selection strategy for the entire network, not only to balance the load distribution of entire network, but also to utilize the energy resource efficiently. In [7], dead nodes, packet loss and energy consumption of dead nodes were simulated on network simulator using Double Cost-based Function Routing and the results were compared with Greedy Geographic Routing. In [8], the problem of energy hole creation in depth-based routing techniques and a technique to overcome the deficiencies in existing techniques were devised. Besides addressing the energy hole issue, the proposition of a coverage hole repair technique is also part of this paper. In areas of the dense deployment, sensing ranges of nodes redundantly overlap. An efficient RF energy harvesting scheme using multiple dedicated RF sources to avert energy holes was proposed in [9], and the performance of multi-hop WSN with wireless energy transfer in terms of energy charged, number of energy transmitter's, throughput and outage in the network was verified. Fasee [10] proposes an energy-efficient traffic prioritization for medium access control protocol, which provides sufficient slots with higher bandwidth and guard bands to avoid channels interference causing longer delay. 
Azam [11] presents a balanced load distribution scheme to avoid energy holes created due to unbalanced energy consumption in UWSNs. Applying the optimal condition, Jia [12] proposes a novel sensor redistribution algorithm to completely eliminate the energy hole problem in mobile sensor network. Sharma [13] proposes a corona model-based approach to enhance network's lifetime by balancing energy depletion rate across network and avoiding energy hole around sink. Sushil [14] proposes a quantum inspired green communication framework for energy balancing in sensor enabled IoT systems. Verma [15] proposes a fuzzy logic-based clustering algorithm for WSN to extend the network lifetime, by utilizing the concept of average energy-based probability and average threshold for appropriate cluster heads selection. Energy-efficient zone-based dual subsink protocol with dual cluster head was presented in [16], which is developed to schedule the data transmission of these relay nodes with mobile sink. In a word, there are two main problems in the existing energy hole avoidance schemes: one is that many schemes are put forward too idealized and difficult to implement, which is easy to cause unnecessary waste of resources; the other is that most of the current schemes only consider the energy problems, and for other aspects, such as network delay, which are still lack of consideration, we should find a balance point to improve the life cycle and reduce the delay as much as possible. All those researchers above mentioned have positive effect to investigate the energy hole in wireless sensor network. They adopt different data transmission strategies, sensor node deployment strategies [17], non-uniform clustering [18] and appropriate routing protocols [19] are used to delay or avoid the occurrence of energy hole as much as possible. However, most of these strategies are to balance the energy consumption load of nodes in the single sink monitoring area as much as possible in the single sink network environment, so as to avoid energy hole. In the multisink environment with larger network scale, how to make the nodes in different sink monitoring areas have balanced energy consumption, so as to avoid the energy hole phenomenon, which has become an urgent problem to be solved. With the purpose of avoiding the energy hole problem in large-scale sensor networks, an energy hole avoidance strategy based on double sink nodes is proposed in this paper. Compared with the existing energy hole avoidance strategy, the main contributions of this paper can be summarized as follows. 1 To solve the energy hole problem in wireless sensor networks with double sinks, a double sink energy hole avoidance (DS-EHA) strategy is proposed. 2 Double sink divides the network into two areas for monitoring. According to the load difference between the nodes in the double sink monitoring area, this method dynamically adjusts the area of the double sink monitoring area and solves the energy hole problem in the double sink network by the way of double sink cooperative work. 3 A novel end adjustment threshold training mechanism is proposed, which makes the adjustment of double sink monitoring area more efficient and reasonable. The organization of this paper is as follows. Section 1 introduces the research background and development status of the subject. Section 2 illustrates the wireless sensor network system model and problem descriptions. The DS-EHA design scheme in detail is presented in Sect. 
3, including the cluster routing strategy, the dynamic adjustment strategy of double sink monitoring area and the selection of target threshold. Then, network environment parameter selections for the proposed algorithm and simulation comparison are shown in Sect. 4. Section 5 concludes this paper. Network deployment model Assuming that there are N sensor nodes in total, each node has the same initial energy and is randomly distributed in a two-dimensional rectangular region. Each node is pre-assigned with a unique identifier ID in the whole network, and they can obtain the distance from the signal source according to the strength of the received signal. The two sink nodes are arranged symmetrically on both sides of the rectangular network, and their transmitting power can be adjusted and can communicate with each other. Energy consumption model. The same energy model in the literature [20] is adopted in this paper: 1 The energy required by the sensor to transmit a k-bit data packet to the node at distance d is 2 The energy required to receive a k-bit packet is is the threshold, when the distance between sending node and receiving node is less than d 0 . The energy dissipation of the data transmitted by the sender is proportional to the square of the distance. Otherwise, it is proportional to the fourth power of distance. E elec represents the energy consumption per bit of data sent or received, ∂ fs d 2 and ∂ mp d 4 are the energy consumption of transmitting data amplifier per bit. Problem description Each node has a different communication range, for example, node i has a communication range of R i . Therefore, if a node is not in the communication range of another node, it cannot communicate directly, and the forwarding of the relay node is required. By the idea of wheel, before the start of each round to calculate the best position of the base station and the base station in the epicycle position is unchanged, the cycle is represented by T . The base station shall be within the permitted area. In this paper, the graph is used to represent the wireless sensor network. E represents the communication link in the network. If there is a link between node i and node j , the edge i, j ∈ E . Because some nodes close to the base station consume energy too fast, they are the bottleneck of the entire network. If there is only one sink at rest in the network, it will be difficult to avoid forming "energy volution" near the sink. In addition, obtaining a good energy balance often leads to a large network delay, so it is necessary to design a routing algorithm to make a balance between energy optimization and delay reduction. DS-EHA scheme design For large-scale wireless sensor networks, this paper adopts the double sink network structure. For the following two reasons, firstly, the communication ability of sensor nodes is limited, in addition, the communication range of sink nodes responsible for data receiving and processing is also difficult to cover the whole network. Definition 1. As shown in Fig. 1, A and B are sink-1 and sink-2, AS and BS are the perceptual diameter of sink-1 and sink-2, respectively. H is a node in the region, SQ is the perception dividing line, and the shadow area is in the monitoring area of sink-1 and sink-2 at the same time. Shadow area is called the repetitive coverage area of the network. Cluster routing strategy In order to balance the energy consumption of each cluster in the network, the nodes in DS-EHA are organized in a non-uniform way. 
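The energy model above is the standard first-order radio model. A minimal sketch is given below; the parameter values (E_elec, the amplifier coefficients for the free-space and multipath terms, and the packet size) are typical figures assumed only for illustration, since the excerpt does not list the values used in the simulations.

```python
import math

# Typical first-order radio parameters (assumed for illustration; the excerpt does not give values)
E_ELEC = 50e-9       # J/bit consumed by transmitter/receiver electronics
EPS_FS = 10e-12      # J/bit/m^2, free-space amplifier coefficient (the d^2 term)
EPS_MP = 0.0013e-12  # J/bit/m^4, multipath amplifier coefficient (the d^4 term)
D0 = math.sqrt(EPS_FS / EPS_MP)  # threshold distance d0 separating the two propagation regimes

def tx_energy(k_bits: int, d: float) -> float:
    """Energy to transmit a k-bit packet over distance d: d^2 law below d0, d^4 law above it."""
    if d < D0:
        return k_bits * E_ELEC + k_bits * EPS_FS * d ** 2
    return k_bits * E_ELEC + k_bits * EPS_MP * d ** 4

def rx_energy(k_bits: int) -> float:
    """Energy to receive a k-bit packet."""
    return k_bits * E_ELEC

if __name__ == "__main__":
    print(f"d0 = {D0:.1f} m")
    print(f"tx 4000 bits over  50 m: {tx_energy(4000, 50):.2e} J")
    print(f"tx 4000 bits over 150 m: {tx_energy(4000, 150):.2e} J")
    print(f"rx 4000 bits           : {rx_energy(4000):.2e} J")
```

With these illustrative figures d0 is roughly 87.7 m, which is why multi-hop forwarding through nearby cluster heads is cheaper than long direct transmissions to the sink.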
The clusters close to sink have smaller cluster radius, while those far from sink have larger cluster radius. As shown in Fig. 1, node H is in the repeated coverage area of the network. According to the signal strength of the received broadcast message, node H calculates the distance − → AS and − → BS from sink-1 and sink-2, respectively. Then, node H calculates the distance d h between the node and the perceived boundary according to formula (3). where − → AH and − → BH are the distance from node H to ink-1 and sink-2, respectively, and − → AB is the horizontal distance between sink-1 and sink-2, as shown in Fig. 1. If d h ≥ 0 is greater than or equal to 0, node H is at the right of the perception boundary. If d h < 0, node H is at the left of the perceived boundary, node H is clustered in the sink-1 monitoring region, and the data is transmitted to the sink-1 direction. When nodes are clustered, they transmit data to sink in a multi-hop mode. The cluster-head node with the largest residual energy and closer to the sink is selected as the next-hop data forwarding node in its neighbor cluster heads. Dynamic adjustment strategy of double sink monitoring region Assume that the monitoring area of sink-1 and sink-2 is equal at the beginning of the network. The randomness of node distribution in the network makes it difficult for the number of nodes in the sink-1 and sink-2 monitoring area and the load of nodes in the routing process to be the same. If the load difference between nodes in the sink-1 and sink-2 monitoring area is large at the beginning, the load difference between nodes in different sink monitoring areas will be larger and larger as time goes on, and the nodes in the area with higher load will die earlier, thus resulting in the phenomenon of energy volution. In order to balance the load of nodes in the network and avoid the occurrence of energy volution, the DS-EHA algorithm proposed by us is oriented by the load difference between double sink nodes and dynamically adjusts the area of double sink monitoring area. Adjust the design of discriminant function The energy consumption rate of the nodes in the sink-1 and sink-2 monitoring area can well reflect the load difference of the nodes in the sink-1 and sink-2 monitoring area. Fig. 1 Repeated coverage area by double sinks The higher the energy consumption rate of the node, the greater the load of the node in the region. After each round of data collection, the difference value of energy consumption rate of nodes in the sink-1 and sink-2 monitoring area was used as the adjustment discrimination function value of the current round of the network in DS-EHA algorithm. If the energy consumption rate of nodes in the sink-1 monitoring area is higher than sink-2, the area of sink-1 monitoring area will be reduced. The reduced data in the sink-1 monitoring area were transmitted to the sink-2 monitoring area. Conversely, if the energy consumption rate of nodes in the sink-1 monitoring area is smaller than sink-2, the area of the sink-1 monitoring area will be increased. The current network rotation is represented by m, h is the region where the node belongs to, h ∈ (1, 2) , Judge(x) is the adjustment discrimination function of the network, Judge(m) is the value of the adjustment discrimination function of the current wheel, and Judge(T) is the end adjustment threshold. 
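The cluster-routing rules just described (the boundary test on d_h and the next-hop choice among neighbor cluster heads) can be sketched as follows. Formula (3) is not reproduced in the extracted text, so the expression used for d_h below is one plausible reconstruction: the node's position along the sink-1 to sink-2 axis is recovered from its two ranging distances, and d_h is its signed offset from the dividing line. The helper names and the example numbers are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class ClusterHead:
    node_id: int
    residual_energy: float
    dist_to_sink: float

def coord_along_ab(d_ah: float, d_bh: float, d_ab: float) -> float:
    """Position of node H along the sink-1 -> sink-2 axis, from its distances to the two sinks."""
    return (d_ah ** 2 - d_bh ** 2 + d_ab ** 2) / (2.0 * d_ab)

def boundary_offset(d_ah: float, d_bh: float, d_ab: float, r_sink1: float) -> float:
    """Signed distance d_h of H from the perception dividing line (a plausible reading of formula (3))."""
    return coord_along_ab(d_ah, d_bh, d_ab) - r_sink1

def assign_region(d_ah: float, d_bh: float, d_ab: float, r_sink1: float) -> int:
    """Return 2 if H lies to the right of the dividing line (sink-2 region), otherwise 1."""
    return 2 if boundary_offset(d_ah, d_bh, d_ab, r_sink1) >= 0 else 1

def next_hop(neighbors: list[ClusterHead], own_dist_to_sink: float) -> ClusterHead | None:
    """Among neighbor cluster heads closer to the sink, pick the one with the most residual energy."""
    closer = [ch for ch in neighbors if ch.dist_to_sink < own_dist_to_sink]
    return max(closer, key=lambda ch: ch.residual_energy, default=None)

if __name__ == "__main__":
    # Node H ranged 120 m from sink-1 and 90 m from sink-2; the sinks are 200 m apart
    # and sink-1 currently monitors the first 100 m of the axis.
    print("region:", assign_region(120, 90, 200, 100))
    heads = [ClusterHead(3, 0.8, 60), ClusterHead(7, 0.5, 40), ClusterHead(9, 0.9, 130)]
    print("next hop:", next_hop(heads, 110))
```

Ties between cluster heads with equal residual energy could additionally be broken by distance to the sink; the sketch simply takes the maximum-energy candidate.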
ϕ h m represents the energy consumption rate of the nodes in the sink monitoring area in each round, E h inti−m represents the residual energy of the nodes in the sink monitoring area at the beginning of each round, E h rem−m represents the residual energy of the nodes after each round, and then, the energy consumption rates of the nodes in sink1 and sink2 monitoring area are respectively: According to formula (4), the adjustment discrimination function of the network is: Implementation of dynamic adjustment strategy Through the dynamic adjustment of the sink-1 and sink-2 perception radius, the area of the sink-1 and sink-2 monitoring area was adjusted. AS can be seen from Fig. 1, when the sink-1 and sink-2 sensing radii changed, the change value of the repeated region in the monitoring area was S , and the step size adjustment factor was . The length of the rectangular region in this paper was W, the width was Z, and the total area was S wz . − → AS ′ and − → BS ′ were sink-1 and sink-2, respectively. After adjusting the area of the monitoring area, then According to formulas (6), (7) and (8), the sink-1 and sink-2 sensing radii can be obtained after adjusting the area of the monitoring area. If ϕ 1 m − ϕ 2 m ≥ 0 , the monitoring area of sink-1 should be reduced, then If ϕ 1 m − ϕ 2 m < 0 , the monitoring area of sink-1 should be increased, then The perceived radius of the corresponding sink-2 detection region is: Selection of target threshold When the adjustment discriminant function Judge(m) is greater than or equal to the end adjustment threshold Judge(T ) , continuing to dynamically adjust the area of the sink-1 and sink-2 monitoring area until Judge(m) < Judge(T ) , ending the adjustment of the area of the sink-1 and sink-2 monitoring area, that is, the area of the sink-1 and sink-2 monitoring area remains unchanged. It can be seen that the end of the adjustment threshold Judge(T ) size directly affects the adjustment of the sink-1 and sink-2 monitoring area. If the end adjustment threshold is set too small, the network may be in an adjustment state all the time. If the setting of the end adjustment threshold is too large, it may cause the network to stop the adjustment, and the load of nodes in the sink-1 and sink-2 monitoring area is still significantly different. Both are detrimental to extending network life. In order to make the adjustment of double sink monitoring area more efficient, a reasonable end adjustment threshold is needed. DS-EHA constructs threshold training trees according to the value of each adjustment discriminant function in the network and obtains the current optimal end adjustment threshold through the threshold training tree. In the process of ending threshold training strategy, assuming that Judge(C) and Judge(P) represent training thresholds and optimal training values, respectively, the end of the adjusting threshold Judge(T ) initial value is a negative integer, if the discriminant function value Judge(m) of the current wheel of the network is greater than the end adjustment threshold Judge(T ) , began to adjust the area of the double sink monitoring area, and construct a threshold training tree at the same time, start training the new Judge(T ) , on the contrary, the area of the double sink monitoring area with Judge(T ) 's value remains the same. 
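Before the construction of the threshold training tree is described, the per-round discriminant and the area adjustment discussed above can be sketched compactly. The numbered formulas referenced in this part ((4)–(8)) are not reproduced in the extracted text, so the conversion of the area step (total area divided by the step adjustment factor) into a shift of the dividing line, and the example field dimensions, are assumptions made only for illustration.

```python
W, Z = 400.0, 200.0   # length and width of the rectangular field (assumed example values, m)
S_WZ = W * Z          # total area S_wz
STEP_FACTOR = 500.0   # step adjustment factor: a larger value gives a smaller per-round adjustment

def consumption_rate(e_init: float, e_rem: float) -> float:
    """Per-round energy consumption rate phi_h_m of one sink's monitoring region."""
    return (e_init - e_rem) / e_init

def judge_value(e1_init: float, e1_rem: float, e2_init: float, e2_rem: float) -> float:
    """Adjustment discriminant Judge(m): difference of the two regions' consumption rates."""
    return consumption_rate(e1_init, e1_rem) - consumption_rate(e2_init, e2_rem)

def adjust_boundary(x_split: float, j_m: float) -> float:
    """Shift the dividing line by an area step S_wz / STEP_FACTOR, shrinking the busier region."""
    dx = (S_WZ / STEP_FACTOR) / Z   # area step expressed as a shift of the vertical boundary
    if j_m >= 0:                    # sink-1 nodes drain faster -> shrink sink-1's area
        x_split -= dx
    else:                           # sink-2 nodes drain faster -> enlarge sink-1's area
        x_split += dx
    return min(max(x_split, 0.0), W)

if __name__ == "__main__":
    x = W / 2.0
    j = judge_value(e1_init=50.0, e1_rem=44.0, e2_init=50.0, e2_rem=47.0)  # sink-1 region busier
    print(f"Judge(m) = {j:+.3f}")
    print(f"dividing line moves from {x:.1f} m to {adjust_boundary(x, j):.1f} m")
```

Adjustment would stop once Judge(m) falls below the end adjustment threshold Judge(T) obtained from the training procedure described next.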
The process of constructing threshold training tree is as follows: firstly, the Judge(T ) of the end adjustment threshold is taken as the root node, and the value of the discriminant function of each round is taken as the leaf node. The number in the leaf node represents the current number of rounds. The arrow pointing up indicates that the value of the discriminant function of the current round is larger than that of the previous round. The arrow pointing down indicates that the discriminant function value of the current round is smaller than that of the previous round. As shown in Fig. 2, in the end of threshold training process, if the value of each leaf node shows a downward trend (e.g., ① → ②) before obtaining the optimal training value Judge(P) for this training, then this kind of training method is called the downward trend training method. On the contrary, as shown in Fig. 3, if there is an upward and downward trend (① → ② → ③) at the same time before the optimal training value is obtained, this kind of training method is called the mixed trend training method. Downtrend training method As shown in Fig. 2, because of Judge(1) > Judge(T ) , it is necessary to construct threshold training tree to find a new end adjustment threshold. Judge(2) < Judge(1) , which indicates that the load of nodes in sink-1 and sink-2 monitoring area tends to be balanced by dynamically adjusting the monitoring area of sink-1 and sink-2. At this time, we need to continue to adjust the monitoring area of sink-1 and sink-2 through the dynamic adjustment strategy of double sink monitoring area. Until sink-1 and sink-2 are obtained, the optimal area of monitoring area is obtained. The value of leaf nodes ① and ② shows a downward trend, indicating that the load of nodes in the network is tending to be balanced. When the first reverse trend point (leaf node ④) appears in the downward trend, the threshold training is completed. The values of the adjustment discriminant function corresponding to leaf node ① and leaf node ③ are the training threshold and the optimal training value of the threshold training, respectively. The cumulative mean value of all the leaf nodes between the training threshold Judge(C) and the optimal training value Judge(P) is taken as the new end adjustment threshold of network, as shown in Fig. 2. The new end adjustment threshold of network is: Mixed trend training method As shown in Fig. 3, before obtaining the optimal training value for this training, there are both upward trend (① → ②) and downward trend (② → ③ → ④) leaf nodes. The value of leaf node ② is larger than that of leaf node ①. The area of sink-1 and sink-2 monitoring area needs to be adjusted to reduce the load difference between two sink nodes. Assuming that the value of leaf node 3 is smaller than that of leaf node 2 by adjusting, and the value of leaf node shows a downward trend, the first inverse trend leaf node (leaf node⑤) is found by referring to the downward trend training method, and training threshold of the mixed trend training and optimal training value are obtained. Then, the new end adjustment threshold in the network is: After threshold training, sink-1 and sink-2 search the table NT (m) to find and follow the perception radius of sink-1 and sink-2 in the wheels corresponding to the optimal training value Judge(P) . As shown in Fig. 2, sink-1 and sink-2 start from the fifth round and follow the perception radius of sink-1 and sink-2 in the third round. In Fig. 
3, sink-1 and sink-2 begin with the sixth round and follow the perceptual radius of sink-1 and sink-2 in the fourth round. Until the discriminant function value Judge(m) > Judge(T ) of the first m round, the threshold training tree is reconstructed with the value of Judge(T ) as the root node, and a new round of threshold training is started. Environment setting and parameter selection In order to verify the effectiveness of the DS-EHA algorithm proposed in this paper, the EEUC (energy-efficient uneven clustering), DSR (dynamic source routing) and DS-EHA algorithms are simulated by MATLAB simulation software under the same conditions, and several performance indicators of the algorithms are compared and analyzed. In the experiment, each algorithm is equipped with two sinks. The data of sensor nodes are Table 1, and the general flowchart of the whole working process is shown in Fig. 4. The value of step adjustment factor directly affects the load of nodes in the network and the performance of threshold training strategy. The smaller the value of , the larger the adjustment range of network, the larger the value of , the smaller the adjustment range of network. If the area adjustment range of the double sink monitoring area is too large, the load of the nodes between the two sinks will fluctuate greatly, and it is difficult for the node load to reach equilibrium. If the adjustment range is too small, the load of double sink nodes will take a long time to reach equilibrium. As shown in Fig. 5, when = 500, the number of surviving rounds of nodes reaches the maximum, and the network has the longest lifetime. The simulation experiments in this paper all take = 500. Simulation experiment analysis The change curve of the energy consumption rate of the nodes in the sink-1 and sink-2 monitoring area with the number of adjustment rounds is shown in Fig. 6. At the beginning, the load difference between the two sink nodes is large. At the beginning of a new round of data collection, the new sensing radius is used to broadcast messages to realize the adjustment of double sink monitoring area, so that the load between the two sink nodes tends to be balanced. In order to verify the superiority of the proposed algorithm in this paper, on the DS-EHA algorithm proposed in this paper with algorithm based on the conventional algorithm of DSR, EEUC algorithm for energy consumption test experiments, specific test results are shown in Figs. 7 and 8, with the increase in the whole network adjustment round number, DS-EHA algorithm nodes remaining energy is always higher than the other two algorithms, DS-EHA algorithm by dynamically adjusting the area of the double sink monitoring area, effective balance the load between the double sink node, prolong the network lifetime. As shown in Fig. 8, in the default simulation environment, the network life of DS-EHA is 28% higher than that of DSR algorithm and 36% higher than that of EEUC algorithm. Step adjustment factor Surviving nodes round number (N) Results and discussion Compared with the simulation results of existing EEUC and DSR energy hole avoidance strategies, the proposed algorithm in this paper has two advantages. Firstly, two areas for monitoring in the network are divided by double sink, and DEAS dynamically adjusts the area of double sink monitoring area according to the load difference between nodes in double sink monitoring area. Hence, the problem of energy hole in double sink network is solved effectively by the way of double sink cooperative work. 
Secondly, a new end-adjustment-threshold training mechanism is proposed, which makes the adjustment of the double sink monitoring areas more efficient and reasonable. Conclusion To solve the problem of uneven node load in the sink monitoring areas under a double sink structure, this paper adopts a strategy that dynamically adjusts the area monitored by each sink. Through the cooperative work of the two sinks, the energy hole problem in the network is effectively avoided and the node load is better balanced. The comparative simulation analysis shows that the proposed algorithm achieves better energy efficiency and a longer network lifetime.
5,868.8
2020-11-02T00:00:00.000
[ "Computer Science", "Engineering" ]
The Mechanical Behavior of a Multispring System Revealing Absurdity in the Relativistic Force Transformation (e mechanical motion of a system consisting of simple springs is investigated from the viewpoint of two inertial observers with a relativistic relative velocity. It is shown that the final displacement of the springs is not measured the same by the observers. Indeed, it is demonstrated that there is an incompatibility between kinematics and dynamics in Einstein’s relativity regarding the force transformation. Introduction is article represents an advanced version of the author's spring paradox [1] in which it was shown that the final displacement of two relatively moving springs is measured differently from the standpoint of different observers as soon as the springs meet each other. Here, we try to make the possible effect of the signal delay due to the constancy of light speed of little or no consequence as a cornerstone in resolving the paradox. Similar to our previous works on the subject [1,2], we insist here, too, that the relativistic dynamics are not easily reconcilable to the relativistic kinematics since there are fundamental deficiencies with the Lorentz transformation for force. Moreover, it is worthwhile to note that some other works show paradoxes of special relativity regarding rotating reference systems for only kinematic effects [3], which is related to the subject of this article. Although the analysis demonstrated in the article is based on the well-known dynamics of special relativity, other dynamics have been introduced in some references of the literature. For instance, it has been shown that different dynamics can be derived for the kinematics of special relativity [4], and thus, our multispring system paradox analysis can be performed under other dynamics too. It is interesting whether the shown paradox holds for all possible dynamics. In addition, the studies in [5,6] develop new mathematical formalisms on special relativity, and hence, some theoretical research may investigate the application of these formalisms to the analysis of paradoxes such as the paradox discussed in this article. On the other hand, there are alternative theories for special relativity [7,8], and the continuation of the research presented in this article may concern checking whether the multispring system paradox also applies to these theories. The Multispring System Paradox Too many very thin identical springs, each with a similar constant of k P ″ , are attached at one end to the circumference of a thin solid cylindrical plate, all being perpendicular to the plane that passes through the plate, and in the other end, the springs touch the floor. Since these springs are fused to the thin plate, we denote them by P. Another spring S with a greater spring constant of K S ′ attaches the center of the plate to the ceiling of the compartment in which the experiment is carried out (see Figure 1). e distance between the floor and ceiling, as well as the free lengths of the springs, is d 0 ′ . On the other hand, it is assumed that the constants of the springs have the following relation: where n ′ is the number of thin P springs. 
In that the thin springs are set in parallel to each other, their net constant is simply calculated to be In other words, the net constant of the P springs is equal to that of S, and thus, it is anticipated that the upward forces of the P springs and the downward force of S are balanced, so that, from the viewpoint of the lab observer M, the thin plate remains motionless at a distance d 0 ′ /2 from the ceiling as well as the floor level (see Figure 2(a)). Now, suppose that the plate starts to rotate about its axis of symmetry (z) along with the thin springs P fused to its perimeter. e surface of the floor is considered to be frictionless so that the other ends of the thin springs can easily slide over it, and the springs are not bent or deformed (see Figure 2(b)). If the tangential velocity u ′ of the cylinder's perimeter-to which the thin springs are attached-is a significant portion of light speed, the constant of each thin spring is reduced by the reciprocal of the Lorentz factor α u′ [1,9]. erefore, we can write (see Appendix A) where k P ′ is the reduced constant for each of the rotating thin springs measured by the lab observer M and Moreover, it is worthwhile to mention that k P ″ is the constant for each of the P springs measured either in its rest frame before the rotation, or in the frame momentarily at rest relative to the spring in the process of rotation. e net constant for the rotating springs is thus calculated as follows: Indeed, the rotating springs are weakened due to the relativistic effects, and, as long as the springs P are assembled in series with S, the plate finds its equilibrium state at a distance smaller than d 0 ′ /2 from the floor. If we denote by Δz S ′ the final displacement of S, the final displacement of the thin springs would then be d 0 ′ − Δz S ′ . When the upward force F P ′ of the plate springs (P) equals the downward force F S ′ of S, the forces are in equilibrium and we have Substituting Equation (4) implies Now, we are interested in seeing if an observer N, who approaches the lab observer M at v along x ′ , would measure Δz S the same as obtained in Equation (6) (Δz S � Δz S ′ ); otherwise, relativity encounters a fatal paradox. Indeed, since the lengths in the transverse directions to the velocity v are left unchanged according to the relativistic kinematics, it is expected that the measurements made by M and N be the same regarding the final displacement of the springs. However, observer N asserts that each thin spring travels in a trochoid curve though the resultant velocity w of each P spring is always perpendicular to the spring's alignment. Indeed, N observes that S approaches him at v, while each of the P springs, according to their angular position in the plate, approaches or recedes from him at w so that we can write where w x and w y are the components of the resultant velocity w both complying with the relativistic velocity addition formula. erefore, if M measures the velocity u ′ of a specific P spring at an angular position of θ ′ to have the components of u x ′ � u ′ sin θ ′ and u y ′ � u ′ cos θ ′ (see Figure 3(a)), the relativistic velocity addition suggests that N measures the corresponding velocities as follows [10] (see Figure 3(b)): Inserting Equations (8) and (9) into Equation (7), we obtain Since all springs' alignments are perpendicular to the velocities of v and w, their constants would be reduced by their corresponding reciprocal Lorentz factor. 
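The velocity addition and the resulting reduction of the constants can be checked numerically. The sketch below composes the boost v along x with the spring velocity (u'_x, u'_y) measured by M and verifies the standard identity α_w = α_v α_u' / (1 + v u'_x / c²), which is the simplification referred to as Equation (18); the specific numbers are illustrative.

```python
import math

C = 1.0  # work in units where c = 1

def alpha(speed: float) -> float:
    """Reciprocal Lorentz factor: alpha = sqrt(1 - v^2/c^2)."""
    return math.sqrt(1.0 - (speed / C) ** 2)

def add_velocities(ux_p: float, uy_p: float, v: float) -> tuple[float, float]:
    """Relativistic composition of a boost v along x with velocity (u'_x, u'_y) measured by M."""
    denom = 1.0 + v * ux_p / C ** 2
    return (ux_p + v) / denom, uy_p * alpha(v) / denom

if __name__ == "__main__":
    v, u_p, theta_p = 0.6, 0.8, math.radians(37.0)   # illustrative values
    ux_p, uy_p = u_p * math.sin(theta_p), u_p * math.cos(theta_p)
    wx, wy = add_velocities(ux_p, uy_p, v)
    w = math.hypot(wx, wy)
    lhs = alpha(w)
    rhs = alpha(v) * alpha(u_p) / (1.0 + v * ux_p / C ** 2)   # the identity used for alpha_w
    print(f"w = {w:.6f}")
    print(f"alpha_w directly             : {lhs:.6f}")
    print(f"alpha_v*alpha_u'/(1+v*u'_x)  : {rhs:.6f}")
```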
In other words, the constant of S is reduced by α v and the constants of the P springs are decreased by α w as seen by N. Now, if, for International Journal of Mathematics and Mathematical Sciences simplicity, the number of the P springs n ′ tends to infinity, observer N can easily use integration to calculate the resultant upward force of the P springs as follows: where dk P ″ is the infinitesimally small constant of each of the infinite number of the P springs measured either in the spring's rest frame before the rotation of the plate, or in the frame momentarily at rest with respect to the spring during the rotation. Moreover, dk P is the infinitesimally small constant of that specific P spring as measured by N, and, as stated earlier, Δz S is the final displacement of S measured by N. Remember that the displacement of the P springs would then be d 0 ′ − Δz S . On the other hand, the number of the P springs can be calculated by dividing the length of the plate's perimeter by the infinitesimal width of each spring: In the process of rotation of the plate, the P springs are weakened (k P ′ < K S ′ ), and thus, the plate finds its equilibrium state somewhere below the previous location. In this case, the final displacement of the P springs is supposed to be Recall that the middle spring of the rotating plate is shown thinner in size due to maximum speed and maximum Lorentz contraction along the x′-axis. Figure 3: (a) e angular position of the i th P spring is shown as P i on the plate being observed by the lab observer M in plane x′y′. Because the tangential velocity of the plate's perimeter is u′, the velocity of P i is decomposed to u x ′ and u y ′ . (b) e angular position of the spring as viewed by the moving observer N. e plate is Lorentz contracted due to its relative velocity of v. Indeed, N asserts that the velocity w of the i th spring has two components of w x and w y complying with the relativistic velocity addition formula. where r ′ is the radius of the plate measured by M. Recall that the plate's perimeter is Lorentz contracted by α u′ during the rotation. Inserting Equation (12) into Equation (1), the differential form of each of the P springs constant is obtained: Substituting Equation (13) into Equation (11), we get for which the integration implies Remember that it is rational to use 2πα u′ instead of 2π for the upper bound of the integrations over θ ′ (see Appendix B). In that spring S has a velocity v from the viewpoint of N, its constant would reduce to K S � α v K S ′ . e corresponding spring force would thus be Observer N claims that the upward force of F P should balance the downward force of F S in order for the plate to remain in a static situation. Using Equations (15) and (16), we have where α w � . On the other hand, using Equation (10), α w can be simplified to See Appendix C for the proof. Substituting u x ′ � u ′ sin θ ′ together with Equation (18) into Equation (17) yields As stated earlier, Δz S which is measured by N must equal Δz S ′ measured by M; otherwise, relativity results in a paradox. Comparing Equation (19) with Equation (6), if Δz S � Δz S ′ , we indeed get 1 2π Unfortunately, the above formula is not always valid for all arbitrary values of u ′ and v, and thus, it seems that relativity includes a null result, at least, in this example. To prove, it suffices to substitute v � 0.6c and u ′ � 0.8c and do the calculations numerically. 
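This numerical check takes only a few lines. Equation (20) is not written out explicitly in the extracted text, but combining the force balances in the two frames with α_w = α_v α_u' / (1 + v u'_x / c²) reduces it to (1/2π) ∫₀^{2πα_u'} dθ' / (1 + v u' sin θ' / c²) = α_u'; the sketch below evaluates both sides for v = 0.6c and u' = 0.8c and reproduces the figures quoted next, so this reconstruction appears consistent with the paper.

```python
import math

C = 1.0  # units with c = 1

def alpha(speed: float) -> float:
    return math.sqrt(1.0 - (speed / C) ** 2)

def lhs_eq20(v: float, u_p: float, n: int = 200_000) -> float:
    """(1/2pi) * integral_0^{2*pi*alpha_u'} dtheta' / (1 + v*u'*sin(theta')/c^2), trapezoidal rule."""
    upper = 2.0 * math.pi * alpha(u_p)
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        theta = i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight / (1.0 + v * u_p * math.sin(theta) / C ** 2)
    return h * total / (2.0 * math.pi)

if __name__ == "__main__":
    v, u_p = 0.6, 0.8
    print(f"left-hand side : {lhs_eq20(v, u_p):.3f}")   # ~0.506
    print(f"right-hand side: {alpha(u_p):.3f}")          # 0.600
```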
In this case, the left-hand side of Equation (20) equals 0.506, whereas the right-hand side equals 0.600, which are not equal to each other. is counterexample shows a deficit in special relativity. However, one also can take the integral analytically to show that Equation (20) is not valid for all arbitrary values of u ′ and v. An important point with this problem is that if the forces are transmitted via some sort of signaling from the P springs towards the center of rotation of the plate to which one end of S is attached, the arrival of the signals to the center is simultaneous from the viewpoint of both M and N. is simultaneity makes the spring S react to all of the signals sent by the P springs instantly as viewed by both of the observers; otherwise, it is expected that the plate is deformed in shape due to the signal delays. Important Notes regarding This Paradox To reduce the reader's confusion, we gather some important remarks concerning this problem and the possible resolutions: (1) Remember that this problem is not connected to some aspects of the Ehrenfest paradox [11] according to which a fast-rotating disc cannot approach the light speed since the centrifugal pressure exceeds the shear modulus of the material of which the plate is made. In our problem, indeed, it is not necessary for the tangential velocity u ′ of the plate to have a value close to c in order to encounter a paradox. at is, if u ′ is much smaller than the speed of light, the paradox is still valid though the difference in Δz and Δz ′ is very small. (2) e centrifugal force exerted on the P springs due to the rotation of the plate may bend the springs slightly outward the center of rotation; however, this phenomenon can be neglected by assuming that the said force is not great, or the springs' stiffness is great so that they are not easily bent out of shape. (3) e uniform distribution of the P springs as seen by M is no longer uniform from the viewpoint of N (see Figure 4). Remember that this phenomenon has already been discussed in the literature [12]; however, it is unlikely that this nonuniformity can cause the plate's normal to incline relative to the z direction as viewed by N. In fact, one can claim that the resultant force of the upper springs would certainly balance that of the lower ones measured by M (see Figure 4(a)) such that the plate remains parallel to the floor level, whereas N may claim that, due to the nonuniform distribution of the P springs, the mentioned forces are not balanced, which makes the plate become oblique (see Figure 4(b)). Recall that if the plate is inclined in the rest frame of N, in that it is not measured inclined in that of M, this can bring about another paradox besides the main paradox discussed earlier. However, the author guesses that the increase in the density of the springs takes place for those having greater tangential speeds from N's point of view, and thus, the spring constants would have smaller values. On the other hand, the lesser the springs' density, the slower they move and the greater their constants. erefore, it is possible that the increase in springs' density compensates for the decrease in their constants, and vice versa, so that the upper and lower resultant forces would finally balance each other, which prohibits the plate from additional rotation. 
It is also possible that this inclination is somehow related to the disputatious arguments about Mansuripur's article where a similar nonuniformity in the distribution of some point-like electrical charges causes the moving observer to detect a possible torque on a current-carrying loop of wire, whereas the lab observer does not, due to the uniform distribution of the charges [13][14][15][16]. A comprehensive discussion is beyond the scope of this article. (4) It is not mandatory to consider an infinite number of the P springs. One can repeat the calculations using any finite number of springs. (5) Instead of involving the viewpoint of the observer in the rest frame of the P springs, one can directly apply the Lorentz transformation for force between M and N to finally reach Equation (20) (see Appendix D). (6) Remember that this article does not question the relativistic version of Hook's law, but rather the relativistic transformation for force in its general form. Hence, one can replace the springs with electromagnetic fields and electrical charges in a way similar to [2] in order to rewrite the paradox (see Appendix D). Conclusion As a complementary to the author's previous works regarding the relativistic force transformation, this article shows, too, an inconsistency between the kinematics and dynamics in relativity. : e spring P is attached to a thin, frictionless rod, which has pierced the upper plate of a parallel-plate capacitor. Since the spring force of F P ″ cancels out the field force of F E″ ″ , the tiny charge q ″ , which is attached to the lower end of the rod, is suspended statically from the standpoint of the lab observer O. Both the rod and the pillar are made of nonconductive materials. On the other hand, because the spring is supposed to be located far outside the capacitor, no EM field affects it. Observer M moves at u′ along −x″ as viewed by O. International Journal of Mathematics and Mathematical Sciences 5 is a uniform electric field of E z″ ″ � −E ″ . e rod is attached to a spring (P) with a constant of k P ″ in one end, and in the other end, it is attached to a point-like, positively charged object (q ″ ). It is assumed that the rod is frictionless and can thus easily move up and down along z ″ . Both the spring and the charged object are considered massless, and the experiment is carried out away from any gravitational field. e spring in turn is attached to the ceiling of the lab in the upper end (see Figure 5). If the spring is in its free length position (E z″ ″ � 0), it is supposed that the tiny charge q ″ is located very close to the positively charged plate of the capacitor. erefore, when the capacitor is charged, the spring is stretched with a displacement of Δz ″ until the electrical force of the field (F E″ ″ ) cancels out the spring force of F P ″ . In fact, when the oscillations damp out, q ″ finds its equilibrium state at Δz ″ from the upper plate as seen by the lab observer, which, this time, we denote by O. We thus can write the following: We are now interested to find the spring constant of k P ′ from the standpoint of the moving observer M relative to which observer O, as well as the system of spring-capacitor, indeed, moves along + x ′ at u ′ . 
Using the Lorentz transformation for EM fields, observer M, however, detects a magnetic field of Equation (A.2) infers the corresponding Lorentz force of which is exerted on q ″ along + z ′ , and Equation (A.3) implies an electric force of F E′ ′ � q ″ E z′ ′ � −c u′ E ″ q ″ , which is exerted on q ″ along −z ′ . e resultant force of F E′−B′ ′ due to the EM fields is calculated as follows: On the other hand, observer M calculates the spring force to be Because M, as well as O, admits the static situation of q ″ , the above forces shall cancel out each other, and hence, we have Moreover, the traditional Lorentz transformation asserts that the lengths perpendicular to the motion direction are left unchanged, otherwise paradoxes arise. erefore, we have Substituting Equation (A.1) together with Equation (A.7) into Equation (A.6) yields Equation (3) is thus proved explicitly. B. Regarding the Upper Bound of the Integrations It is evident that the number of springs must remain the same before and during the rotation (n ″ � n ′ ). at is to say, the number of springs is independent of whether or not the Lorentz contraction occurs. Before the rotation, observer M calculates this number to be Now, if we equate Equation (B.1) with Equation (12), we get where C is the integration constant. is constant can be chosen to be zero inasmuch as for θ ′ � 0, we set θ ″ � 0. erefore, we have It is evident that a complete period occurs on the interval (0, 2π) for θ ″ as measured by M before the rotation. To find the upper bound of all integrations over θ ′ , it suffices to insert θ ″ � 2π into Equation (B.4): On the other hand, in the last appendix, we have introduced an alternative approach to this paradox, which does not involve using spring constants or the Lorentz contraction directly. ere, another method is demonstrated for proving the use of 2πα u′ instead of 2π as the upper bound of the said integrations (see Appendix D). C. Derivation of Equation (18) Equation (18) is proved in this appendix. Using Equation (10), we can write 6 International Journal of Mathematics and Mathematical Sciences Recall that the cross sign "×" indicates the usual multiplication rather than the vector product. Considering the fact that u ′ 2 � u ′2 x + u ′2 y , we continue which finally yields erefore, Equation (18) is proved. D. Eliminating the Use of Spring Constants Here, not only we directly use the Lorentz transformation for force to relate the viewpoints of M and N, but we eliminate the use of spring constants. Assume that we replace the spring S shown in Figure 1 with a cylinder inside which there is a uniform electric field. It is supposed that an electrically charged object acts as a piston inside this cylinder. (In Figure 5, if we eliminate the spring P, the remaining capacitor is similar to a cylinder inside which the charged object (q ″ ) and the rod behave as a piston, which very well depicts our purpose.) Indeed, we have produced some sort of spring, which can exert a constant force regardless of the displacement of the charged piston. If the P springs, in the article's main problem (see Figure 1), are also replaced by some similar cylinders, though each being very thinner in size and having an infinitesimally small charged piston, the spring S and each of the P springs, respectively, exert the forces of F S ′ and dF P ′ from the standpoint of observer M. 
Now, if M claims that the system is balanced and thus the forces cancel out each other, we can write e Lorentz transformation for force asserts that N calculates the corresponding forces as follows ( [10] p. 147): If relativity excludes any null result, N would also claim that the forces would balance each other; otherwise, the plate would accelerate upward or downward along z. erefore, the static situation implies (D.5) Inasmuch as dF P ′ is independent of v and u x ′ , we can write (D.6) Finally, inserting Equation (D.1) into Equation (D.6) yields (D.7) Since Equation (D.7), similar to Equation (20), has two unacceptable solutions of v � 0 or u x ′ � 0 ⟶ u ′ � 0, it shows that observer N, contrary to M, believes that the plate would accelerate along z. Remember that if we first insert u x ′ � u ′ sin θ ′ into Equation (D.7) and then integrate both sides of Equation (D.7) with respect to θ ′ from 0 to 2πα u′ , we reach exactly Equation (20). Remember that these calculations are also applicable to the original problem including springs provided the difference of the final displacements of the springs measured by M and N are small so that the related forces remain nearly unchanged. On the other hand, we are not worried about how the electromagnetic fields change as viewed by N in this later example because whatsoever they are, they must produce the resultant forces complying with the Lorentz transformation for force. erefore, it is also possible to use, instead of springs, a cylinder filled with an ideal gas along with a moveable piston regardless of the type of the thermodynamic process according to which the piston compresses/decompresses the gas contained within the cylinder and regardless of how the thermodynamic parameters such as temperature and pressure are defined relativistically. It is because the calculations done in this appendix are general for all forces, which can be applied to any problem regardless of the agent(s) of the involved force(s). Data Availability Data sharing is not applicable to this article as no new data were created or analyzed in this study. Conflicts of Interest e author declares no conflicts of interest.
5,612.6
2021-12-11T00:00:00.000
[ "Physics" ]
Study of the convective fluid flows with evaporation on the basis of the exact solution in a three-dimensional infinite channel The solution of special type of the Boussinesq approximation of the Navier – Stokes equations is used to simulate the two-layer evaporative fluid flows. This solution is the 3D generalization of the Ostroumov – Birikh solution of the equations of free convection. Modeling of the 3D fluid flows is performed in an infinite channel of the rectangular cross section without assumption of the axis-symmetrical character of the flows. Influence of gravity and evaporation on the dynamic and thermal phenomena in the system is studied. The fluid flow patterns are determined by various thermal, mechanical and structural effects. Numerical investigations are performed for the liquid – gas system like ethanol – nitrogen and HFE-7100 – nitrogen under conditions of normal and low gravity. The solution allows one to describe a formation of the thermocapillary rolls and multi-vortex structures in the system. Alteration of topology and character of the flows takes place with change of the intensity of the applied thermal load, thermophysical properties of working media and gravity action. Flows with translational, translational-rotational or partially reverse motion can be formed in the system. Introduction The fluid flows with an interface being under action of the co-current gas fluxes and accompanied by evaporation or condensation have been the subject of extensive theoretical and experimental investigations in the last few decades (for a review, see [1]). The flows are applied in systems of the fluidic cooling or thermal controlling of highly efficient semiconductor equipment, in membrane evaporators, distillers, thermal coating applications technologies, etc. Precisely forecasting the fluid dynamics in the processes requires comprehensive analysis based on modeling the two-layer flows with evaporation. Results of the theoretical study can help one to clarify physical aspects of evaporative convection and phenomena in the liquid media caused by the gas flow. One way of modeling real fluid flows is obtaining the exact solutions. It allows one effectively and rapidly to get some evaluation characteristics or to forecast outcome of experiments on preliminary stages of working-out [2]. Finally, in multiparameter problems the exact solution certainly gives information on degree and character of influence of various factors and provides possibility to make more precise a mathematical model. In present work the character and structure of the joint flow of evaporating liquid and cocurrent laminar gas flux are investigated on the basis of new exact solution of the Boussinesq [3,4] for three dimensional case. The group nature of the Birikh solutions [5] allowed one to generalize the solutions both for 3D convection problem in the non-axis-symmetrical case [6] and for problems of evaporative convection in the domains with internal interfaces admitting phase transition [7]. The invariant and partially invariant solutions are used to study the fundamental and secondary features of physical processes described by the convection equations. They imply the natural properties of space-time symmetry and symmetry of spatial fluid motion. It should be noted that the asymptotic character of the Birikh type solution has been justified in [8], where the thermocapillary gravitational convection in a long horizontal cavity has been studied experimentally and numerically on the basis of a 2D problem. 
Complete investigation of properties of the constructed solutions expects the study of the stability characteristics and their perturbation spectrum. In [9,10] the stability of the two-dimensional convective flows with evaporation was investigated. Influence of intensity of the thermal loads on the external channel walls, gravity and linear thicknesses of the working media layers on the stability characteristics have been studied in the framework of the problem statement without taking into account the Soret effect. The aim of the paper is to carry out mathematical modeling of the three-dimensional convective fluid flows with evaporation as well as to analyze the influence of the gravity and external thermal load on the flow patterns. The study is performed in the framework Oberbeck -Boussinesq model taking into account the Soret and Dufour effects in vapor -gas phase. Generalization of the Ostroumov -Birikh solution for 3D flows with evaporation at interface Let the Cartesian coordinate system be chosen so that the gravity acceleration vector g is directed opposite to the Ox axis (g = −gi, i is the unit vector of Ox). Two fluid layers are separated by the thermocapillary interface Γ given here by the equation x = 0 (see fig. 1). Let the linear size of the flow domain in the y-direction h be the characteristic length. The characteristic values for the coupled problem of the liquid-gas flows are introduced on the basis of the characteristics of the liquid so that u * , T * and p * = ρ 1 u 2 * are the characteristic velocity, temperature drop and pressure, respectively. Viscous incompressible fluids (liquid and gas -vapor mixture) fulfill the infinite horizontal layers Ω 1 and Ω 2 ( fig. 1). The boundaries of the domains Ω 1 , Ω 2 are the fixed impermeable walls. The stationary three-dimensional convective flows of j-th medium (here and subsequently j = 1, 2 relate to the liquid and gas -vapor mixture, respectively) are described by the Oberbeck -Boussinesq approximation of the Navier -Stokes equations [7,11]. The Dufour and Soret effects (or the effects of diffusive thermal conductivity and thermodiffusion [12,13,14]) are taken into account in the gas phase. We suppose that the vapor is a passive admixture, the vapor diffusion in the gas phase is described by the diffusion equation. We construct the exact solution of the Oberbeck -Boussinesq, which is characterized by dependence of the components of the liquid v 1 = (u 1 , v 1 , w 1 ) and gas velocity v 2 = (u 2 , v 2 , w 2 ) vectors on the transverse coordinates (x, y) (see [7]). The temperature functions T j , pressure p j and vapor concentration C have the terms Θ j , q j , Φ similarly depending on the transverse coordinates (x, y): Here Re = u * h/ν 1 is the Reynolds number, Gr = β 1 T * gh 3 /ν 2 1 is the Grashof number, Ga = gh 3 /ν 2 1 is the Galilei number,ρ = ρ 2 /ρ 1 ,β = β 2 /β 1 are the ratios of the densities ρ j and coefficients of thermal expansion β j of the gas and liquid, respectively; ν j is the coefficient of kinematic viscosity, γ is the concentration coefficient of the gas density. The coefficients A and B determine the constant longitudinal temperature and concentration gradients along the interface. If A * and B * are the dimensional longitudinal gradients of temperature and concentration functions then The following relation between A and B will take place B = −C * ε A because of the interface condition for saturated vapor concentration. 
(Here C * is the saturated vapor concentration at T 2 = T 0 ;ε = εT * , ε = λµ/(R * T 2 0 ), λ is the latent heat of evaporation, µ is the molar mass of the evaporating liquid, R * is the universal gas constant; for details and choice of T 0 see [2,9].) The interface boundary conditions are formulated on the basis of the conservation laws and some additional assumptions [6,7,11]. The kinematic and dynamic conditions (projection on the tangential and normal vectors to the interface) should be fulfilled on the interface x = 0 [6,7]. Conditions of continuity of tangential velocities and temperature are assumed to be fulfilled on the thermocapillary interface x = 0. At x = 0 the heat transfer condition with respect to the diffusive mass flux due to evaporation and the vapor mass balance equation are formulated. These relations take into account the Dufour and Soret effects. The linearized form of an equation for saturated vapor concentration at interface is used as a condition for the vapor concentration function at interface [9]. It is a consequence of the Clapeyron -Clausius equation and the Mendeleev -Clapeyron equation for an ideal gas. On the fixed impermeable walls of the channel the no-slip conditions for velocity fields and the conditions of thermal insulating of the lateral walls are imposed. The case of absence of vapor flux on the upper and lateral rigid boundaries is studied in the present statement. Both conditions provide a fulfillment of the conditions for full heat flux and for full mass flux with respect to the Dufour and Soret effects, respectively, on the fixed walls [11,15]. Numerical investigations The analytical calculations for construction of the exact solutions of the three-dimensional convection problem with evaporation at the interface is complemented by the numerical investigations. The numerical algorithm of solving of each 2D problems is based on the longitudinal transverse finite difference scheme known as the method of alternating directions [6,11,16]. Numerical investigations are performed for the liquid -gas system like ethanol -nitrogen and HFE-7100 -nitrogen with the physicochemical properties described in [17] (see also [2,7,9]). The values of other parameters used by simulations including the coefficients characterized the Soret and Dufour effects in the gas -vapor layer are chosen similarly to that described in [2,7,9]. We investigate the flow topology computed with following values of the Grashof number asymmetrical swirls in the liquid layer occurs in the ethanol -nitrogen system in microgravity conditions. Two of the vortices are deformed into "angled" vortexes, but their cores stay close to the interface in "corners" near the lateral walls. Liquid particles from the hot spot move along the interface due to the Marangoni effect and go down near the lateral walls because of unstable thermal stratification. Thus, the upper part of the "angled" vortex is generated by thermocapillary effect, but the side part appears due to convective mechanism. In HFE-7100nitrogen system in the case with more intensive longitudinal temperature regimes (see fig. 5) the liquid flows are characterized by four separate vortices under microgravity and six swirls in the terrestrial conditions. For the system the flow regimes differ only in the liquid layers (compare fig. 4 and 5), whereas the fluid flow structures are rebuilt in both upper and lower layers in ethanol -nitrogen system ( fig. 2 and 3). 
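The alternating-directions splitting mentioned at the start of this section can be illustrated on a model problem. The sketch below applies one Peaceman–Rachford ADI step to the 2D heat equation on the unit square with homogeneous Dirichlet walls; it is a generic illustration of the longitudinal-transverse splitting idea under these simplifying assumptions, not the authors' actual scheme for the coupled two-layer convection problem.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b, super-diagonal c, right-hand side d)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One Peaceman-Rachford step for u_t = kappa*(u_xx + u_yy) with zero Dirichlet boundaries."""
    n = u.shape[0]
    a = np.full(n - 2, -r); b = np.full(n - 2, 1 + 2 * r); c = np.full(n - 2, -r)
    a[0] = c[-1] = 0.0
    half = u.copy()
    for j in range(1, n - 1):            # implicit sweep in x, explicit in y
        rhs = u[1:-1, j] + r * (u[1:-1, j + 1] - 2 * u[1:-1, j] + u[1:-1, j - 1])
        half[1:-1, j] = thomas(a, b, c, rhs)
    out = half.copy()
    for i in range(1, n - 1):            # implicit sweep in y, explicit in x
        rhs = half[i, 1:-1] + r * (half[i + 1, 1:-1] - 2 * half[i, 1:-1] + half[i - 1, 1:-1])
        out[i, 1:-1] = thomas(a, b, c, rhs)
    return out

if __name__ == "__main__":
    n, kappa, dt = 41, 1.0, 1e-4
    h = 1.0 / (n - 1)
    r = kappa * dt / (2 * h ** 2)
    x = np.linspace(0.0, 1.0, n)
    u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))   # eigenmode, decays as exp(-2*pi^2*kappa*t)
    for _ in range(100):
        u = adi_step(u, r)
    print(f"max after t = 0.01: {u.max():.4f} (exact {np.exp(-2 * np.pi ** 2 * 0.01):.4f})")
```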
In all cases, the flow patterns are symmetric with respect to the plane y = 0.5 in both phases. Under normal gravity the corresponding Grashof numbers are Gr = 47000 for the ethanol – nitrogen system and Gr = 1220000 for the HFE-7100 – nitrogen system. Only minor quantitative differences are observed when comparing the flows under normal gravity and microgravity for both liquids. However, the quantitative differences between the liquid and gas flows are larger in the ethanol – nitrogen system than in the HFE-7100 – nitrogen system. Conclusions A solution of special type of the 3D stationary coupled problem of gravitational and thermocapillary convection with evaporation is used to describe the convective flows in an infinite channel of rectangular cross section without assuming axial symmetry of the flow domains. This solution has a group nature and is the analogue of the Ostroumov – Birikh solution of the convection equations, here additionally including the Dufour and Soret effects in the gas phase. Considering the infinite channel under the action of a constant longitudinal temperature gradient, we study a model problem of evaporative convection and interpret the constructed solution as one describing the flow in the working area of a sufficiently long cavity. The flows of both fluids (the liquid and the gas – vapor mixture) modeled with the help of the exact solution can be characterized as translational or translational-rotational motions realized in various forms. Qualitative and quantitative differences are confirmed for the flows of the different working fluids (the ethanol – nitrogen and HFE-7100 – nitrogen systems). The numerical investigations make it possible to analyze the possible flow structures with respect to the intensity of the gravitational field and of the longitudinal temperature gradients created on the interface. The intensity of the liquid flows depends on the intensity of the gravitational field and on the interface temperature regime. Topologically different flow structures are formed due to the combined influence of the thermocapillary and convective mechanisms and the evaporation/condensation process, which affect the thermal pattern of the flows.
2,851.8
2017-09-01T00:00:00.000
[ "Engineering", "Physics", "Environmental Science" ]
Primary cytomegalovirus infection during pregnancy and congenital infection: a population-based, mother–child, prospective cohort study Objective This study assessed maternal cytomegalovirus antibodies, and the occurrence of primary and congenital cytomegalovirus infections, and risk factors of congenital infection after a maternal primary infection. Study design We included 19,435 pregnant women in Japan, who were tested for serum cytomegalovirus antibodies before 20 gestational weeks. Immunoglobulin (Ig) G avidity was evaluated in women with both IgG and IgM antibodies; tests were repeated at ≥28 gestational weeks among women without IgG and IgM antibodies. Result Primary and congenital infections were 162 and 23 cases, respectively. The risk ratios for congenital infection were 8.18 (95% confidence interval: 2.44–27.40) in teenage versus older women, and 2.25 (95% confidence interval: 1.28–3.94) in parity ≥ 2 versus parity ≤ 1. Of 22 live birth congenital infection cases, three had abnormal neurological findings. Conclusion We demonstrated teenage and parity ≥ 2 pregnant women as risk factors of post-primary congenital infection. INTRODUCTION Cytomegalovirus (CMV) is a common pathogen that causes congenital infection, infection-related malformations, and neurological disabilities. Congenital CMV infections account for up to 10% of cases of cerebral palsy [1]. Congenital CMV is a leading cause of non-genetic sensorineural hearing loss (SNHL) at birth, accounting for 25% of all causes. Moreover, congenital CMV accounts for 25% of late-onset SNHL occurring at the age of four years [2]. Maternal CMV infections are divided into primary and non-primary infections (both occurring during pregnancy). Primary infection is the first infection a pregnant woman is exposed to. Primary infection is indicated by seroconversion or low immunoglobulin (Ig) G avidity in maternal antibody tests. Nonprimary infections comprise both reinfection and reactivation of infection before pregnancy. Reinfection is caused by a CMV strain that is different from the one before pregnancy, whereas reactivation is caused by the endogen latent strain that existed before pregnancy [3]. A primary CMV infection induces a CMV-specific IgM antibody production, followed by a CMV-specific IgG antibody production. Despite the low avidity of a specific IgG antibody in the first weeks, it gradually increases after a primary CMV infection. A CMV-specific IgM antibody has a high false-positive rate, with <30% of women with positive IgM having a primary infection [4]. However, low IgG avidity is a sensitive and specific marker of primary infection [5]. Cases of IgM antibody combined with low IgG avidity are suspected of having a primary infection within the preceding 2-4 months of pregnancy [4]. The presence of the CMV IgM antibody combined with low IgG avidity is considered to have the same diagnostic value as CMV antibody seroconversion, which shows exact primary CMV infection. Lazzarotto, et al. [4] found that the incidence of fetal or newborn congenital CMV infections was very similar in both pregnant women with positive IgM antibody and low IgG avidity and those with antibody seroconversion (25.0% in women who were IgM positive with low IgG avidity and 30.3% in women with antibody seroconversion). The IgG avidity assay used in the current study appears to have a similar sensitivity for primary CMV infection as the assay used in the previous study. Ebina et al. 
[6] reported an 88.9% sensitivity, 96.2% specificity, 27.6% positive predictive value, and 99.8% negative predictive value, for the IgG avidity for congenital CMV infection used in the current study. The incidence of primary CMV infection is overwhelmingly referred to in only antibody seroconversion, which occurs during pregnancy in seronegative pregnant women [7]. The incidence of primary infection during pregnancy is rarely mentioned in both sets of positive IgM and low IgG avidity and antibody seroconversion. Alternatively, for the incidence of congenital CMV infection, the incidence has been mentioned without making any distinction between the maternal primary and non-primary CMV infections. In this population-based mother-child prospective cohort study on maternal CMV antibody screening, we demonstrated the incidence of primary and congenital CMV infection after a maternal primary infection, which occur during pregnancy. In addition, we studied the risk factors of the occurrence of congenital CMV infection after maternal primary infection. SUBJECTS AND METHODS Maternal CMV antibody screening program in Mie, Japan since 2013 We have been conducting maternal CMV screening programs in Mie, Japan, in the context of a population-based, observational, and prospective cohort study (UMIN000011922) since 2013. This study was conducted in accordance with the Declaration of Helsinki. We obtained ethical approval from the Clinical Research Ethics Review Committee of the Mie University Hospital (#2610) and obtained informed consent from all participants. We have previously reported the 2013-2015 maternal antibody screening program results [8]; here, we continued maternal antibody screening. Serum CMV IgG and IgM antibody tests using Seiken CMV IgG and IgM assays (Denka Seiken, Tokyo, Japan) were performed on all participants before 20 weeks of gestation. In the Seiken CMV IgG and IgM assays, the enzyme immunoassay (EIA) method was adopted. The threshold levels of both CMV IgG and IgM antibodies were determined based on the manufacturer's protocol: CMV IgG negative, 0-1.9 EIA value; borderline, 2.0-3.9 EIA value; and positive, ≥4.0 EIA value; and CMV IgM negative, 0-0.79 index; borderline, 0.80-1.20 index; and positive, ≥1.21 index. For the participants with IgG positive or borderline (+ or + −) and IgM positive (+) results, additional IgG avidity tests were performed, using residual serum samples from the IgG and IgM antibody tests. An Enzygnost CMV IgG assay (Siemens Healthcare Diagnostics, Tokyo, Japan) was used and the urea washing method was utilized in the Aisenkai Nichinan Hospital, Miyazaki, Japan [7]. Women with low IgG avidity results (35% or lower on the IgG avidity index) were considered as having primary infection in early pregnancy during the periconceptional period or a high risk of subsequent congenital infection; alternatively, women with high IgG avidity results (>35% of IgG avidity index) were considered as having primary infection dating >3 months pre-conception or a low risk of subsequent congenital infection. Regarding participants with IgG negative (−) and IgM negative (−) results, precautionary measures (such as avoiding close contact with saliva or urine of young children and condom usage during sexual intercourse during pregnancy) were taken to prevent primary infection. We additionally performed repeated IgG and IgM antibody tests at ≥28 weeks of gestation. 
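The branching of the screening protocol described above (the EIA thresholds for IgG and IgM, the 35% IgG avidity cut-off, and the follow-up action for each antibody pattern) can be written as a small classifier. This is a sketch of the decision logic as stated in the text, with illustrative input values; it is not the study's software.

```python
def igg_class(eia: float) -> str:
    """Seiken CMV IgG: negative 0-1.9, borderline 2.0-3.9, positive >= 4.0 (EIA value)."""
    if eia < 2.0:
        return "negative"
    return "borderline" if eia < 4.0 else "positive"

def igm_class(index: float) -> str:
    """Seiken CMV IgM: negative 0-0.79, borderline 0.80-1.20, positive >= 1.21 (index)."""
    if index < 0.80:
        return "negative"
    return "borderline" if index <= 1.20 else "positive"

def screening_action(igg_eia: float, igm_index: float, avidity: float | None = None) -> str:
    """Map a first-trimester serology result to the follow-up used in this screening program."""
    igg, igm = igg_class(igg_eia), igm_class(igm_index)
    if igg in ("positive", "borderline") and igm == "positive":
        if avidity is None:
            return "run IgG avidity test"
        return ("suspected periconceptional primary infection" if avidity <= 35.0
                else "primary infection likely >3 months pre-conception")
    if igg == "negative" and igm == "negative":
        return "seronegative: counsel on hygiene, retest IgG/IgM at >=28 weeks"
    if igg == "negative":
        return "isolated IgM: retest IgG/IgM after >=2 weeks"
    return "non-primary infection pattern: low risk of subsequent congenital infection"

if __name__ == "__main__":
    for case in [(6.1, 2.4, 28.0), (6.1, 2.4, 60.0), (0.5, 0.3, None), (0.5, 1.5, None), (8.0, 0.4, None)]:
        print(case, "->", screening_action(*case))
```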
Women with IgG and/or IgM seroconversion were considered as having primary infection after the first trimester of pregnancy or a high risk of subsequent congenital infection; alternatively, women with neither IgG nor IgM seroconversion had remained free from CMV infection or were seronegative with a low risk of subsequent congenital infection. For the participants with IgG (−) and IgM (+ or + −) results, we performed repeated IgG and IgM antibody tests after two or more weeks, as per the instruction manual of assays in the case of sole IgM detection. Women with IgG seroconversion were considered as having primary infection or a high risk of subsequent congenital infection; alternatively, women with no IgG seroconversion were considered as having no infection or a low risk of subsequent congenital infection. Participants with IgG (+ or + −) and IgM borderline or negative (+ − or −) results were considered as having non-primary infection or a low risk for subsequent congenital infection. Diagnosis of congenital CMV infection in infants whose mothers were considered as having primary infection during pregnancy For participants considered as having primary infection during pregnancy, we collected either an amniotic fluid or a urine sample of their newborns within one week after birth. In addition, using a fresh liquid sample, we tested samples using the aforementioned real-time polymerase chain reaction (PCR) method (Mie University Hospital, Mie, Japan) [8]. Infants with CMV DNAs in the PCR method were diagnosed with congenital CMV infection. In infants with congenital infection, we performed a viral isolation method using the CMV DNAs-positive neonatal urine samples (National Mie Hospital, Mie, Japan). Moreover, we studied the incidence (%) of congenital infection following maternal primary infection in each age group (teens, 20 s, and 30-40 s) and each parity group (para 0, para 1, and para ≥ 2). Next, we studied the risk ratio of the incidence of congenital infection. Neurological tests in congenitally infected infants after diagnosis of congenital CMV infection For congenitally infected infants with abnormal findings at birth, including low birth weight, small for gestational age, microcephaly, hepatosplenomegaly, jaundice, petechia, or a "refer" result in the newborn hearing screening (NHS), we performed neurological tests, including brain magnetic resonance imaging (MRI), auditory brainstem response (ABR), and funduscopy during the neonatal period. However, infected infants who neither showed abnormal findings at birth nor a "refer" result in the NHS were neurologically tested at 18 months. To calculate the statistical significance, we used Fisher's exact or Chisquared tests. A p < 0.05 was considered statistically significant. Analyses were performed by SAS Enterprise Guide 6.1 software (SAS Institute, Cary, NC, USA). (Table 1). There were~50 obstetrical institutions in Mie, Japan, and 49,000 deliveries during said period. We studied 40% of the women in the population as a large-scale cohort. Maternal CMV antibody screening Out of 19,435 participants, 1037 (5.34%) had IgG (+ or + −) and IgM (+) results, of which, 115 (11.09%) showed low IgG avidity results, hence they were considered as having primary infection in early pregnancy during the periconceptional period. The other 922 women showed high IgG avidity results and were considered as having primary infection dating >3 months pre-conception. 
Out of 19,435 participants, 6510 (33.50%) showed IgG (−) and IgM (−) results, of whom 4082 were retested for IgG and IgM antibodies; 47 (1.15%) showed IgG and/or IgM seroconversion and were considered as having a primary infection after the first trimester of pregnancy. Of those, 16 (0.39%) showed only IgM seroconversion while 31 (0.76%) showed IgG seroconversion; 22 showed both IgG and IgM seroconversion and nine showed isolated IgG seroconversion. The remaining 4035 (98.85%) women showed neither IgG nor IgM seroconversion and had either remained free from CMV infection or were seronegative. Out of the 19,435 participants, 126 (0.65%) showed IgG (−) and IgM (+ or + −) results, of whom 98 were retested for IgG and IgM antibodies after two or more weeks. None of the 98 women showed IgG seroconversion, and they were considered to be without infection. Out of 19,435 participants, 11,762 (60.52%) showed IgG (+ or + −) and IgM (+ − or −) results and were considered as having non-primary infections (Figs. 1, S1).
Congenital CMV infection in infants whose mothers were considered as having primary infection during pregnancy
We collected neonatal urine or amniotic fluid samples from 162 pregnant women considered to be primarily infected during pregnancy; 114 urine samples and one amniotic fluid sample were collected from women with low IgG avidity, and 47 urine samples were collected from women with IgG and/or IgM seroconversion from the initial IgG (−) and IgM (−) results during early pregnancy. Out of the 115 low IgG avidity samples, eight (seven urine and one amniotic fluid) were positive for CMV DNA and six (all urine) were positive for cytopathic effect in the viral isolation method. Out of 47 IgG and/or IgM seroconversion urine samples, 15 and 13 were positive for CMV DNA and cytopathic effect, respectively, totaling 8 and 15 congenital infections in women with low IgG avidity results and in those with IgG and/or IgM seroconversion, respectively (Table 2). There was a significant difference (p < 0.001) in the incidence of a subsequent congenital infection between women with primary infection in early pregnancy during the periconceptional period and women with primary infection after the first trimester of pregnancy (7.0% and 31.9%, respectively). In the eight pregnant women with low IgG avidity and subsequent congenital infection, the median CMV IgM titer was 7.23 index (range: 5.41-10.53 index). Keeping 100% sensitivity for these eight pregnant women, the positive predictive value for fetal congenital infection was 7.1% at an IgM titer level ≥1.21 index, 9.4% at ≥2.00 index, and 11.9% at ≥4.00 index. The incidence of congenital infection following maternal primary infection was 0.86% in teenage pregnant women (three out of 350), 0.11% in women in their 20s (ten out of 8765), and 0.10% in women in their 30s-40s (ten out of 10,320). The incidence in teens was significantly higher (p < 0.001), and the risk ratio of the incidence of congenital infection following maternal primary infection was 8.18 (95% confidence interval: 2.44-27.40) in teens compared to women in their 20s-40s. Furthermore, the incidence of congenital infection following maternal primary infection was 0.11% in pregnant women of para 0 (ten out of 9115), 0.07% in para 1 (five out of 7038), and 0.27% in para ≥ 2 (eight out of 3012). The incidence in para ≥ 2 was significantly higher (p = 0.03), and the risk ratio of the incidence of congenital infection following maternal primary infection was 2.25 (95% confidence interval: 1.28-3.94) in para ≥ 2 compared to para 0 and para 1 (Fig. S2).
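The reported effect sizes can be checked with a simple 2×2 calculation. The sketch below (Python with SciPy, offered only as an illustration; the original analyses were run in SAS with Fisher's exact or chi-squared tests) uses the teenage versus 20s-40s counts given above; the Wald log-scale confidence interval is an assumption, since the paper does not state which interval method was used.

```python
import math
from scipy.stats import fisher_exact

# Counts reported above: congenital infections / pregnancies screened.
# Exposed group: teenage mothers; reference group: mothers in their 20s-40s.
a, n1 = 3, 350       # teenage: infected, total
c, n2 = 20, 19085    # 20s-40s combined: infected, total
b, d = n1 - a, n2 - c

# Risk ratio with a Wald (log-scale) 95% confidence interval -- one common
# choice; the paper does not specify which interval method it used.
rr = (a / n1) / (c / n2)
se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

# Exact test on the 2x2 table, analogous to the Fisher's exact tests
# described in the Methods.
_, p = fisher_exact([[a, b], [c, d]])

print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), Fisher p = {p:.2g}")
# With these counts this reproduces the reported RR of 8.18 (95% CI 2.44-27.40).
```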
Neurological tests in congenitally infected infants
Out of the eight congenitally infected cases in participants with low IgG avidity results, seven were live births and one was a second-trimester abortion (no abnormal fetal echo findings). All 15 congenitally infected cases in participants with seroconversion were live births. The median gestational age at birth of all 22 live-birth cases was 38 weeks (range: 37-40 weeks), while the median birth weight was 2930 g (range: 2070-3826 g). Two out of 22 live-birth cases showed abnormal findings at birth. One case from a mother with low IgG avidity (37 weeks gestation at birth, birth weight of 2244 g) showed low birth weight, microcephaly, and a "refer" result in the NHS. The other case, from a mother with seroconversion (38 weeks gestation at birth, birth weight of 2070 g), showed low birth weight, small for gestational age, and microcephaly. Both underwent brain MRI, ABR, and funduscopy during the neonatal period. While the latter case showed normal results in all tests, the former case showed abnormalities in both brain MRI (periventricular cysts) and ABR (unilateral threshold elevation) but had normal funduscopy results. Subsequently, the former case underwent antiviral therapy but showed developmental delay (development quotient 62) and severe unilateral SNHL. The remaining 20 out of 22 live-birth cases did not show any abnormal findings at birth. Fourteen of these 20 cases without abnormal findings at birth underwent brain MRI, ABR, and funduscopy at ~18 months after birth. Two cases from mothers with seroconversion showed impaired white matter intensity in brain MRI but had normal ABR and funduscopy results (Figs. 2, S3).
Table note: Abnormal findings in neurological tests were found in one and two infants with low IgG avidity and seroconversion, respectively. a Including borderline. b Periventricular cysts in brain magnetic resonance imaging (MRI) and unilateral threshold elevation in auditory brainstem response (ABR). c Unilateral threshold elevation in ABR. d Impaired white matter intensity in brain MRI in both cases.
DISCUSSION
Serological tests are used to diagnose maternal primary CMV infections during pregnancy, both to closely examine suspicious fetal echo findings (such as hyperechogenic bowel, fetal growth restriction, or brain calcifications) and to screen asymptomatic pregnant women. Recent primary infections are diagnosed by measuring CMV IgG and IgM antibodies and IgG avidity. In primary infections, production of a CMV IgM antibody is induced first, followed by production of an IgG antibody, which is often not detectable until at least two weeks after symptom onset [5]. Seroconversion of the CMV IgG antibody definitively indicates primary infection. In addition, CMV IgG avidity tests, which measure the maturity of the IgG antibody against a viral antigen, are conducted to detect primary infections. Primary infections can therefore be confirmed by antibody seroconversion or by a set of both positive IgG and IgM antibodies combined with a low IgG avidity result [9]. We studied the incidence of a set of positive IgM and low IgG avidity at early-stage pregnancy (0.59%) and of seroconversion during early-to-late-stage pregnancy (0.39%) as primary infection. A CMV IgM antibody appears after a primary infection and can persist for months and sometimes over a year. Furthermore, a CMV IgM antibody is also detectable during reinfection with a strain different from that of the primary infection or during reactivation of the same endogenous latent strain.
For these reasons, the CMV IgM antibody has a high false-positive rate for primary infections, with <30% of pregnant women with a positive IgM antibody having a primary infection [4]. Therefore, a diagnosis of maternal primary infection cannot be based on the production of the IgM antibody alone. An IgG avidity test is performed to measure IgG antibody maturity against the CMV antigen. Although IgG avidity is very low in the first weeks after primary CMV infection, it gradually increases thereafter. A maternal low IgG avidity result suggests a primary infection within the preceding 2-4 months [4]. Thus, a low IgG avidity result during the first trimester of pregnancy suggests a primary infection during early-stage pregnancy. Conversely, a maternal high IgG avidity result suggests a primary infection occurring more than five months earlier [9]. A high IgG avidity result during the first trimester of pregnancy therefore suggests that primary infection occurred before conception. Moreover, a borderline IgG avidity result during the first trimester cannot exclude a primary infection during either early-stage pregnancy or the periconceptional period. A primary CMV infection in early-stage pregnancy is thus usually diagnosed with a set of positive IgG and IgM antibodies and a low IgG avidity result during the first trimester of pregnancy.
Table 2. The number of primary, non-primary, and no infection cases with abnormal fetal echo findings, "refer" in neonatal hearing screening, and neither abnormal fetal echo findings nor "refer" in neonatal hearing screening, by maternal antibody screening and congenital infection.
Although an IgG avidity test is useful for diagnosing primary infections, it has limitations. Whereas antibody seroconversion definitively indicates primary infection, a low IgG avidity result in a pregnant woman does not necessarily mean that the primary infection occurred during pregnancy. As a diagnostic tool, a low IgG avidity result still has some pitfalls, as it may be falsely present in past infections acquired before conception with very low IgG antibody levels [9,10]. Conversely, a recent primary infection may not present with low IgG avidity, as IgG avidity can be falsely high immediately after antibody seroconversion [11]. Therefore, diagnosis of maternal primary CMV infection based on IgG avidity measurement is not perfect. In pregnant women with low IgG avidity and subsequent congenital infection, we have previously reported that the higher the CMV IgM titer, the higher the positive predictive value for congenital infection while maintaining 100% sensitivity [8], and this was confirmed in the current study. In pregnant women with low IgG avidity, the IgM antibody titer was high in fetal congenital infection cases; thus, the IgM titer is considered useful for predicting the occurrence of fetal congenital infection. The incidence of primary infection during pregnancy is rarely reported on a large scale using both criteria (a set of positive IgM with low IgG avidity, and antibody seroconversion); it is overwhelmingly reported as antibody seroconversion during pregnancy in seronegative pregnant women. Hyde et al. [7] reported a 1.7% (95% confidence interval: 1.6-1.8%) incidence of antibody seroconversion during a 9-month pregnancy in seronegative populations.
In this study, we found a 1.2% incidence of antibody seroconversion alone (47 women with IgG and/or IgM seroconversion out of 4082 seronegative pregnant women), which is largely consistent with reports in the literature. Kaneko et al. [12], in a Japanese cohort study, reported primary infection in 0.86% of the total population (nine women with low avidity and one with seroconversion out of 1163 pregnant women), which is similar to this study's results despite the smaller cohort. We estimated that 0.98% of the maternal antibody screening cohort population had a primary infection, a result similar to the epidemiology in Western Europe and in the United States (~1-2% of the population) (Supplementary Manuscript). In addition to primary infection during pregnancy, the incidence of congenital infection after primary infection in the population, assessed through the large-scale maternal CMV antibody-screening cohort used in this study, was 0.16% (Supplementary Manuscript, Fig. S4). We studied congenital CMV infection in pregnant women, both with a set of positive IgM and low IgG avidity at early-stage pregnancy (0.04%) and with seroconversion during early-to-late-stage pregnancy (0.12%), as primary infection during pregnancy. In the literature, the incidence of congenital CMV infection among live births is reported to be 0.4-1.0% [13-16]; however, congenital infection has generally been reported without differentiating between maternal primary and non-primary infections during pregnancy. Recently, congenital infection has been reported separately for maternal primary and non-primary infections during pregnancy. Leruez-Ville et al. [15] reported congenital infection after maternal primary infection at 0.34% (eight out of 2378 pregnant women). Kaneko et al. [12] reported this rate at 0.26% (two out of 1163 pregnant women with IgG (+), IgM (+), and low IgG avidity and one woman with seroconversion). Tanimura et al. [17] reported a 0.14% congenital infection rate (two out of 2193 women with IgG (+), low IgG avidity, and/or IgM (+), and one woman with seroconversion). Our results were similar to the aforementioned studies, and with the largest cohort to date we demonstrated on a large scale that 0.16% of the maternal antibody screening population is estimated to have a congenital infection after a maternal primary infection during pregnancy. Young age and para ≥ 1 are known risk factors for primary CMV infection during pregnancy [18]. In this study, we examined the risk factors for congenital CMV infection after maternal primary infection and identified teenage pregnancy and para ≥ 2 as risk factors. The two major sources of CMV infection in pregnant women are sexual activity and direct contact with young children, with transmission occurring through direct contact with body fluids containing viable CMV. In teenage pregnant women, direct contact with semen containing CMV during sexual intercourse without condom use is thought to be a scenario of primary infection, as they rarely have children. In para ≥ 2 pregnant women, by contrast, direct contact during pregnancy with urine or saliva of their own children containing CMV is thought to be the other scenario of primary infection.
Fig. 2 Results of infant neurological tests at approximately 18 months of age in live-birth congenital CMV infection cases whose mothers had primary CMV infection (n = 22). a Low IgG avidity (n = 7) and seroconversion (n = 15). b One case from a low-avidity mother showed low birth weight, microcephaly, and "refer" in newborn hearing screening, and the other case from a seroconversion mother showed low birth weight, small for gestational age, and microcephaly. c The former case showed periventricular cysts in brain magnetic resonance imaging (MRI) and unilateral threshold elevation in auditory brainstem response (ABR). d Developmental delay and sensorineural hearing loss (SNHL). e Six patients from low-avidity mothers and eight from seroconversion mothers underwent brain MRI, ABR, and funduscopy; another three from seroconversion mothers underwent only ABR; the remaining three did not undergo tests. f Impaired white matter intensity in brain MRI in both cases.
Leruez-Ville et al. [15] reported congenital infection at 0.87% in pregnant women who were seronegative before pregnancy (eight congenital infections out of 924 women, including both women with IgG (−) and IgM (−) results at early pregnancy and women with IgG (+), IgM (+), and low/intermediate IgG avidity results). In the current study, this occurrence was lower, at 0.55% (23 congenital infections out of 4197 pregnant women, including 4082 with IgG (−) and IgM (−) results and 115 with IgG (+), IgM (+), and low IgG avidity results). The educational messaging provided to seronegative women to prevent primary CMV infection later in pregnancy might have contributed, although we had no data on the number of women in Japan who acquired CMV through exposure to young children or through sexual transmission. The total incidence of congenital infection in pregnant women with either primary or non-primary infection has been reported to be similar in France and Japan: 0.38% in France, reported by Leruez-Ville et al. [15], and 0.31% in Japan, reported by Koyano et al. [19]. As the incidence of total congenital infection is similar between the two countries and the incidence of congenital infection from mothers with primary infection is lower in Japan, the incidence of congenital infection from mothers with non-primary infection is thought to be higher in Japan. In fact, Tanimura et al. [17] reported that the incidence of congenital infection from mothers with non-primary infection was higher than that from mothers with primary infection (0.32% from non-primary and 0.14% from primary infection). Further study is needed regarding congenital infection in pregnant women with non-primary infection in Japan. In this study, two out of nine congenital infection cases whose mothers had IgG and/or IgM seroconversion from the initial IgG (−) and IgM (−) results showed an abnormal brain MRI result. Brain lesions are thought to develop only in congenital infection cases after a maternal primary infection in the first trimester of pregnancy [18,20-23]. Although the pregnant women with IgG and/or IgM seroconversion from the initial IgG (−) and IgM (−) results in this study are assumed to mostly represent primary infections during the second or third trimester, primary infections during the first trimester may also be included. The mothers of the two cases were initially IgG (−) and IgM (−) at 11 and 12 weeks of gestation, respectively, so their primary infections might have occurred within the first trimester of pregnancy. If we had added antibody tests at 14 weeks of gestation, they might have shown seroconversion before late-stage pregnancy.
In conclusion, in this population-based, mother–child, prospective cohort study of maternal CMV antibody screening, we characterized primary infections during pregnancy, congenital CMV infections after maternal primary infection, and cases with abnormal results in neurological tests. In addition, we identified teenage pregnancy and para ≥ 2 as risk factors for congenital infection after maternal primary infection.
6,008.8
2021-07-20T00:00:00.000
[ "Medicine", "Biology" ]
GRETNA: a graph theoretical network analysis toolbox for imaging connectomics Recent studies have suggested that the brain’s structural and functional networks (i.e., connectomics) can be constructed by various imaging technologies (e.g., EEG/MEG; structural, diffusion and functional MRI) and further characterized by graph theory. Given the huge complexity of network construction, analysis and statistics, toolboxes incorporating these functions are largely lacking. Here, we developed the GRaph thEoreTical Network Analysis (GRETNA) toolbox for imaging connectomics. The GRETNA contains several key features as follows: (i) an open-source, Matlab-based, cross-platform (Windows and UNIX OS) package with a graphical user interface (GUI); (ii) allowing topological analyses of global and local network properties with parallel computing ability, independent of imaging modality and species; (iii) providing flexible manipulations in several key steps during network construction and analysis, which include network node definition, network connectivity processing, network type selection and choice of thresholding procedure; (iv) allowing statistical comparisons of global, nodal and connectional network metrics and assessments of relationship between these network metrics and clinical or behavioral variables of interest; and (v) including functionality in image preprocessing and network construction based on resting-state functional MRI (R-fMRI) data. After applying the GRETNA to a publicly released R-fMRI dataset of 54 healthy young adults, we demonstrated that human brain functional networks exhibit efficient small-world, assortative, hierarchical and modular organizations and possess highly connected hubs and that these findings are robust against different analytical strategies. With these efforts, we anticipate that GRETNA will accelerate imaging connectomics in an easy, quick and flexible manner. GRETNA is freely available on the NITRC website.1 Introduction The human brain operates as an interconnected network that responds to various inputs from different sensory systems in real time. A substantial body of evidence suggests that the powerful performance arises from a highly optimized wiring layout embedded in our brains by coordinating neural activities among distributed neuronal populations and brain regions (Mesulam, 1990;McIntosh, 1999;Bressler and Menon, 2010). Mapping and characterization of the underlying structural and functional connectivity patterns of the human brain (i.e., connectomics; Sporns et al., 2005;Biswal et al., 2010) in both typical and atypical population is therefore fundamental since they provide invaluable insights into how the collective of the human brain elements is topologically organized to promote cognitive demands (Park and Friston, 2013) and how the topology dynamically reorganizes to respond to various brain disorders (Bullmore and Sporns, 2009;He and Evans, 2010;Rubinov and Bullmore, 2013). Recent advances in the human connectomics have shown that human brain networks can be non-invasively obtained from a variety of neurophysiological and neuroimaging techniques, such as electroencephalography/magnetoencephalography (EEG/MEG), functional near infrared spectroscopy (fNIRS), structural MRI, diffusion MRI and functional MRI. Based on data from these modalities, the brain networks can be generally categorized into structural networks and functional networks. 
Structural brain networks can be constructed by calculating interregional morphological correlations (e.g., cortical thickness) based on structural MRI (He et al., 2007;Bassett et al., 2008;Tijms et al., 2012) or by tracing interregional fiber pathways based on diffusion MRI (Hagmann et al., 2007;Iturria-Medina et al., 2007;Gong et al., 2009). Functional brain networks can be derived by estimating interregional statistical dependences in the BOLD signal from functional MRI (Biswal et al., 1995;Salvador et al., 2005), regional cerebral blood flow from arterial spin labeling (Liang et al., 2014), oxygenated/deoxygenated hemoglobin concentrations from functional near-infrared spectroscopy (fNIRS; Niu et al., 2012) or electrophysiological signals from EEG/MEG (Stam, 2004;Stam et al., 2007). Once the brain networks are constructed, a common mathematical framework based on graph theory can be employed to topologically characterize the organizational principles that govern the networks. In graph theory, a network is abstracted as a graph composed of a collective of nodes linked by edges. For human brain networks, nodes typically represent structurally, functionally or randomly defined regions of interest (ROIs), and edges represent inter-nodal structural or functional connectivity that can be derived from the above-mentioned data modalities. Recent years have witnessed a surge of interest in the study of human brain networks (Bullmore and Sporns, 2009;Xia and He, 2011;Filippi et al., 2013). In response, several freely available toolboxes have been developed to implement and visualize graph-based topological analyses of brain networks, such as the Brain Connectivity Toolbox (BCT; Rubinov and Sporns, 2010), eConnectome , CONN (Whitfield-Gabrieli and Nieto-Castanon, 2012), Graph-Analysis Toolbox (GAT; Hosseini et al., 2012) and GraphVar (Kruschwitz et al., 2015). Specifically, we have previously developed PANDA (Cui et al., 2013) for the construction of structural brain networks based on diffusion imaging data and BrainNet Viewer toolkits for the visualization of brain networks. These toolboxes, with distinct advantages and unique scopes of application (Table 1), together tremendously accelerate the progress of brain connectome studies. However, these toolboxes either cover only single functions of network construction, analysis or statistics or are powerless or inflexible in the face of huge computational loads and complex and diverse processes (Table 1, we will return this issue in the ''Discussion'' Section). A complete, efficient and flexible pipeline toolbox for imaging connectomics is currently lacking. Here, we developed the GRaph thEoreTical Network Analysis (GRETNA) toolbox to perform comprehensive graph-based topological analyses of brain networks. The GRETNA is a Matlab-based, open-source package with a graphical user interface (GUI). Compared with previous toolboxes, the most impressive features of GRETNA are the combination of multiple functional modules, flexible manipulation and parallel computation (Table 1). Specifically, GRETNA incorporates network construction, analysis and comparison modules to provide a complete and automatic pipeline for connectomics. 
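One of the features just listed, parallel computation, is described in more detail below: GRETNA farms per-subject preprocessing and metric-calculation jobs out to separate computational cores via the PSOM toolbox. As a rough illustration of that per-subject parallelism (not GRETNA's actual Matlab implementation), a hypothetical Python sketch might look as follows; `compute_metrics` and the simulated subject matrices are placeholders.

```python
from multiprocessing import Pool

import numpy as np

def compute_metrics(corr_matrix: np.ndarray) -> dict:
    """Hypothetical per-subject job: threshold a correlation matrix and
    return a couple of simple graph summaries (a stand-in for GRETNA's
    much richer metric set)."""
    adj = (np.abs(corr_matrix) > 0.3).astype(int)
    np.fill_diagonal(adj, 0)
    degree = adj.sum(axis=0)
    return {"mean_degree": float(degree.mean()),
            "density": float(adj.sum() / (adj.size - adj.shape[0]))}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for per-subject connectivity matrices loaded from disk.
    subjects = [np.corrcoef(rng.standard_normal((90, 200))) for _ in range(8)]

    # Distribute the per-subject jobs across worker processes, in the same
    # spirit as GRETNA allotting tasks to different computational cores.
    with Pool(processes=4) as pool:
        results = pool.map(compute_metrics, subjects)

    print(results[0])
```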
Given the popularity of resting-state functional MRI (R-fMRI) in mapping intrinsic brain connectivity patterns and studying the topological architecture of diseased brains (Biswal et al., 1995; Fox and Raichle, 2007; Van Dijk et al., 2010; Wang et al., 2010), GRETNA specifically extends its capabilities to R-fMRI data preprocessing and the subsequent network construction procedures. Moreover, GRETNA enables researchers to manipulate different network analytical strategies in an easy, quick and flexible manner, including structurally, functionally or randomly defined network nodes, positive or negative connectivity processing, binary or weighted network types and the choice of different thresholding procedures or ranges. Finally, GRETNA is capable of performing parallel computing in the network construction and analysis modules, an intriguing feature that can substantially shorten the duration of network analyses of large data sets. With these efforts, we anticipate that this toolbox will facilitate graph-based brain network studies, particularly those based on R-fMRI data. GRETNA has already been successfully applied in many previous connectome studies (He et al., 2008; Wang et al., 2011, 2015; Cao et al., 2013; Zhong et al., 2015).
Overview of Functionality of GRETNA
GRETNA is an open-source, Matlab-based, cross-platform (Windows and UNIX OS) package under the General Public License (GPL) that provides a GUI framework to implement comprehensive graph-based analyses of network topology, perform statistical comparisons of between-group differences in network metrics and examine the relationships between network properties and other variables of interest. It is worth emphasizing that these functionalities are applicable to any connectivity networks derived from various toolboxes (e.g., PANDA), data modalities (e.g., EEG/MEG, fNIRS and MRI), species (e.g., humans, monkey and cat) and research fields (e.g., social networks and transportation networks). In particular, GRETNA allows researchers to preprocess human R-fMRI data and construct intrinsic functional brain networks. GRETNA is divided into three sections: network construction, network analysis and network comparison (Figure 1).
FIGURE 1 | The graphical user interface (GUI) of GRETNA. The main window of GRETNA includes three panels: network construction, network analysis and network comparison.
In the network construction section, GRETNA allows researchers to: (i) perform R-fMRI data preprocessing, including volume removal, slice timing, realignment, spatial normalization, spatial smoothing, detrend, temporal filtering and removal of confounding variables by regression; (ii) compute voxel-based degree centrality (i.e., functional connectivity density); and (iii) construct region-based connectivity matrices (Figure 2). In this section, GRETNA accepts two types of data: DICOM data or Neuroimaging Informatics Technology Initiative (NIfTI) images (3D/4D). In the network analysis section, GRETNA allows researchers to: (i) convert individual connectivity matrices into a series of sparse networks according to the pre-assigned parameters of network type (binary or weighted), network connectivity member (absolute, positive or negative), threshold type (connectivity strength or sparsity) and threshold range; (ii) generate benchmark random networks that match real brain networks in the number of nodes and edges and in degree distribution; and (iii) calculate graph-based global and nodal network metrics (Figure 3).
In this section, GRETNA accepts two types of data: text files (i.e., .txt) or Matlab data files (i.e., .mat). In the final network comparison section, GRETNA allows researchers to: (i) perform statistical inference on global, nodal and connectional network parameters; and (ii) estimate network-behavior relationships (Figure 4). It is worth highlighting that GRETNA executes parallel computing throughout R-fMRI data preprocessing, network construction and network parameter calculation by allotting processing tasks to different computational cores. This is done by calling the PSOM toolbox (Bellec et al., 2012) on a single PC. Of note, the parallel computing works not only across multiple subjects, but also for a single subject when computing multiple network metrics. Figure 5 presents the flowchart of brain network construction and topological characterization and explains how parallel computing works. Below we describe these procedures in detail.
Network Construction
In this section, GRETNA allows researchers to perform several preprocessing steps of R-fMRI data that are commonly used in the community, and then to construct large-scale brain networks by calculating the pairwise functional connectivity among a set of ROIs according to a brain parcellation scheme. Notably, researchers can arbitrarily designate the order of the preprocessing steps.
Data Format Conversion
Before formal data preprocessing, DICOM data, the format output by most MRI scanners, are typically transformed into other formats, e.g., the NIfTI format. Compared with the older Analyze file format, the NIfTI format contains new and important features, such as affine coordinate definitions that relate a voxel index to a spatial location, indicators of the spatial normalization type and records of the spatio-temporal slice ordering. This conversion is achieved in GRETNA by calling dcm2nii from the MRIcroN software (http://www.mccauslandcenter.sc.edu/mricro/mricron/).
FIGURE 2 | The GUI panel of network construction. In this panel, GRETNA allows researchers to perform all common preprocessing steps used by the R-fMRI community and construct large-scale functional brain networks using different region-based parcellations. Voxel-based degree centrality can also be computed here.
Removal of Volumes
The first several volumes of individual functional images are often discarded to allow for magnetization equilibrium. GRETNA allows researchers to delete the first several volumes by specifying either the number of volumes to be deleted or the number of volumes to be retained. The latter is useful for across-dataset or across-center studies in which the numbers of image volumes usually differ.
Slice Timing Correction
Currently, R-fMRI datasets are usually acquired using repeated 2D imaging methods, which leads to temporal offsets between slices. These slice-timing effects have been demonstrated to have prominent effects on study results and can be successfully compensated by the slice timing correction step (i.e., temporal data interpolation; Sladky et al., 2011). This is performed in GRETNA by calling the corresponding SPM8 functions. Of note, for a longer repetition time (e.g., >3 s), within which a whole brain volume is acquired, it is advised to omit the slice timing correction step because interpolation in this case becomes less accurate.
Realignment
During an MR scan, participants inevitably undergo various degrees of head movement even when foam pads are used. These movements break the spatial correspondence of the brain across volumes.
This step realigns individual images so that each part of the brain is in the same position in all volumes. This is performed in GRETNA by calling the relevant SPM8 functions.
Spatial Normalization
For group averaging and group comparison purposes, individual data are usually transformed into a standardized space to account for variability in brain size, shape and anatomy. This can be accomplished in GRETNA by two methods based on SPM8 functions: (i) directly warping individual functional images to standard MNI space by estimating their transformation to the echo-planar imaging (EPI) template (Ashburner and Friston, 1999); and (ii) warping individual functional images to standard MNI space by applying the transformation matrix derived from registering the T1 image (co-registered with the functional images) to the MNI template (Ashburner and Friston, 2005). The latter method tends to improve the accuracy of spatial normalization when the distortions of the functional data are negligible, which is important to ensure effective cross-modality co-registration.
Spatial Smoothing
Smoothing, a common preprocessing step after spatial normalization, is used to improve the signal-to-noise ratio and attenuate anatomical variance due to inaccurate inter-subject registration. GRETNA performs spatial smoothing using a Gaussian filter whose shape is determined by a 3-value vector of full width at half maximum (FWHM), as implemented in SPM8.
FIGURE 3 | The GUI panel of network analysis. In this panel, GRETNA allows researchers to calculate many global and nodal graph-based metrics used in brain network studies. This panel provides flexible manipulations for researchers regarding the thresholding procedure, network type and network connectivity member. Notably, null random networks can be generated here to benchmark the results derived from brain networks.
Detrend
fMRI datasets may suffer from a systematic increase or decrease in the signal over time, presumably due to long-term physiological shifts or instrumental instability (Lowe and Russell, 1999). GRETNA provides an option to reduce the effects of linear and non-linear drifts or trends in the signal on the basis of the relevant SPM8 functions. It should be noted that this step is still controversial (Smith et al., 1999) and researchers should interpret their results with caution if detrending is implemented.
Temporal Filtering
Previous studies have shown that spontaneous brain activity is predominantly subtended by the low-frequency components (0.01-0.1 Hz) of R-fMRI signals (Biswal et al., 1995; Lowe et al., 1998; Kiviniemi et al., 2000). Thus, R-fMRI data are typically band-pass filtered to reduce the effects of low-frequency drift and high-frequency physiological noise.
FIGURE 5 | A flowchart to explain brain network construction, topological characterization and parallel computing.
Notably, even in the typically used low-frequency intervals, accumulating evidence suggests that functional brain architectures are distinct across different frequency bands (Achard et al., 2006; Salvador et al., 2008; Zuo et al., 2010; Liao et al., 2013) and show frequency-specific alterations in neurological and psychiatric disorders, such as Alzheimer's disease and mild cognitive impairment (Han et al., 2011; Wang et al., 2013; Liu et al., 2014). Moreover, recent studies highlight the physiological significance of high-frequency fluctuations (Boubela et al., 2013; Liao et al., 2013).
In GRETNA, we provide an option for researchers to easily choose the frequency ranges that the data will be filtered with an ideal box filter function. This is done by converting a time series into frequency domain using a Fast Fourier Transform (FFT), retaining amplitude spectrum for frequency components of interest and setting amplitude spectrum to 0 for other frequency components, and converting the new amplitude spectrum into time domain by an inverse FFT transform. Removal of Confounding Variables For R-fMRI datasets, several nuisance signals are typically removed from each voxel's time series to reduce the effects of non-neuronal fluctuations, including head motion profiles, the cerebrospinal fluid (CSF) signal, the white matter (WM) signals and/or the global signal (Greicius et al., 2003;Fox et al., 2005). In GRETNA, researchers can assign any combination of these variables to be variables of no interest, which will be regressed out. By default, the global signal, CSF signal and WM signal are calculated within the BrainMask_05_61_73_61.img, the CsfMask_07_61_73_61.img and the WhiteMask_09_61_73_61.img, respectively. The three images are from the REST toolbox (Song et al., 2011) and separately correspond to brain masks of the whole brain, cerebral spinal fluid and WM in the standard MNI space. In addition, the first-order derivative of head motion profiles can also be removed. Voxel-Based Degree Degree is a measure that quantifies the importance/centrality of a node through the number and/or strength of connections to all other nodes in a network. Degree centrality has been widely used in brain network studies because it tends to have higher test-retest (TRT) reliability than other nodal centrality metrics Cao et al., 2014), and it is well in line with physiological measures, such as the rates of cerebral blood flow and glucose metabolism (Liang et al., 2013;Tomasi et al., 2013). Three parameters are needed for voxelbased degree analysis based on R-fMRI data: (i) a brain mask to indicate the coverage of brain regions; (ii) a correlation threshold to exclude low-level correlations (e.g., 0.2); and (iii) a distance threshold to determine short/long connections. Using GRETNA, we can obtain a total of 18 voxel-based degree maps for each participant that vary across connectivity distance (i.e., short-, long-or full-range), sign (i.e., positive, negative or absolute) and type (i.e., binary or weighted). Researchers can choose these degree maps according to their research objectives. Functional Connectivity Matrix This option is used to construct individual interregional functional connectivity matrices in two major steps: regional parcellation (i.e., network node definition) and functional connectivity estimation (i.e., network edge definition). GRETNA provides options for several different parcellation schemes, including the structurally defined Anatomical Automatic Labeling atlas (AAL-90;Tzourio-Mazoyer et al., 2002) and Harvard-Oxford atlas (HOA-112; Kennedy et al., 1998;Makris et al., 1999) and the functionally defined Dos-160 (Dosenbach et al., 2006(Dosenbach et al., , 2010, Crad-200 (Craddock et al., 2012), Power-264 (Power et al., 2011) and Fair-34 (Fair et al., 2009). Additionally, GRETNA also contains functions that can be used to parcel the brain into an arbitrary number of ROIs with same or different sizes (Zalesky et al., 2010b). 
These parcellation approaches provide flexible choices to determine network nodes for specific research objectives and allow researchers to test the robustness of their findings across different regional parcellations. Once a parcellation scheme is chosen, a mean time series is extracted from each parcellation unit, and pairwise functional connectivity is then estimated among the time series by calculating linear Pearson correlation coefficients. This generates an N × N correlation matrix for each participant, with N being the number of regions included in the selected brain parcellation. Of note, this section also allows researchers to construct dynamic correlation matrices based on a sliding time-window approach.
Network Analysis
In this section, GRETNA can calculate various topological properties of a network or graph from both global and nodal aspects, which can be compared with the counterparts of random networks to determine their non-randomness.
Thresholding
Prior to topological characterization, a thresholding procedure is typically applied to exclude the confounding effects of spurious relationships in interregional connectivity matrices. Two thresholding strategies are provided in GRETNA: the absolute connectivity strength threshold and the relative sparsity threshold. Specifically, for the connectivity strength threshold, researchers can define a threshold value such that network connections with weights greater than the given threshold are retained and others are ignored (i.e., set to 0). This connectivity strength threshold method allows for the examination of the absolute network organization. Note that the same connectivity strength threshold usually leads to a different number of edges in the resultant networks, which could confound between-group comparisons of network topology (van Wijk et al., 2010). To address this problem, GRETNA provides an alternative threshold method, the sparsity (or density) threshold. Sparsity is defined as the number of actual edges divided by the maximum possible number of edges in a network. For networks with the same number of nodes, the sparsity threshold ensures the same number of edges for each network by applying a subject-specific connectivity strength threshold, therefore allowing an examination of relative network organization. These two thresholding strategies are complementary and together provide a comprehensive method to test the network organization. Finally, given the absence of a definitive way to select a single threshold, researchers can input a range of continuous threshold values to study network properties in GRETNA.
Network Type
Networks can be binarized or weighted depending on whether the connectivity strength is taken into account. Previous brain network studies have mainly focused on binary networks due to the reduction in computational complexity and the clarity of network metric definitions. Notably, binary networks neglect the strength of connections above the threshold and therefore fail to identify subtle network organizations (Cole et al., 2010). In GRETNA, all network analyses can be conducted for both binary and weighted networks. Briefly, a connectivity matrix C = [c_ij] can be converted into either a binary network, with a_ij = 1 if c_ij > r_thr and a_ij = 0 otherwise, or a weighted network, with w_ij = c_ij if c_ij > r_thr and w_ij = 0 otherwise, where r_thr is a connectivity strength threshold that is the same across all subjects for the connectivity strength thresholding procedure or a subject-specific connectivity strength threshold determined by the sparsity thresholding procedure.
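A minimal sketch of the two thresholding procedures and the binary/weighted conversion described above, written in Python for illustration (GRETNA itself implements this in Matlab); the function and variable names are not part of GRETNA.

```python
import numpy as np

def threshold_network(corr, sparsity=None, r_thr=None, weighted=False):
    """Convert a correlation matrix into a binary or weighted network using
    either an absolute connectivity strength threshold (r_thr) or a relative
    sparsity threshold (fraction of possible edges to retain)."""
    c = corr.copy()
    np.fill_diagonal(c, 0.0)
    n = c.shape[0]

    if sparsity is not None:
        # Sparsity thresholding: keep the strongest edges so that the number
        # of retained edges equals sparsity * maximum possible number of edges.
        iu = np.triu_indices(n, k=1)
        n_keep = int(round(sparsity * len(iu[0])))
        r_thr = np.sort(c[iu])[::-1][max(n_keep - 1, 0)]  # subject-specific cut-off
        mask = c >= r_thr
    else:
        # Connectivity strength thresholding: same r_thr for every subject.
        mask = c > r_thr

    return np.where(mask, c, 0.0) if weighted else mask.astype(float)

# Example: 90 hypothetical ROI time series, Pearson correlation, 10% sparsity.
rng = np.random.default_rng(42)
ts = rng.standard_normal((200, 90))       # time points x regions
corr = np.corrcoef(ts, rowvar=False)      # 90 x 90 correlation matrix
binary_net = threshold_network(corr, sparsity=0.10, weighted=False)
weighted_net = threshold_network(corr, r_thr=0.25, weighted=True)
print(binary_net.sum() / 2, "edges retained in the binary network")
```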
It should be emphasized that for weighted network analysis, the connectivity strength must reflect similarity (e.g., correlation coefficient) because the reciprocal of connectivity strength is used to calculate inter-nodal path length. Network Connectivity Member Previous R-fMRI studies have found that certain functional systems are anti-correlated (i.e., have a negative correlation) in their spontaneous brain activity (Greicius et al., 2003;Fox et al., 2005). However, negative correlations may also be introduced by global signal removal, a preprocessing step that is currently controversial (Fox et al., 2009;Murphy et al., 2009;Weissenbacher et al., 2009;Schölvinck et al., 2010). For network topology, negative correlations may have detrimental effects on TRT reliability and exhibit organizations different from positive correlations (Schwarz and McGonigle, 2011). Accordingly, GRETNA provides options for researchers to determine the network connectivity members, based on which subsequent graph analyses are implemented: positive network (composed of only positive correlations), negative network (composed of only absolute negative correlations) or full network (composed of both positive correlations and the absolute values of the negative correlations). Random Networks Brain networks are typically compared with random networks to test whether they are configured with significantly non-random topology. In GRETNA, the random networks are generated by a Markov-chain algorithm (Maslov and Sneppen, 2002;Sporns and Zwi, 2004), which preserves the same number of nodes and edges and the same degree distribution as the real brain networks. Specifically, for a binary network, two edges (i 1 ,j 1 ) and (i 2 ,j 2 ), are first selected at random that is node i 1 is connected to node j 1 and node i 2 is connected to node j 2 . If there are no edges between node i 1 and node j 2 and between node i 2 and node j 1 , we then add two new edges, (i 1 ,j 2 ) and (i 2 ,j 1 ), to replace the original two edges, (i 1 ,j 1 ) and (i 2 ,j 2 ). This procedure is repeated 2 X the number of edges in the reference brain network to assure the randomized organization. For a weighted network the randomization is performed in a similar manner but in this case the weights are bound to the edges. It should be noted that how to generate random networks is an ongoing topic for brain network studies Zalesky et al. (2012); Hosseini and Kesler (2013). Therefore we also provide codes to generate random networks based on a time series randomization and correlation matrix randomization as introduced in Zalesky et al. (2012). Further studies are needed to produce null models that are more biologically meaningful as benchmarks for real brain networks. Network Metrics GRETNA can calculate several widely used network metrics in brain network studies for both binary and weighted networks. Generally, these measures can be categorized into global and nodal metrics. Global metrics include small-world parameters clustering coefficient and characteristic path length (Watts and Strogatz, 1998;Onnela et al., 2005), local efficiency and global efficiency Marchiori, 2001, 2003), modularity (Newman, 2006), assortativity (Newman, 2002;Leung and Chau, 2007), synchronization (Barahona and Pecora, 2002;Motter et al., 2005) and hierarchy (Ravasz and Barabási, 2003). Nodal metrics include nodal degree, nodal efficiency (Achard and Bullmore, 2007) and nodal betweenness centrality (Freeman, 1977). 
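Returning to the degree-preserving rewiring described under Random Networks above, the procedure is straightforward to express in code. The following Python sketch (using networkx rather than GRETNA's Matlab routines; `networkx.double_edge_swap` implements the same idea) rewires a binary graph and then benchmarks its clustering coefficient and global efficiency against the rewired null networks, in the spirit of the comparison described in this section.

```python
import numpy as np
import networkx as nx

def rewire_preserving_degree(G, n_swaps=None, seed=0):
    """Markov-chain (Maslov-Sneppen style) rewiring of a binary graph:
    pick two edges (i1, j1) and (i2, j2) at random and replace them with
    (i1, j2) and (i2, j1) whenever the new edges do not already exist,
    so every node keeps its degree."""
    rng = np.random.default_rng(seed)
    R = G.copy()
    if n_swaps is None:
        n_swaps = 2 * R.number_of_edges()   # "repeated 2 x the number of edges"
    done = 0
    while done < n_swaps:
        edges = list(R.edges())
        (i1, j1), (i2, j2) = (edges[k] for k in rng.choice(len(edges), size=2, replace=False))
        done += 1                            # count the attempt and move on (simplification)
        if len({i1, j1, i2, j2}) < 4 or R.has_edge(i1, j2) or R.has_edge(i2, j1):
            continue
        R.remove_edges_from([(i1, j1), (i2, j2)])
        R.add_edges_from([(i1, j2), (i2, j1)])
    return R

# Benchmark a hypothetical binary brain network against rewired null networks.
rng = np.random.default_rng(1)
adj = rng.random((90, 90)) < 0.1
adj = np.triu(adj, 1)
adj = adj | adj.T                            # symmetric binary adjacency matrix
G = nx.from_numpy_array(adj.astype(int))

nulls = [rewire_preserving_degree(G, seed=s) for s in range(20)]
cc_real = nx.average_clustering(G)
cc_null = np.mean([nx.average_clustering(R) for R in nulls])
eff_real = nx.global_efficiency(G)
eff_null = np.mean([nx.global_efficiency(R) for R in nulls])
print(f"clustering: real {cc_real:.3f} vs null {cc_null:.3f}; "
      f"global efficiency: real {eff_real:.3f} vs null {eff_null:.3f}")
```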
Of note, during the calculation of the characteristic path length, local efficiency, global efficiency, nodal efficiency and betweenness, GRETNA computes the pairwise shortest path length matrix by calling functions from the MatlabBGL toolbox (version 4.0; the Floyd-Warshall algorithm for networks with density larger than 10% and Johnson's algorithm otherwise). Additionally, GRETNA calculates the characteristic path length as the "harmonic mean" distance between all possible node pairs (Newman, 2003) to handle disconnected nodes. For the formulas, usage and interpretation of these measures, see Rubinov and Sporns (2010) and Wang et al. (2011). Finally, GRETNA can also calculate the area under the curve (AUC) for each network measure to provide a scalar that does not depend on a specific threshold selection (Wang et al., 2009; Zhang et al., 2011). It should be noted that this module can perform topological analysis of brain networks independent of imaging modality and species. For example, structural brain connectivity matrices of humans or macaques obtained from the PANDA software (Cui et al., 2013) or the CoCoMac database can be entered into the module for graph analysis.
Network Comparison
In this section, GRETNA allows researchers to perform statistical testing on global, nodal and connectional network measures. For global and nodal network measures, GRETNA provides several popular parametric models, including the one-sample t-test, two-sample t-test, paired t-test, one-way analysis of variance (ANOVA) and repeated-measures ANOVA. GRETNA also provides multiple comparison correction approaches, including the false discovery rate (FDR) and Bonferroni procedures. With respect to inter-nodal connection comparisons, the one-sample t-test and two-sample t-test are provided, followed by multiple comparison correction procedures with FDR, Bonferroni or network-based statistic methods (Zalesky et al., 2010a). Finally, the statistical analysis of network-behavior correlations can be implemented in this section. In addition, covariates of no interest (e.g., age, gender and clinical variables) can be added to all of these statistical models.
Example R-fMRI Data to Illustrate the Usage of GRETNA
Participants and Data Acquisition
A publicly available TRT reliability dataset was employed to exemplify the usage of GRETNA. This dataset contains 57 participants. Only the first session was used in the current study to explain the use of GRETNA. Of note, four participants were excluded due to excessive head motion or poor image quality (Dai et al., 2014).
Data Preprocessing
Data preprocessing included removal of the first 10 volumes, slice timing correction, head movement correction, spatial normalization (T1 segmentation), removal of linear trend, temporal band-pass filtering (0.01-0.1 Hz) and nuisance signal regression (24-parameter head motion profiles, global signal, CSF signal and WM signal).
Network Construction and Analysis
We first obtained 6 voxel-wise functional connectivity strength maps (i.e., voxel-based degree centrality maps) for each participant, corresponding to the combinations of network type (binary or weighted) and network connectivity member (positive, negative or the absolute value of both). We then constructed 6 inter-regional functional connectivity matrices for each participant according to the 6 different regional parcellation approaches provided in GRETNA (i.e., AAL-90, HOA-112, Dos-160, Fair-34, Crad-200 and Power-264). The order, location and name of each node under these parcellation atlases are provided in the toolbox (. .
.\GRETNA\Templates)., These connectivity matrices were subsequently averaged across participants to derive 6 group-level mean connectivity matrices. These group-level matrices were further converted into a set of binary and weighted networks via connectivity strength (i.e., correlation) and sparsity thresholding procedures (both ranged from 0-1 with an interval of 0.04). Finally, we calculated various global (clustering coefficient, characteristic shortest path length, local efficiency, global efficiency, assortativity, hierarchy, synchronization and modularity) and nodal (nodal degree, nodal efficiency and nodal betweenness centrality) topological properties of these brain networks. All imaging preprocessing, network construction and analyses were performed in the GRETNA toolbox. The results of the network analysis were visualized using the BrainNet Viewer toolbox . Figure 6 shows the mean voxel-based functional connectivity strength maps for all of the participants. We found that the functional connectivity strength was distributed heterogeneously over the brain with the most highly connected regions in the posterior cingulate gyrus, precuneus, medial prefrontal cortex, dorsolateral prefrontal cortex and subcortical structures (e.g., hippocampus, thalamus and amygdala). This pattern was generally robust across of network type (binary or weighted) and network connectivity member (absolute, positive or negative). Region-based Brain Networks: Global Metrics The mean interregional functional connectivity matrices derived under each regional parcellation scheme are shown in Figure 7. Given the fact that: (i) the R-fMRI data were mainly used to illustrate the usage of GRETNA; (ii) the analyzed network properties have been frequently studied under both healthy and pathological conditions (For relevant reviews, see Bullmore and Sporns, 2009;He and Evans, 2010;Stam, 2014); and (iii) our findings were largely comparable with previous studies and were qualitatively independent of the brain parcellation schemes used in the current study; we thus only took Power-264 as an example to present our findings since this parcellation provided the highest spatial resolution among the 6 atlases used. Figure 8 presents all global metrics (clustering coefficient, characteristic path length, local efficiency, global efficiency, assortativity, hierarchy, synchronization and modularity) for both the group-based brain network and the 100 matched random networks as a function of sparsity and correlation thresholds. The functional brain network exhibited different organization from random networks, as characterized by a higher clustering coefficient, characteristic path length, local efficiency, assortativity and modularity but lower global efficiency. Most of these findings were robust against the selection of network types and threshold procedures. Additionally, several network measures varied depending on the choices of network type or thresholding procedure. For example, only weighted network analysis revealed lower synchronization for the brain network than the random networks; a hierarchical structure was evident in the brain network only when the correlation-based thresholding method was used. Figure 9A shows the spatial distributions of three nodal centralities (degree, efficiency and betweenness) for both binary and weighted brain networks under both correlation and sparsity thresholding procedures (the AUCs were used here). 
The spatial distributions of nodal degree and efficiency were highly similar regardless of network type and thresholding procedure. Specifically, the posterior parietal, medial and lateral prefrontal and lateral temporal cortices as well as several subcortical structures exhibited the highest values. Region-Based Brain Networks: Nodal Metrics However, nodal betweenness exhibited obviously different patterns in that only the posterior parietal cortex showed extremely high betweenness in the brain, a consistent finding across different network types and thresholding procedures. Further clustering analysis of the spatial similarity (i.e., correlation) matrix of nodal centrality distributions validated this observation that betweenness centrality was separated from nodal degree and efficiency, which were clustered together ( Figure 9B). Discussion We developed a toolbox, GRETNA, to automatically analyze topological properties of brain networks that are not constrained by data modality and species. Specifically, GRETNA can perform R-fMRI data preprocessing, construct brain functional networks and calculate most commonly used global and nodal topological attributes with parallel computing ability. Moreover, GRETNA is flexible in dealing with several important methodological issues, such as network node definition, network types, thresholding procedure and treatment of negative correlations, all of which are great concerns in brain network studies. Finally, we utilized a publicly released R-fMRI dataset to demonstrate the capabilities of GRETNA. Graph-based topological analysis of human brain networks is one of the most active domains in modern brain science. With the explosion of brain network studies, a growing number of toolboxes are being developed to facilitate the progress from brain network construction to topological characterization and result visualization ( Table 1). For example, the PANDA toolbox has been developed to construct large-scale structural brain networks based on diffusion MRI data (Cui et al., 2013); the BCT toolbox allows topological analysis of networks based on Matlab codes (Rubinov and Sporns, 2010); and the BrainNet Viewer can visualize brain networks . For R-fMRI, toolboxes also exist with functionality in data preprocessing, network construction or descriptions, such as the REST (Song et al., 2011), CONN (Whitfield-Gabrieli andNieto-Castanon, 2012) and GAT (Hosseini et al., 2012). Of note, the CONN toolbox can also calculate some topological attributes of networks. However, it is important to note that the majority of these toolboxes either can only address a single module of brain network construction (e.g., PANDA) or network metric calculation (e.g., BCT), or lack the ability to support parallel computing, therefore inconvenient for conducting a complete, efficient brain network study. In contrast, GRETNA combines parallel computing with a whole pipeline of R-fMRI data pre-processing, network construction and network topological characterization, which could significantly accelerate the research process during connectome studies. Specifically, compared with the recent developed GraphVar (Kruschwitz et al., 2015), GRETNA has distinct features in parallel computing, capability to preprocess R-fMRI data. In addition, connectome-based studies are of high complexity during their implementations as reflected by liberal choices in the analytical strategies, such as brain node and edge definition, thresholding procedure, network type and others. 
Due to the current lack of a gold standard for determining these options, GRETNA provides many options to address issues of increasing concern in brain network analysis, such as the brain parcellation scheme, binary or weighted network type, thresholding procedure and treatment of negative correlations. This enables researchers to flexibly determine their analytical strategies and thus allows testing the robustness of their findings against different choices. Finally, the outputs from GRETNA are easily compatible with our previous connectome visualization tool, BrainNet Viewer. Using a publicly released TRT dataset, we found that the most highly connected regions in the brain were predominantly in the posterior cingulate gyrus, precuneus, medial prefrontal cortex, dorsolateral prefrontal cortex and several subcortical structures. This finding is generally robust against the spatial resolution (voxel- or region-level) and centrality measure (degree, efficiency or betweenness) used, particularly for the posterior parietal regions. These identified hubs are comparable with those in previous structural and functional brain network studies (Hagmann et al., 2008; Buckner et al., 2009; Gong et al., 2009; Tomasi and Volkow, 2010; Liang et al., 2013). Moreover, the hub topography was independent of network type, network connectivity member and thresholding procedure, indicating that hubs are a stable, intrinsic property of brain network architecture. Of note, despite high spatial correlations, nodal betweenness behaved differently from nodal degree and efficiency in capturing hub topography, presumably because betweenness depends on more than one graph property or on ratios of properties (i.e., it is second-order), whereas degree and efficiency depend on only one graph property (i.e., first-order; Wang et al., 2011). At the global level, the human brain networks exhibited a different organization from matched random networks, as characterized by a higher clustering coefficient, characteristic path length, local efficiency, assortativity and modularity and lower global efficiency, which is indicative of the efficient small-world, assortative and modular organization of functional brain networks. This is consistent with numerous previous brain network studies (Park et al., 2008; Bullmore and Sporns, 2009; He and Evans, 2010; Meunier et al., 2010; Braun et al., 2012; Liang et al., 2012). Additionally, these findings were robust against the choice of network connectivity member and thresholding procedure, suggesting that these organizational principles are stable configurations embedded in functional brain networks. Regarding hierarchy, positive values were observed, which indicates a hierarchical structure of functional brain networks. In hierarchical networks, highly connected hubs tend to link nodes that have a limited chance to interconnect with each other, which favors top-down routing among network nodes on the one hand and minimizes wiring costs on the other (Ravasz and Barabási, 2003). The hierarchical structure observed here is consistent with previous brain network studies (Bassett et al., 2008; Braun et al., 2012; Liang et al., 2012). Additionally, we also noted positive synchronization for functional brain networks, a feature that has been studied relatively less in human brain networks than other measures. Notably, the behaviors of hierarchy and synchronization seemed to depend on the analytical strategies: more obvious deviations of functional brain networks from matched random networks appeared when the weighted network analysis was used for synchronization and when the correlation-based thresholding procedure was used for hierarchy.
FIGURE 7 | Mean inter-regional correlation matrices. Individual R-fMRI functional connectivity matrices were first transformed into z-score matrices (Fisher's r-to-z transformation), then averaged across all participants, and finally inversely transformed back into r-value matrices (inverse Fisher r-to-z transformation). Six different regional parcellation approaches were used, including four functionally defined parcellations and two structurally defined parcellations (AAL-90 and HOA-112).
FIGURE 8 | Global organization of the group-based functional brain network. Significantly different organization was observed for R-fMRI brain networks compared with matched random networks, as characterized by a higher clustering coefficient, characteristic path length, local efficiency, assortativity and modularity but lower global efficiency. These findings were generally robust against the choices of network type and thresholding procedure.
FIGURE 9 | Nodal characteristics of the group-based functional brain network. (A) Nodal degree, efficiency and betweenness were computed for both binary and weighted R-fMRI brain networks under both the correlation and sparsity thresholding procedures (only nodes with centralities larger than the mean of the whole brain network are shown). (B) Although significant correlations were observed in the spatial distributions among the three nodal centrality measures regardless of network type and thresholding procedure, nodal betweenness revealed unique patterns compared to nodal degree and efficiency, as demonstrated by the hierarchical clustering analysis.
Taken together, GRETNA revealed findings largely comparable with previous brain network studies, thereby demonstrating its effectiveness. It should be noted that while graph-based brain network studies are burgeoning, they are still in their infancy. Many methodological challenges remain to be elucidated, such as head motion correction (Muschelli et al., 2014; Patel et al., 2014), null model construction (Zalesky et al., 2012; Hosseini and Kesler, 2013), thresholding method selection (Toppi et al., 2012) and connectivity type determination (Salvador et al., 2007; Liang et al., 2012). Moreover, certain topological attributes are not included in the current GRETNA, such as rich-club architecture (van den Heuvel and Sporns, 2011) and motifs (Milo et al., 2002). Future versions of GRETNA will expand the functionality in these aspects. GRETNA can be further improved by integrating independent component analysis to allow exploring functional brain network topology among different brain components or subsystems (Yu et al., 2011, 2013), by adding sophisticated methods to characterize the temporal evolution of functional brain networks (Liao et al., 2014; Zalesky et al., 2014), or both. In addition, the current GRETNA can only handle undirected networks (binary and weighted). Recent methodological advances have allowed researchers to infer large-scale directed brain networks with R-fMRI data (Liao et al., 2011; Yan and He, 2011). Hence, an important future extension of GRETNA is to add functionality to address directed networks. Finally, although the current GUI version of GRETNA includes several statistical functions, they are all parametric.
Given the lack of statistical theory regarding the distribution of graph metrics for human brain networks, future versions could contain nonparametric inference of brain network metrics (Bullmore and Sporns, 2009), such as the permutation test, functional data analysis (Bassett et al., 2012) or re-sampling approaches. In conclusion, we developed a user-friendly and easily navigable toolbox, GRETNA, to assist in conducting topological analyses of structural and functional brain networks. This toolbox has a GUI that is highly compatible with the widely used SPM toolbox. We hope that the toolbox contributes to facilitating and standardizing brain connectomics studies based on graph theory.
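The permutation-based inference mentioned above can be illustrated with a short sketch; the two groups and their metric values below are simulated, and the design (a two-group comparison of a single network metric) is an assumption for illustration only, not an analysis from the paper.

```python
# Sketch: permutation test on the group difference of a network metric.
import numpy as np

rng = np.random.default_rng(3)
group_a = rng.normal(0.45, 0.05, 30)     # e.g., clustering coefficient, group A
group_b = rng.normal(0.42, 0.05, 30)     # group B
observed = group_a.mean() - group_b.mean()

pooled = np.concatenate([group_a, group_b])
n_perm, count = 10000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)                              # relabel participants at random
    diff = pooled[:30].mean() - pooled[30:].mean()
    if abs(diff) >= abs(observed):
        count += 1
print(f"observed diff = {observed:.4f}, permutation p = {count / n_perm:.4f}")
```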
9,049.6
2015-06-30T00:00:00.000
[ "Computer Science" ]
Study of the M-doped Effect (M=Al, Ag, W) on the Dissociation of O2 on Cu Surface Using a First Principles Method
The O2 adsorption and dissociation on the M-doped (M=Al, Ag, W) Cu (111) surface were studied by density functional theory. The adsorption energy of the adsorbate, the average binding energy and surface energy of the Cu surface, and the doping energy of the doping atom were calculated. All the doped atoms can combine stably with Cu atoms and improve the surface activity of the Cu surface, and the Al and W atoms (but not the Ag doping atom) strengthen the bonding between the atoms in the Cu (111) slab. According to the DOS analysis, owing to the different electronegativities of the three metals Al, Ag and W, these doping atoms can resist the dissociation of O2. The potential energy surface was computed, and the result showed that the dissociation reaction of O2 on the surfaces is reflected not only in the barrier energy but also in the reaction energy. Ag atoms had the best resistance effect against O2 dissociation compared with Al and W atoms because of the large barrier energy and reaction energy.
Introduction
The reaction of gas molecules with metals is being studied both experimentally and computationally for Cu, as it is related to many processes such as catalysis, deposition, coating, corrosion and oxidation 1,2. Among them, the oxidation of Cu and Cu-based alloy surfaces is a common and important phenomenon because the oxide film formed in air protects the surface against further reaction and corrosion. The steps in the oxidation process involve dissociative adsorption of O2 on Cu-based alloy surfaces, migration of O into the surfaces, and then oxide formation. However, the initial reaction mechanism has not been well understood, and the energy barrier of the initial oxidation is still under debate. The adsorption and reaction of O2 on Cu surfaces have been studied for years with experimental techniques. The interaction of Ni/Fe-covered Cu (100) surfaces with O2 was studied by x-ray photoelectron spectroscopy (XPS). The authors pointed out that the composition and coverage of the oxides of Fe and Ni are related to the concentration of O2 and the exposure time 3. A. R. Balkenende and collaborators then found that the exposure of O2 to Cu (100) and Cu (110) surfaces leads to dissociative adsorption and that the dissociation on the Cu (110) surface is about an order of magnitude faster than on the Cu (100) surface 4. Dissociative chemisorption of O2 on the Cu (100), S/Cu (100) and Ag/Cu (100) surface alloys was investigated by Auger electron spectroscopy (AES). It was concluded that at very low Ag coverages, the reduced reactivity of Ag/Cu (100) towards O2 dissociation is primarily due to the steric blocking of the surface defects, and that any electronic effects are only secondary and present only at higher Ag coverages 5. Energy-selective molecular beam surface scattering experiments revealed a defect-induced low-barrier dissociation pathway leading to enhanced dissociation of O2 molecules on the Cu surface with translational energies up to 60 meV 6. These qualitative and quantitative results are mainly based on the analysis of energy oscillation spectroscopy, without directly observing the microscopic process of the initial stage of O2 adsorption and dissociation on the Cu surface. In contrast, theoretical simulation work is more suitable for directly obtaining microscopic information such as the structure, energy, transition state, and potential energy surface of O2 adsorption and dissociation on the Cu surface.
This is partly because O atoms exhibit little selectivity with respect to positions and coverages on the Cu surface. For example, based on density functional theory (DFT) calculations, it was found that on the Cu (211) surface the step microfacets are very reactive and the dissociation of the O2 molecule at room temperature occurs mostly on those sites 7. First principles simulations based on DFT were used to determine adsorption energies and pathways to dissociation of O2 at finite temperature on the Cu (110) surface. The results suggested that adsorption kinetics play an important role in determining the mechanism of dissociation 8. Another work showed that expansive strain parallel to the surface plane could enhance the binding of atomic and molecular oxygen on the Cu (111) surface as well as decrease the transition state energy of O2 dissociation 9. However, the theoretical sticking probability of O2 dissociation on Cu (100) is found to be non-activated, and the topology of the potential energy surface is very open in the entrance channels 10. The previous work mentioned above shows that the micro-mechanism of dissociation and adsorption of O2 on the Cu surface is a very important research topic. Dissociation of O2 on Cu (111) has been described as both a direct process and one mediated by a molecular precursor. The steering effect plays an important role in the oxygen molecule dissociative process. The potential-energy hypersurface (PES) for O2 on Cu (111) is of key importance for assessing dissociative phenomena. However, the surface-doped elements are also an important factor affecting the dissociation of adsorbates. The doping atom can penetrate the substrate actively, change the surface activity and affect the dissociation of adsorbates on the surface [11][12][13]. Some metals, such as Al, Ag and W, are common elements in electrical engineering materials, and their presence has an important effect on the performance of Cu wires. Therefore, studying the microscopic process of O2 dissociation in the presence of doping atoms on the Cu surface will directly help in understanding the oxidation and protection mechanisms of Cu-based products.
Computational Details
First principles calculations based on density functional theory (DFT) 14,15 were used to investigate O2 adsorption and dissociation on the Al, Ag and W doped Cu (111) surfaces in the CASTEP code 16. The ultrasoft pseudopotential 17,18 was used to describe the electron-ion interaction for all atoms. The generalized gradient approximation with the Perdew-Wang 1991 functional (GGA-PW91) 19,20 was used to describe exchange and correlation. In our calculations, a cutoff energy of 500 eV was employed for the plane wave expansions. The Brillouin zone was sampled using a 10 × 10 × 10 Monkhorst-Pack k-point grid 21 for the Cu bulk lattice constant. The energy difference, the residual force, the maximum stress and the displacement of each atom were set to 10^-5 eV/atom, 0.03 eV/Å, 0.05 GPa and 0.001 Å, respectively, during the optimization to ensure the convergence of the self-consistent field calculations 22. The climbing image nudged elastic band (CI-NEB) method 23 was used to estimate the transition state and final state structures. In order to check the validity of our methodology, the lattice parameter of bulk Cu, which has a three-dimensional primitive cell with fcc symmetry, was computed. Our calculated value (3.491 Å) differs by less than 3.5% from the experimental value of 3.615 Å 24 and the theoretical value of 3.607 Å 25.
The surface was modelled using a supercell approach with periodic boundary conditions. A vacuum layer of 20 Å was found to be sufficient to prevent interactions between the periodic images. We used slabs consisting of 7 layers, relaxed the first 3 layers (including the doping atoms) and kept the bottom 4 layers fixed. Using a 3 × 3 × 1 Monkhorst-Pack k-point grid, the surface structural relaxations and the total energy calculations were performed. In addition, a p(3×3) supercell was used to study the properties of O2 adsorption and dissociation on the pristine/doped Cu (111) surface, as shown in Figure 1. The adsorption energy (E_ads) of the adsorbate, the average binding energy (E_b) and surface energy (γ) of the Cu surface, and the doping energy (E_im) of the doping atom were calculated; these four kinds of energies are defined in Equations 1-4.
Figure 1: The green, bronze and blue particles denote the first, second and third layers of Cu atoms; the oxygen and doping atoms are shown in red and yellow, respectively.
Here, E_total is the total energy of the surface system, E_slab is the energy of the pristine/doped Cu surface, E_slab^Cu is the energy of the Cu surface with the doping atom removed, and n and m denote the numbers of Cu and metal atoms, respectively. E_Cu and E_m denote the atomic energies of Cu and the metal atoms, respectively. E_O2 is the energy of the isolated O2 molecule in the gas phase, while E_b^Cu and E_b^m are the bulk energies of the fcc-type crystals of Cu and the metals, respectively. The energy barrier (E_B) and reaction energy (E_R) are defined in Equations 5 and 6, where E_IS, E_TS, and E_FS are the energies of the initial state, transition state, and final state of the configuration, respectively.
1. Al, Ag, W doped Cu (111) surface
The effect of the doping atoms (Al, Ag, W) on the surface binding and stability is studied first. We replace one Cu atom of the first layer with a metal atom to obtain an M-doped (M = Al, Ag, W) Cu (111) surface. We optimized the stoichiometric geometries of the M-doped Cu (111) surfaces until the external and internal degrees of freedom relaxed and the forces and stresses vanished, as in the bulk calculations. The lattice constants change after the metal atoms are doped into the Cu (111) surface. The relaxed structures of the M-doped Cu (111) surfaces are shown in Figure 2. All the doping atoms can combine stably with Cu atoms, while being slightly embedded in the surface to a certain depth. The new lattice constants for the M-doped Cu surfaces are 7.617 Å, 7.632 Å and 7.638 Å, respectively, all slightly larger than that of the pristine Cu (111) surface (7.611 Å) because of the larger radii of the doping atoms. The average Cu-Cu bond length in the first layer of the pristine surface is 2.537 Å, while the average Al-Cu, Ag-Cu and W-Cu bond lengths in the first layer of the doped surfaces are 2.543 Å, 2.582 Å and 2.554 Å, respectively, which means that the doping atoms slightly push the surrounding Cu atoms away from them. This repulsion effect enhances the bonding between surface atoms and improves the energy stability of the doped Cu surfaces. The doping energy (E_im), binding energy (E_b) and surface energy (γ) of the pristine/doped Cu surfaces are listed in Table 1. It is clearly seen that all the doping atoms can combine stably with Cu atoms, owing to the negative values of E_im. The W-doped surface has the lowest E_im, -7.614 eV, lower than the Al-doped and Ag-doped surfaces, because the W atom has more valence electrons that can be transferred to and from the Cu atoms.
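Equations 1-6 are referenced above but are not reproduced in this text. As a hedged sketch, the Python functions below collect standard DFT conventions that are consistent with the stated variable definitions; the authors' exact expressions (in particular for the doping energy) may differ.

```python
# Sketch of plausible energy definitions; all formulas are assumptions based on
# common DFT practice, not verified against the paper's Equations 1-6.
def adsorption_energy(E_total, E_slab, E_O2):
    """E_ads: adsorbed system minus clean surface and gas-phase O2 (negative = stable)."""
    return E_total - E_slab - E_O2

def binding_energy(E_slab, n, m, E_Cu, E_M):
    """E_b per atom: slab energy minus the isolated-atom energies of n Cu and m metal atoms."""
    return (E_slab - n * E_Cu - m * E_M) / (n + m)

def surface_energy(E_slab, n, m, Eb_Cu, Eb_M, area):
    """gamma: excess slab energy over bulk references, per unit area (two exposed faces)."""
    return (E_slab - n * Eb_Cu - m * Eb_M) / (2.0 * area)

def doping_energy(E_slab, E_slab_Cu, E_M):
    """E_im: doped slab minus the slab with the doping atom removed and the isolated doping atom
    (negative = the doping atom binds stably to the Cu surface); one plausible form only."""
    return E_slab - E_slab_Cu - E_M

def barrier_and_reaction_energy(E_IS, E_TS, E_FS):
    """E_B = E_TS - E_IS and E_R = E_FS - E_IS along the dissociation path."""
    return E_TS - E_IS, E_FS - E_IS
```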
Then we compared and analyzed the changes of E_b and γ between the doped surfaces and the pristine surface. In this paper, E_b reflects the strength of the bonding between atoms in the slab. The E_b of the pristine Cu surface is -3.628 eV, lower than the E_b of the Al-doped and W-doped surfaces, so the binding energy data confirm that the Al and W atoms strengthen the bonding between the atoms in the Cu (111) slab. On the Ag-doped surface, however, the result is exactly the opposite. In addition, the γ of the pristine Cu surface (3.6451 J/m2) is lower than that of the Al-doped and W-doped surfaces. That means that the Al and W atoms improve the surface activity of the Cu surface, which may be more favorable for the adsorption of small molecules. In contrast, the Ag-doped surface has a lower γ than the pristine Cu surface. Based on the above results, the Ag atom reduces the surface activity of Cu and inhibits the reaction between the Cu surface and oxygen molecules.
O2 adsorption on pristine/doped Cu (111) surface
It is known that O2 prefers to adsorb at the top site of metal surfaces. In this work, there is one site for O2 adsorption on the pristine Cu (111) surface and two sites for O2 adsorption on the doped surfaces. Figure 3a shows the optimized O2 adsorption on the pristine Cu surface; Figure 3b shows the optimized O2 adsorption on the top site of the doped Cu surfaces (TOP-type), named Al_TOP, Ag_TOP and W_TOP, respectively; and Figure 3c shows the optimized O2 adsorption on the first nearest-neighbor top site of the doped Cu surfaces (1NN-type), named Al_1NN, Ag_1NN and W_1NN, respectively. For all kinds of doped surfaces, it should be pointed out that the O2 molecules prefer to adsorb parallel to the TOP-type surfaces, whereas they adsorb with a small tilt angle on the 1NN-type surfaces. This difference is mainly caused by the different atomic radii and the different surface flatness under the adsorbate. We calculated the adsorption energy (E_ads) of O2 on the pristine/doped surfaces to obtain the most stable adsorption configurations. As shown in Table 1, for the M-doped surfaces, we found that most TOP-type surfaces are more stable than the pristine surface for O2 adsorption. The E_ads of the Al_TOP, W_TOP and W_1NN surfaces are -1.011 eV, -2.175 eV and -1.609 eV, which are still much lower than that of the pristine surface (-0.589 eV). The reason is that the distance between the adsorbate and the top-site Cu is smaller than those of the top-site Al and W atoms. In other words, the Al and W atoms embedded in the surface increase the internal binding of the system and weaken the adsorption of the adsorbate. In addition, the O atom prefers to adsorb on top of the doping atom rather than on the top site of a Cu atom, because the 1NN-type surfaces have higher values of E_ads than the TOP-type surfaces. However, the results for the Ag-doped surfaces are opposite to those of the other surfaces. Because of the inhibition effect of the Ag atoms, the E_ads on both types of Ag-doped surfaces is lower than that of the pristine surface. Besides that, we also computed the E_im, E_b and γ of O2 adsorption on the M-doped Cu surfaces; the data are shown in Table 1. After O2 adsorption, the values of E_im and E_b of the pristine/doped surfaces are lower than before, except for the Ag-doped surfaces, while the values of γ of the pristine/doped surfaces are higher than those of the clean surfaces, except for the W-doped surfaces. Our interpretation is that the adsorption of O2 causes the doping atoms to be embedded further into the surface, which increases the doping energy.
This conclusion is consistent with the result for the adsorption energy mentioned above. As the doping atoms continue to penetrate into the Cu surface, the space between the Cu atoms is reduced, thereby increasing the binding energy inside the material. The increase in surface energy is also due to the presence of the adsorbates. However, since the radius of the Ag atom is much larger than that of the Cu atom, the obvious repulsion effect weakens the bonding between surface atoms so that it no longer follows the above rules. To further explore the interaction between the substrate, the doping atom and the adsorbate, we examined the electron density of states (DOS) of the O2-adsorbed pristine/doped Cu surfaces. As shown in Figure 4, we plotted the DOS of the doping atom, the two oxygen atoms and two kinds of Cu atoms: one is the Cu atom just below the adsorbed O atoms, and the other is the nearest neighbor to both the doping atom and the adsorbed O atom. We can see that the contribution of the Cu-d electrons to the DOS is obviously large, although the Cu-s and Cu-p electrons hybridize and make small contributions in the energy range of -10 eV to +10 eV. Among them, the number of d-electrons of the Cu atoms on the pristine surface is higher than that on the O-adsorbed surfaces. In other words, the Cu-d orbital electrons in the former are concentrated in the high energy region (-5 eV to 0 eV), while the O-p orbitals in the latter are spread over a wider energy range of -7 eV to +2 eV. At the same time, the O-s and O-p orbital electrons have obvious peaks in the energy range of -7 eV to +10 eV, which indicates that after O adsorption an O-Cu interaction occurs and some electrons are transferred from the Cu atoms to the O atoms. This in turn causes the remaining electrons of the Cu atoms to move to lower energy levels and improves the bonding between the inner atoms of the Cu surface, which is finally reflected in the value of the binding energy. Moreover, the O atoms and the doping atoms contribute significantly to the DOS, especially the O-p orbital electrons and the d orbital electrons of the doping atoms. Due to the different electronegativities of the three metals Al, Ag and W, the hybridization patterns of the electron orbitals on the three doped surfaces are different. The Al-p orbital peaks and O-p orbital peaks overlap over a wide energy range, but there are almost no hybridization peaks between the Cu-d and Al-p orbitals. The hybridization of the peaks of the Ag-d and O-p orbitals is affected by the O adsorption sites and appears as overlap at high and low energy levels, respectively. Of the three doping atoms, the Ag-d orbital has the highest peak at low energy levels. The W-d and O-p orbitals show broad hybridized peaks only at high energy levels, but the number of electrons in these overlapping peaks is the smallest of the three doped metals. The analysis of the electronic structure shows that the differences in the energies calculated in Table 1 are not only influenced by the orbital hybridization between the adsorbate and the surface atoms, but are also related to the electronegativity and the orbital energy levels of the doping atoms. In one sense, our results provide fundamental insights into the dissociative adsorption of O2 on these M-doped Cu surfaces.
O2 dissociation on pristine/doped Cu (111) surface
In this section, we mainly investigate the O2 dissociative processes (O2 → O + O) on the pristine/doped Cu surfaces.
We computed the initial state (IS) and final state (FS) in order to obtain the potential energy surface (PES). Then we obtained the intermediate states using a linear interpolation. Finally, we used the same convergence criteria for the forces and energies as in the rest of the simulation process. The energy barrier (E_B), reaction energy (E_R) and structural parameters of O2 are listed in Figure 5 and Table 2. It is seen that the degree of O2 dissociation is related to the doping atoms on the Cu surfaces. This result is similar to our previous calculations 11,12, which showed that pre-covered atoms affect the subsequent H2O dissociation. The E_B is 0.405 eV on the pristine Cu surface. Except for the W_TOP surface, this value is lower than the E_B on the other M-doped surfaces. It shows that the doping atoms can resist the dissociation of O2. Studying the data further, we found that the E_B on the Al_TOP, Ag_1NN and W_1NN surfaces is 0.505 eV, 0.905 eV and 0.814 eV, respectively, which is much larger than for the other surface models. This seems to indicate that the dissociation of O2 adsorbed within a certain range around the doping atoms is effectively suppressed. However, by analyzing the reaction energy, we found that the dissociation reaction of O2 on the surfaces is not only reflected in the barrier energy; the reaction energy can also be an important index. The E_R values are -5.355 eV, -6.426 eV, -5.771 eV, -6.113 eV, -5.386 eV, -7.191 eV and -5.356 eV for the pristine, Al_TOP, Al_1NN, Ag_TOP, Ag_1NN, W_TOP and W_1NN surfaces, respectively. The results show that O2 can be dissociated more easily on the M_1NN surfaces than on the M_TOP surfaces. We found that the reaction energy of O2 on the pristine Cu surface is larger than that of all the doped surfaces, and the E_R of O2 at the M_TOP surfaces is also low. Based on the previous results, we believe that the doping Ag atoms actually resist the dissociation of O2. The bond length results (L_O-O) also support our inference, because the values of L_O-O for the Ag-doped surfaces are smaller than that on the pristine Cu surface. Therefore, Ag atoms have the best resistance effect against O2 dissociation compared with Al and W atoms, because of the large barrier energy and reaction energy.
Conclusions
The O2 adsorption and dissociation on the pristine/doped Cu (111) surfaces were studied by density functional theory. The Al, Ag and W atoms were doped into the surface to study the interaction between O2 and the doped surfaces. All the doped atoms can combine stably with Cu atoms, while being slightly embedded in the surface to a certain depth. The E_b of the pristine Cu surface is lower than that of the Al-doped and W-doped surfaces, so the binding energy data confirm that the Al and W atoms (but not the Ag doping atom) strengthen the bonding between the atoms in the Cu (111) slab. The O atom prefers to adsorb on top of the doping atom rather than on the top site of a Cu atom. However, the Ag atom reduces the surface activity of Cu and inhibits the reaction between the Cu surface and oxygen molecules. Due to the different electronegativities of the three metals Al, Ag and W, the hybridization patterns in the DOS of the O2-adsorbed pristine/doped Cu surfaces are different. After O adsorption, an O-Cu interaction occurs and some electrons are transferred from the Cu atoms to the O atoms, but the doping atoms can resist the dissociation of O2. Moreover, we computed the initial state and final state in order to obtain the PES.
The results show that the dissociation reaction of O2 on the surfaces is not only reflected in the barrier energy; the reaction energy is also an important index. Ag atoms have the best resistance effect against O2 dissociation compared with Al and W atoms because of the large barrier energy and reaction energy.
5,185.6
2021-01-01T00:00:00.000
[ "Materials Science", "Chemistry" ]
Comparison of CIV, SIV and AIV using Decision Tree and SVM
The H3N2 canine influenza virus has numerous types of animal hosts on which it can live and reproduce. It mostly settles on pigs and birds. However, concerns are rising that there is a high possibility that humans could become an additional victim of the canine flu. Consequently, our project group expects that information about the H3N2's DNA is valuable, since this information could contribute to the development of vaccines and medicines. We analysed the properties of CIV, the Canine Influenza Virus, in comparison with SIV, the Swine Influenza Virus, and AIV, the Avian Influenza Virus, using a decision tree and an SVM, a Support Vector Machine. The results show that CIV, SIV and AIV are alike but also differ in some aspects.
Introduction
Recently, various types of influenzas have broken out worldwide. The influenzas can be thought of as similar to each other, but they can also be thought of as the origins of separate diseases. If the influenzas are ranked by their impact on humans in order to seek effective treatments, the three viruses AIV, SIV and CIV would probably rank highest among them all. In our experiment, the sample of viruses was composed of Env, Nef, Tat and Vif, each originating from the human, swine and avian influenza viruses. The assortments of the isolates imply that a gene classification exists. [1] A specific investigation of the gene assortments and of the 8 particular gene segments and DNA sequences among H3N2/AIV/SIV/CIV is the core of this report in terms of finding a treatment for H3N2 CIV. A remedy for H3N2 AIV and SIV could serve as a potential cure for H3N2 CIV. The 'SVM' and 'decision tree' algorithms are used to analyse the resulting DNA sequences and the similarities of the 3 viruses. The common features of the 3 viruses include the fact that swine, avian and human cells are possible hosts or incubators of the virus, and that restricting the virus from being cloned could result in the destruction of the virus's abilities.
Swine Influenza virus
The H3N2 Swine Influenza Virus (SIV) originates from the H1N1 Influenza A virus. It produces fever, lethargy, sneezing, coughing, difficulty breathing and decreased appetite in pigs. [2] In humans, on the other hand, the symptoms of the 2009 "swine flu" H1N1 virus are similar to those of influenza-like illness in general. Symptoms include fever, cough, sore throat, body aches, headache, chills and fatigue. [3] The 2009 outbreak also showed an increased percentage of patients reporting diarrhoea and vomiting.
Canine Influenza virus
Canine Influenza Virus (CIV) is well known as one of the most highly pathogenic subtypes of the influenza A virus. Symptoms vary from mild to harsh, including a cough which lasts for approximately 30 days, possibly a nasal discharge, high fever and, in extreme cases, pneumonia. [4] A vaccine for this virus was developed in 2009, but its effectiveness and safety have not yet been verified. [5]
Avian Influenza virus
Avian Influenza Virus (AIV) is an influenza caused by viruses adapted to birds, and it is traced back to its original form, the H5N1 AIV, which is noted for its lethality. Symptoms include fever, cough, sore throat, body aches, headache, chills and fatigue. [6]
Decision Tree
A decision tree is a tree-like graph or model composed of nodes and branches. This decision support tool is widely used in statistical analysis for the classification of categorical inputs.
[7] Several internal nodes contain questions associated with data items, and branches divide the original source into subsets. The following branches represent the outcomes of tests, and leaf nodes are directed to a class node. [8]
2.2.2 Support vector machine
An SVM (Support Vector Machine) is a supervised learning model for data analysis and is used for classification and regression analysis. When a group of data is given, the SVM training algorithm builds a model that decides which category new data belong to, based on the existing data. The SVM model is represented in space as the widest gap forming the border between the separated categories. [9] New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on. SVMs are used not only for linear classification but also for non-linear classification. For an SVM to perform a non-linear classification efficiently, the kernel trick is needed to implicitly map the inputs into high-dimensional feature spaces.
Nef protein
The Nef (Negative Regulatory Factor) protein is a 27-35 kDa protein. The protein is encoded by primate lentiviruses, which include AIV-1, AIV-2 and SIV. Nef promotes T-cell activation and the establishment of two basic attributes of AIV infections. Nef regulates the cell surface expression of CD4 and Lck. [10] Enrichment of active Lck induces the production of Interleukin 2, which activates the growth, differentiation and proliferation of T-cells. Differentiated T-cells create a new population of cells that AIV-1 can infect. In short, the Nef protein, the "Negative Factor", stands for the manipulation of the host's cellular machinery for AIV-1 replication.
Env protein
Env (Envelope Protein) is a protein that forms the viral envelope. The expression of the env gene enables retroviruses to attach to specific target cells and to infiltrate the cell membrane. The env gene codes for the gp160 protein, which is cleaved into gp120 and gp41 by Furin. The glycoprotein gp120 binds to the CD4 receptor on the target cell, including the helper T-cell. The replication cycle of AIV is related to the env gene, and gp120 has been the subject of AIV vaccine research because CD4 receptor binding is an important step in AIV infection. In addition, the glycoprotein gp41, which is bound to gp120, enables AIV to enter the cell in another step of AIV infection.
Tat protein
In molecular biology, Tat (Trans-Activator of Transcription) is a protein that is encoded by the tat gene in AIV-1. Tat is a regulatory protein that drastically enhances the efficiency of viral transcription from the LTR promoter and replication. Tat has an unusual property for a transcription factor: it can be released and enter cells freely, yet still retain its activity, enabling it to up-regulate a number of genes.
Vif protein
Vif (Viral infectivity factor) is a protein found in AIV and other retroviruses and is essential for viral replication. It targets the human enzyme APOBEC (a cytidine deaminase that mutates viral nucleic acids and would otherwise cause hypermutation of the viral genome, rendering it dead on arrival at the next host cell) for ubiquitination and cellular degradation. [11] As a result, it disrupts this antiviral activity. Thus APOBEC plays a key role in defending against retroviruses, a defense which AIV-1 has overcome by acquiring vif. [12]
3 Method
Decision Tree
We used a decision tree to analyse the overlapping rules among AIV, SIV, and CIV.
Dividing the DNA sequences into 10 parts, we tested the particular protein sequences of env, nef, tat, and vif with window sizes of 5, 7 and 9, using 10-fold cross validation. Only sequences with a confidence above 75% were considered 'valid'.
SVM
We used the RBF, polynomial, normal (linear) and sigmoid kernel functions in this experiment. The X-axis of the graph is the kind of function we used, and the Y-axis is the accuracy. We ran the experiment for each window and calculated the average. To test the SVM's validity, we compare the accuracy rates: the higher the accuracy, the greater the similarity. [13]
4 Result
Decision Tree
The decision tree was used to analyse common rules between the DNA sequences of AIV, SIV and CIV. The DNA sequences were divided into 10 subsets and the experiment was performed with 10-fold cross validation for the 5-window, 7-window and 9-window rules of specific proteins: ENV, NEF, TAT and VIF. Among all the extracted rules, only those with a frequency of over 0.75 were considered valuable. [14] In the experiment, AIV refers to class 1, SIV refers to class 2 and class 3 represents CIV. In interpreting the results, a particular amino acid tends to be repeatedly observed at a certain position. In the protein ENV, AIV seemed to have the most powerful impact among the three viruses in the classification. In the proteins NEF and TAT, CIV was dominant under the rules. On the other hand, SIV and CIV were the main subjects of classification in the protein VIF, so the influence of AIV was small there. [15] In Table 1, the result shows that AIV is considerably mixed with SIV and CIV. In addition, class 2 and class 3 overlapped common sequences with the other classes, so the DNA sequences of the three viruses are quite evenly mixed. In Table 2, AIV showed a high tendency to follow the rules of class 1, while SIV was almost equally combined with the other classes. On the other hand, CIV referred to class 2 in a relatively high percentage. In Table 3, AIV, SIV and CIV have the common amino acid Q in position 7, and the three viruses are mixed equally. In Table 4, we observed considerably combined DNA sequences among AIV, SIV and CIV. In Table 5, under a certain combination of the three viruses, AIV had a high tendency towards class 3 and SIV was most likely to refer to class 2. In Table 6, the largest numbers of appearances were observed for pos 2=C in AIV, pos 2=F in SIV and pos 9=N in CIV. The percentages showed that the three viruses are evenly mixed in the protein NEF. In Table 7, AIV, SIV and CIV share a considerably similar amino acid sequence. The first table represents the representative rules of each class and their frequencies; the second table shows the percentages with which AIV, SIV and CIV refer to classes 1, 2 and 3 for the tat window. In Table 8, AIV is similar to SIV and SIV has amino acids similar to AIV, whereas CIV is relatively different from the others. In Table 9, AIV is similar to SIV and CIV, SIV is similar to AIV, and CIV is similar to AIV. In Table 10, AIV, SIV and CIV share a considerably similar amino acid sequence. In Table 11, AIV is different from the others; SIV and CIV also differ, but not clearly. In Table 12, AIV is similar to SIV, and SIV and CIV are similar to each other as well as to AIV.
SVM
The experiments were performed for each protein with 5-window, 7-window and 9-window settings. For each result, we calculated the mean value of the data and drew a bar graph with the relative values to enhance visual recognition. [16] In Fig. 1, the average value for the RBF kernel is evidently lower than that for the normal (linear), polynomial and sigmoid kernels.
This result suggests that the relationship among the three viruses, SIV, CIV and AIV, is nonlinear, so the three appear to be quite mixed.
Conclusion
This paper applied the approach of comparing the properties of CIV, SIV and AIV. In order to find out the genetic information of AIV, the 4 kinds of proteins, nef, tat, vif and env, were the subjects of the experiments. A decision tree and an SVM were used in the experiments. According to the rules extracted from the decision tree, the three viruses are evenly mixed in a considerable number of DNA sequences. From the SVM, we realized that the relationship among SIV, AIV and CIV is nonlinear. The similarities and differences discovered through our experiment may support other studies on the treatment of not only CIV but also SIV and AIV. We hope this study will be extended to other fields, both in its methodology and in its content.
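As a rough, hypothetical illustration of the pipeline described in the Method section (sliding windows over protein sequences, a decision tree, and SVMs with the four kernels, all evaluated by 10-fold cross validation), the sketch below uses scikit-learn on toy random sequences; the one-hot window encoding and the reading of the "normal function" as a linear kernel are assumptions, not the authors' exact setup.

```python
# Sketch: window the sequences, encode amino acids, and compare classifiers.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def windows(seq, label, size):
    """Return one-hot encoded windows of `size` residues and their class label."""
    X, y = [], []
    for i in range(len(seq) - size + 1):
        vec = np.zeros(size * len(AMINO))
        for j, aa in enumerate(seq[i:i + size]):
            vec[j * len(AMINO) + AMINO.index(aa)] = 1.0
        X.append(vec)
        y.append(label)
    return X, y

# Toy sequences standing in for AIV (class 1), SIV (class 2) and CIV (class 3).
rng = np.random.default_rng(4)
data = [("".join(rng.choice(list(AMINO), 60)), cls) for cls in (1, 2, 3) for _ in range(3)]

X, y = [], []
for seq, cls in data:
    xs, ys = windows(seq, cls, size=7)
    X.extend(xs); y.extend(ys)
X, y = np.array(X), np.array(y)

models = {"decision tree": DecisionTreeClassifier(max_depth=5)}
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    models[f"SVM ({kernel})"] = SVC(kernel=kernel)
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=10).mean())
```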
2,640.4
2016-01-01T00:00:00.000
[ "Biology" ]
LAME: Layout-Aware Metadata Extraction Approach for Research Articles
Introduction
With the development of science and technology, the number of academic papers distributed periodically worldwide has reached more than several hundred thousand. However, their layout styles are as diverse as their subjects and publishers, although the portable document format (PDF) is widely used globally as a standardized text-based document provision format. For example, the information order is inconsistent when converting such a document to text because no layout information separating the document content is provided. Thus, extracting meaningful information, such as metadata including the title, author names, affiliations, abstract, and keywords, from a document is quite challenging. Research on extracting metadata or document objects from PDF documents using machine learning has increased [1][2][3][4][5][6][7]. From the natural language processing (NLP) perspective, open-source software such as Content ExtRactor and MINEr (CERMINE) [4] and GeneRation of Bibliographic Data (GROBID) [5] automatically extracts metadata using the sequential labeling technique but generally does not take the layouts into account in detail. Therefore, such tools do not show reasonable metadata extraction performance for every research article due to the diverse (and sometimes bizarre) layout formats. Unlike existing NLP-based metadata extraction approaches, PubLayNet [1], LayoutLM [7], and DocBank [3] employ object detection models, such as the Mask region-based convolutional neural network (Mask R-CNN) [8] and Faster R-CNN [9], to detect the layout of academic literature and extract document objects (e.g., the text, figures, tables, titles, and lists). Their critical weakness is the low layout analysis quality for unseen journals and different document types. For example, when we apply the PubLayNet model trained with Detectron2 [1] to the first page of a Korean academic journal, it cannot capture the correct regions of document objects, as depicted in Fig. 1. In terms of training data and its coverage, PubLayNet and LayoutLM automatically construct the training data using the metadata provided by PubMed Central Open Access-eXtensible Markup Language (PMCOA-XML) or LaTeX. Nevertheless, these are primarily for extracting figures and tables; they do not cover all the necessary metadata, such as the abstract, author names, keywords, or other data [1]. Moreover, to the best of our knowledge, the PMCOA-XML data of publications are limited to biomedical journals, and only small amounts of LaTeX data are available in the public domain. Recently, some training data for metadata extraction with consideration of the layout of 40 selected Korean scientific journals were manually crafted [6]. However, the quality of these layout-aware data is not satisfactory due to inconsistent and noisy annotations. To guarantee consistent annotation quality in constructing layout-aware training data and to build a more sophisticated language model for advanced metadata extraction, we propose a LAyout-aware MEtadata extraction (LAME) framework composed of three key components. First, an automatic layout analysis for metadata is designed with PDFMiner. Second, a large amount of layout-aware metadata is automatically constructed by analyzing the first page of papers in selected journals. Finally, Layout-aware Bidirectional Encoder Representations from Transformers for Metadata (Layout-MetaBERT) models are constructed by adopting the BERT architecture [10].
In addition, to show the effectiveness of the Layout-MetaBERT models, we performed a set of experiments with other existing pretrained models and compared them with the state-of-the-art (SOTA) model for metadata extraction (i.e., bidirectional gated recurrent units with a conditional random field, Bi-GRU-CRF). Our main contributions are as follows:
- We proposed an automatic layout analysis method that does not require PMCOA-XML (or LaTeX) data for metadata extraction.
- We automatically generated training data for layout-aware metadata from 70 research journals (65,007 PDF documents).
- We constructed a new pretrained language model, Layout-MetaBERT, to deal with the metadata of research articles more effectively.
- We demonstrated the effectiveness of Layout-MetaBERT on ten unseen research journals (13,300 PDF documents) with diverse layouts, compared with the existing SOTA model (i.e., Bi-GRU-CRF).
2 Related work
Metadata extraction
Various attempts have been made to analyze and extract information from documents and classify them into specific categories. Studies on text classification have been continuous since 1990, and the performance of text classification has gradually improved with the employment of sophisticated machine learning algorithms, such as the support vector machine (SVM) [11], conditional random fields (CRF) [12], the convolutional neural network (CNN) [13], and bidirectional long short-term memory (BiLSTM) [14]. Afterward, various successful cases using bidirectional encoder representations from transformers (BERT) [10] pretrained on a large-scale corpus were introduced in the field of NLP. In the studies by [15] and [16], the pretrained BERT model was fine-tuned on the text classification task, and it showed results close to or superior to the SOTA results for the target data. The use of BERT-based pretrained models became popular due to their high performance in various NLP fields, and more advanced pretrained models [17][18][19] were introduced according to various research purposes. As the previous SOTA model for our metadata extraction task, a Bi-GRU-CRF model trained on more than 20,000 human-annotated pages of layout boxes for metadata [6] from research articles achieved an F1-score of 82.46%. However, accurately detecting and extracting regions for each type of metadata in documents is still a nontrivial task because of the various layout formats.
Document layout analysis
Document layout analysis (DLA) [7] and several PDF handling efforts [6], [11], [20] have been conducted to understand the structure of documents. DLA aims to identify the layout of text and non-text objects on the page and to detect the layout function and format. Recently, the LayoutLM model [7] employed three different information elements for BERT pretraining to identify layouts: 1) layout coordinates, 2) text extracted using optical character recognition software, and 3) image embeddings obtained by understanding the layout structure through image processing. Moreover, NLP-based DLA research on various web documents [21] and layout detection and layout creation methods to find text information and locations [8], [22], [23] have been studied. [2] and [24] applied the object detection technique to text region detection. Interestingly, widely used object detection techniques (e.g., Mask R-CNN [8] and Faster R-CNN [9]) have been applied to the metadata extraction field [1], [3].
Due to the high cost of training data construction for DLA, many studies have attempted to build datasets automatically. For example, the PubMed Central website, which includes academic documents in the biomedical field, provides a PMCOA-XML file for each document, enabling an analysis of the document structure. In the case of PubLayNet [1], which utilizes the PubMed dataset, the XML and PDFMiner's TextBoxes were matched to construct about 1 million training samples. However, this is generally possible only when accurate coordinates are provided to separate each layout together with the text information elements for each.
Automatic layout analysis
To understand the layout that separates each metadata element in a given PDF file, we must observe the text and coordinate information on the document's first page. To this end, we employ the open-source software PDFMiner to extract meaningful information surrounding the text in the PDF files. If we parse a PDF document with the software, we obtain information on the page, TextBox, TextLine, and character (Char) hierarchically, as illustrated in Fig. 3. These include various text information elements, such as the coordinates, text, font size, and font for each object. For example, text coordinates appear in the form of (x, y) coordinates along with the height and width of the page.
Textbox reconstruction
To reduce existing errors in TextBox recognition, as depicted in Fig. 3, TextBoxes were reconstructed starting from the Char unit with the information obtained from PDFMiner. First, the spacing between characters is analyzed using the coordinate information for each Char. Generally, each token's x-coordinate distance (character spacing) appears the same, but the distance differs slightly depending on the alignment method or language. Therefore, after collecting the characters with the same y-coordinate, the corresponding characters were sorted based on the x-coordinate value. As displayed in Fig. 4, if the distance between Chars is smaller than the font size of the Chars, the Chars are determined to be part of the same TextLine in an academic document consisting of two columns. After aligning the TextLines based on the y-axis, if the distance between the y-coordinates is smaller than the height of each TextLine, the two different TextLines are regarded as the same TextBox. However, this method cannot accurately create a TextBox by separating paragraphs from paragraphs. For a more elaborate TextBox composition, we need to decide whether to merge lines into a TextBox by considering the left x-coordinate (x0), the right x-coordinate (x1), and the width (W) of each TextLine. For example, for sentences like those in Fig. 5, we can think of two cases when composing a TextBox by comparing consecutive TextLines Li-1 and Li. First, the beginning of a paragraph is usually indented. Therefore, if the difference between the x0 values of Li and Li-1 is greater than the font size of the Chars in each TextLine, the two TextLines should be assigned to different TextBoxes. Second, a TextLine that appears at the end of a paragraph has a shorter width because it has fewer Chars on average. Therefore, when the width of Li-1 is smaller than the width of Li, Li-1 and Li should be assigned to different TextBoxes.
Refinement with font information
PDFMiner can produce various pieces of font information, such as the font name and style (e.g., bold, italic, etc.), as listed in Table 1.
However, as in Fig. 6, English is frequently used in Korean abstracts in some journals published in Korea. In particular, abstracts written in Korean and English appear together on the first page of some research articles. In addition, certain strings are often treated as bold or italic and often have different fonts and sizes, such as section titles. Considering this problem, when composing a TextBox using the coordinate information described above, if the font information displayed on each line is different, the lines are not simply judged to be different. After analyzing the font information of the different languages that appear, the TextBox is determined by considering the number of appearing fonts (e.g., bold and italic).
Table 1: Example of font information when PDFMiner is applied
Although font information helps the layout composition, it is still confusing when the same font information is used for marking individual information or for bold emphasis across different metadata. Additional processing is required to correctly connect individual fonts to make a layout using the font information. Therefore, we compared only texts described in Korean and English and used only the fonts of the same language to determine the layout.
Adjustment of text box order
Academic papers may consist of one or two columns depending on the format of each journal. In some cases, only the main body consists of two columns, and the title, abstract, and author names are displayed in one column. For example, in Fig. 3, information such as the title and author names is arranged in the center, but the document object identifier (DOI) information or the academic journal name appears separately on the left and right sides. To identify metadata consistently from varied layout formats, we sorted the textboxes extracted from the first pages of the research articles sequentially from top to bottom based on the y-axis.
Automatic training data construction
We compared the content extracted with PDFMiner against metadata prepared in advance to construct the layout-aware metadata automatically. If no metadata are available for a given research article, metadata can be automatically obtained through a DOI lookup. Therefore, this technique can be extended to all journal types for which a registered DOI exists. However, the compared textual content is not always precisely matched. Therefore, to determine the extent of the match, we allowed only fields with almost identical (or highly similar) matches for each layout text information element automatically acquired in the previous step as training data. For efficient computation, we used a mixed textual-similarity measure based on the Levenshtein distance and the bilingual evaluation understudy (BLEU) score. The Levenshtein distance was calculated using Python's fuzzywuzzy 1. The scores calculated using the BLEU [25] measure were combined to determine whether the given metadata shows a degree of agreement of 80% or more. Nevertheless, some post-processing is required in this process. In analyzing the extracted text, some problems occur when dealing with expression substitutions (e.g., "<TEX>," cid:0000). To avoid these errors, we removed mathematical expressions as much as possible and excluded text with encoding problems.
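A minimal sketch of the matching step just described, assuming an equal-weight combination of the fuzzywuzzy ratio and a BLEU score; the paper only states that the two measures are mixed and that an 80% agreement threshold is applied, so the weighting and smoothing choices below are illustrative.

```python
# Sketch: accept a layout box as a metadata field only when the mixed score reaches 80%.
from fuzzywuzzy import fuzz
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def mixed_similarity(extracted, reference):
    lev = fuzz.ratio(extracted, reference) / 100.0          # Levenshtein-based ratio, 0..1
    bleu = sentence_bleu([reference.split()], extracted.split(),
                         smoothing_function=SmoothingFunction().method1)
    return 0.5 * lev + 0.5 * bleu                            # assumed 50/50 weighting

def is_match(extracted, reference, threshold=0.8):
    return mixed_similarity(extracted, reference) >= threshold

print(is_match("Layout-Aware Metadata Extraction for Research Articles",
               "LAME: Layout-Aware Metadata Extraction Approach for Research Articles"))
```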
Pretraining Layout-MetaBERT
Although pretraining a BERT model requires a large corpus and a long training time, the fine-tuning step can make a difference in performance depending on the characteristics of the data used for pretraining. For example, when pretrained with specific domain data, models such as SciBERT [19] and BioBERT [26] performed better than Google's BERT model [10] in downstream tasks in the science and technology or medical fields. However, to the best of our knowledge, there is no pretrained model designed to extract metadata based on research article data. Therefore, we newly pretrained a layout-aware language model, the so-called Layout-MetaBERT, that can effectively deal with metadata from research articles. Fig. 7 describes how the previously constructed training data are used for pretraining and fine-tuning Layout-MetaBERT. Different from the Google BERT model [10], in our Layout-MetaBERT pretraining each document layout was considered a sequence. Thus, each layout was delimited by the [SEP] token to prepare the training data used for pretraining. In pretraining the Layout-MetaBERT models, we followed the three model sizes of Google BERT: base (L = 12, H = 768, A = 12), small (L = 4, H = 512, A = 8), and tiny (L = 2, H = 128, A = 2), where L is the number of transformer blocks, H is the hidden size, and A is the number of self-attention heads. We used a dictionary of 10,000 words built through the WordPiece mechanism and the automatically generated training data extracted from the first pages of 60 of the 70 research journals for the pretraining. The pretrained Layout-MetaBERT can be used for metadata extraction after fine-tuning.
Experiments
We summarize the results for three major components to examine the applicability of the LAME framework. First, we compare the results of the proposed automatic layout analysis with other layout analysis techniques. Second, we describe the statistics of the training data constructed according to the results of the automatic layout analysis. Finally, we compare the metadata extraction performance of our constructed Layout-MetaBERT models with other deep learning and machine learning techniques after fine-tuning.
Comparison with other layout analysis methods
No prepared correct answers exist for the target research articles; thus, we compared the layout boxes generated by PDFMiner, PubLayNet, and the proposed layout analysis method for two randomly selected documents (A and B), as depicted in Fig. 8. In Document A, the extraction results of PDFMiner and our layout analysis are similar. However, with PubLayNet, the information of the paragraphs is excessively separated, as indicated in Fig. 8-(a). For Document B, the extraction results of all techniques were somewhat similar, but PubLayNet displayed the author name and affiliation as one piece of information, and PDFMiner produced a separate box for the line-wrapped title. Through these comparisons, the proposed method could generate a good enough layout analysis for the first page of the research articles. Comparing all three layout analysis results manually for each layout box to calculate the accuracy requires too much human labor and is beyond the scope of this paper. The performance of the constructed Layout-MetaBERT indirectly measures the quality of the layout analysis.
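Referring back to the pretraining setup described at the start of this section, the following is a minimal, assumed sketch of how the ordered layout boxes of one first page could be serialized into a single pretraining sequence with [SEP] tokens between layouts; the special-token handling, truncation length, and example texts are placeholders, not the authors' exact configuration.

```python
# Sketch: join the top-to-bottom ordered layout boxes of a page with [SEP] tokens.
def layouts_to_pretraining_text(layout_boxes, max_chars=2000):
    """layout_boxes: list of text strings, one per layout box, already ordered top-to-bottom."""
    joined = " [SEP] ".join(box.strip() for box in layout_boxes if box.strip())
    return "[CLS] " + joined[:max_chars]

page = ["LAME: Layout-Aware Metadata Extraction Approach for Research Articles",
        "First Author, Second Author (placeholder author line)",
        "Abstract: Extracting metadata from PDF documents ... (placeholder)",
        "Keywords: metadata extraction, layout analysis (placeholder)"]
print(layouts_to_pretraining_text(page))
```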
Training data construction

To reflect various kinds of layout formats, we used 70 research journals (Appendix 1) provided by the Korea Institute of Science and Technology Information (KISTI) to extract major metadata elements, such as titles, author names, author affiliations, keywords, and abstracts in Korean and English, based on the automatic layout analysis in Section 3.1. Among the 70 journals, two were written only in Korean, 23 only in English, and 45 in both Korean and English. For each layout that separates metadata on the first page of the 70 journals (65,007 PDF documents), automatic labeling with ten labels was performed, and other layouts not containing the relevant information were labeled O. The statistics of the automatically generated training data are presented in Table 2.

Experimental results

To check the performance of the proposed Layout-MetaBERT, the 70 research journals (65,007 documents) were divided into 60 journals (51,676 documents) for pretraining (and fine-tuning) and 10 journals (13,331 documents) for testing. Table 3 lists the training and testing performances of the three Layout-MetaBERT models alongside widely used metadata extraction techniques. Finally, Table 4 gives the Macro-F1 and Micro-F1 scores for metadata classification in comparison with existing pretrained models.

Fine-tuning and Hyperparameters

In fine-tuning the various pretrained language models (the three different-sized Layout-MetaBERT models, KoALBERT, KoELECTRA, and KoBERT), all experiments were conducted under the same configuration: 5 epochs, a batch size of 32, a learning rate of 2e-5, and a maximum sequence length of 256. In addition, we used an Nvidia RTX Titan 4-way system and Google's TensorFlow framework with Python 3.6.9 for pretraining and fine-tuning.
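For reference, the shared fine-tuning settings reported above can be gathered in one place; the small container below is purely illustrative and not part of the LAME code.

```python
# The fine-tuning configuration reported in the paper (epochs, batch size,
# learning rate, maximum sequence length), collected into a dataclass.
from dataclasses import dataclass

@dataclass(frozen=True)
class FineTuneConfig:
    epochs: int = 5
    batch_size: int = 32
    learning_rate: float = 2e-5
    max_seq_length: int = 256

print(FineTuneConfig())
```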
Stable performances of Layout-MetaBERT

The proposed Layout-MetaBERT models can effectively extract metadata, as listed in Table 3. In particular, the Layout-MetaBERT models make significant differences compared to the existing SOTA (i.e., Bi-GRU-CRF) model. Even the tiny model, with the fewest parameters among the Layout-MetaBERT models, has higher performance than the other pretrained models in Macro-F1 and Micro-F1 scores, as displayed in Table 4. Moreover, the three Layout-MetaBERT models show only minor differences between their Micro-F1 and Macro-F1 scores compared to the other pretrained models. In addition, the Layout-MetaBERT models exhibit 90% or more robustness in metadata extraction, confirming that pretraining on layout units with the BERT scheme is feasible for the metadata extraction task.

Table 3 (excerpt), train and test F1 scores:
Bi-GRU-CRF [6] (without position): 0.8610 / 0.8912
Bi-GRU-CRF [6] (with position): 0.9442 / 0.0985
CNN [13]: 0.9425 / 0.824
SVM [11]: 0.9411 / 0.8114

Table 4: Metadata extraction performances of primary BERT models for each label

Experiments with position information

Unlike the other models, the Bi-GRU-CRF model used the absolute coordinates of metadata together with other textual features. However, the model failed to discriminate unseen layouts from unseen journals when the coordinate information was used for training on various journal layout formats. Therefore, to determine the validity of the coordinate information, we performed additional experiments with the Bi-GRU-CRF (with position) and Bi-GRU-CRF (without position) models. Although the Bi-GRU-CRF (with position) model demonstrated high performance in the training stage, it failed to recognize metadata-related layouts in unseen journals (less than 10% F1 score). The Bi-GRU-CRF (without position) model had somewhat lower performance in the training stage compared to the other models, but it performed well on the test set, similar to KoALBERT. Thus, we confirmed that absolute coordinate information is applicable only under the premise that the journals used in training are also used in testing.

Additional performance improvements

The proposed Layout-MetaBERT exhibited higher results than the existing SOTA model [6]. However, absolute coordinate information can yield poor results for documents in formats not seen during training. In addition, although the proposed layout analysis method separates the metadata well from the first page of academic documents with various layouts, the accuracy of the automatically generated training data is not perfect. There may be errors due to differences between the metadata format of the document and the metadata written in advance. As mentioned, encoding errors also occur when extracting text from mathematical formulas or PDF documents. Generating correct layouts has a significant effect on extracting metadata and is an essential factor in automatically generating data. Therefore, if more sophisticated training data can be generated, the performance of Layout-MetaBERT can be further improved.
Restrictions of Layout-MetaBERT

Much research has been conducted on automatically extracting layouts from PDF documents. Creating accurate layouts has a significant influence on metadata extraction. This study attempted to compose the layout of the first page of an academic document using text information. Based on this, we trained Layout-MetaBERT and confirmed positive results for its applicability to the metadata classification module. However, the proposed technique cannot be applied to all documents. An image-type PDF cannot be used unless the text is extracted; in this case, the extraction must be performed using a high-performance optical character recognition module.

Expansion to other metadata types

This study focused on extracting five major metadata elements (i.e., titles, abstracts, keywords, author names, and author affiliations). Considering that the target research articles contain elements written in English, Korean, or both, the number of metadata labels becomes 10. However, other metadata (e.g., publication year, start page, end page, DOI, volume number, journal title, etc.) can be extracted further by applying highly refined regular expressions in the post-processing step.

Conclusion

In this paper, the LAME framework is proposed to extract metadata from PDFs of research articles with high performance. First, the automatic layout analysis detects the layout regions where metadata exists, regardless of the journal format, based on text features, text coordinates, and font information. Second, by constructing training data automatically, we built high-quality metadata-separated training data for 70 journals (65,007 documents). In addition, our fine-tuned Layout-MetaBERT (base) demonstrated excellent metadata extraction performance (F1 = 94.6%) even for unseen journals with diverse layouts. Moreover, Layout-MetaBERT (tiny), with the fewest parameters, exhibited superior performance to the other pretrained models, implying that well-separated layouts induce effective metadata extraction when they meet appropriate language models.

In future work, we plan to conduct experiments to determine whether the proposed model applies to the more than 500 other journals not used in this study. Moreover, resolving potential errors in the automatically generated training data is a concern for creating layouts that separate each metadata element in a more advanced way. Furthermore, extending the number of metadata items extracted without post-processing is an exciting but challenging task to resolve as future work.

Figure 2: The proposed LAME framework consists of three major components: automatic layout analysis, layout-aware training data construction, and Layout-MetaBERT generation.
Figure 3: TextBox reconstruction based on the results of PDFMiner.
Figure 4: Example of the separated column layout.
Figure 6: Example of when a Korean abstract and an English abstract exist together.
Table 2: Statistics for automatically generated training data.
Table 3: Train and test performances of metadata extraction.
5,030.2
2021-12-23T00:00:00.000
[ "Computer Science" ]
Assessment of the impact of Smartphone Technology on Tour Guide Performance in Kenya

Several studies have been conducted to examine the influence of technology on the travel and tourism industry. However, there exists limited literature on the adaptation and usage of smartphone technology by Kenyan tour guides, a gap this study sought to address. The objectives of the study were to examine the effect of Information Communication Technology (ICT) on tour guiding performance in Kenya, to investigate the effect of smartphone usage on the guides' performance, and finally, to explore the possibility of the adoption of smart guiding techniques by Kenya's tour guides. The study used descriptive methods and targeted practicing tour guides as the respondents. The data collected were analyzed using Pearson's Chi-square test of independence. The findings indicated that smartphone technology positively influenced guides' performance (χ² = 65.025; df = 2; P < 0.05). The study concluded that smartphone and information communication technology have significantly influenced guides' performance and hence recommends that the government and other stakeholders invest more.

Introduction

Mobile phone technology has not only become the most important communication technology worldwide, but also offers many additional functions such as access to the internet, audio-visual recordings, and financial transactions. Mobile technology has been used to improve the delivery of services by tour guides in Kenya. Various authors have examined the technology trends that are likely to shape the use of information and communication technologies (ICT) in the future (Katz, 2017; Andrae & Edler, 2015; Martin, 2017). However, an analysis of how these trends may be relevant to tour guides and visitors to a destination is limited. To help close these knowledge gaps, this study tries to identify mobile phone technology and its influence on the guides' performance. The findings from this research will assist service providers in assessing the utility levels of new mobile technologies, not only by the tour guides but also by other stakeholders in the tourism and hospitality industry. The study intends to inform stakeholders on smartphone uses by guides and the required support infrastructure, with the main aim of improving the service quality offered to visitors. The objectives of this study were therefore to: 1) examine the effect of Information Communication Technology (ICT) on tour guiding performance in Kenya; 2) examine the effect of smartphone usage on the guides' performance; and 3) explore the possibility of adopting smart guiding by Kenya's guides and visitors.

Literature indicates that one of the biggest innovations in the 21st century is the internet. Available data show that by 2018 there were 4 billion active internet users, of whom 3.3 billion were active in social media. Information from the Communication Authority of Kenya (CAK) (2018) indicates that mobile broadband has become accessible in Kenya, where a total of 42 million internet subscriptions was reported in 2018. Kenya has experienced rapid growth over the last decade, with the ICT sector expanding from 10% to 22% in 2017 and contributing 1.6% of total GDP.
While smartphone use is on the rise in developing countries, penetration rates differ widely. In 2017, mobile adoption in Kenya reached a 91% (46.94M) penetration of mobile subscriptions, compared to 80% mobile penetration in Africa, and internet connectivity stood at a penetration rate of 84%, with 43.3M of the total population having access to the Internet in Kenya (https://www.jumia.co.ke/mobile-report).

Literature Review on Smart Tour Guiding

There is no consensus on the definition of smart tourism, but those who have defined it agree that it is based on the Internet of Things (IoT), cloud computing, mobile communication, and artificial intelligence (Gretzel et al., 2015; Li et al., 2017; Del Vecchio et al., 2018). Smart tourism has been regarded as the second revolution in the tourism industry, the first being the internet (Rayman-Bacchus & Molina, 2001; Kitchin, 2014). Smart tourism has brought about application integration and the generation of new ICT, which has affected travellers' behaviour, consumer expectations and interactions within a visited destination, and finally the tourism business operation model in most parts of the world, including Kenya. The smartphone and other hand-held devices have become a portal through which information can be seamlessly accessed. Through sensors on smartphones, visitors can access big data and open data and share them through their phones. Some destinations have been referred to as smart destinations due to their heavy investment in ICT integration, networking, and the physical infrastructure that supports smart tourism. An example of such a destination is Barcelona, which has interactive bicycle and bus shelters that give visitors schedules and the availability of bicycles for transport (Cichosz, 2013). The city of Amsterdam also has technology that translates road signs into different languages (Hardman, 1994).

Smart Technology and Tour Guiding

Mobile phone technology is currently used not only as a communication tool but also as an information search tool, providing location services and the information required by visitors anytime and anywhere. Location-dependent applications have become more popular in the mobile computing environment, and travellers can now schedule reminders on their phones that are triggered once they reach a location. All that is required is for the phone to report its location to a server so that the desired information can be queried. The advantage of smartphone interaction is that the user does not require extra equipment other than the phone (CAK, 2018). There are different types of mobile digital applications in tourism which may be recommended to any destination. The first category is transport planning applications, which are designed to allow users to track flight information in many locations in real time, helping them to share information on travel. Examples of such successful applications include TripIt, a travel itinerary website for organizing vacations, group trips or business travel (Kesselman,
2017; Model & Heritage, 2017), TripCase, an application which allows users to book flight itineraries, hotel bookings, and rental car reservations (Emrouzeh et al., 2017), Trip deck (Kazez, 2010), Cloudbeds' Reservation System (Jaatinen & Kinnunen, 2017), eZee Frontdesk, used by boutique hotels, lodges, resorts, and small hotels (Grotte, 2018), and Maestro's dashboard (Klein et al., 2013). Likewise, event listings applications allow users to upload or download information on events and activities in their current location and to recommend places and events. Travel planner applications perform integrated itinerary management functions, including flights and car hire as well as hotel and restaurant reservations. Visitors can use these applications to plan their own itinerary in a given destination and seek the assistance of a tour consultant once in the destination; this is normally used to organize independent tours (Gavalas et al., 2014). Accommodation planning applications function as a location-based tourist information centre service for accommodation and are used by visitors to plan and book their accommodation through the internet, an example being Booking.com (Berger et al., 2002; Lehmann & Lehner, 2002). Also available to users are tour guide applications, which consist of city guides containing recommendations for restaurants, shopping, attractions, and nightlife. They are gradually replacing the tourist guide maps to a destination, with the aim of reducing paperwork. Other forms of mobile digital applications used by guides are directional services such as Google Maps and the Global Positioning System (GPS). GPS has become a tool that provides location details: satellites broadcast signals from space, which are picked up and identified by the receiver, giving the latitude, longitude, and altitude of a location. Attraction applications enhance the visitor experience at a particular site or attraction, such as museums and historical and archaeological sites. These applications are interactive, and a visitor using a mobile phone can get information at a destination (Guttentag, 2010). Different countries have developed mobile technologies that assist tour guides. Examples of such applications are MobiDENK, a mobile, location-aware information system that draws the visitor's attention to historic sites of interest, provides location-dependent multimedia information, and offers visual navigation support (Krösche & Boll, 2005); SightSeeing4U, a multimedia web system that offers personalized multimedia content and related functionality to client applications (Li et al., 2009); and finally Mobile Travel Buddy, an application that gives information on capital cities, populations, and other tourist-related services in a location (Ismail et al., 2016). This study proposes that the Kenyan government and other stakeholders in tourism can explore the possibility of investing in some of these applications with the aim of serving the customer better and transforming Nairobi into a smart city. One success of such smart applications in Kenya is the Uber taxi service, which can be said to have changed the taxi business in Kenya. Unlike with traditional taxi operators, one can today use a smartphone to download an application that makes it possible to request Uber services from the comfort of home and be notified when the driver arrives.
This shows that the destination is ready for the uptake of any application that improves the delivery of services.

Study Methodology

This study was descriptive and targeted tour guides from different parts of the country who were sampled as they attended their annual training at Kenya Utalii College, one of the leading tourism and hospitality training institutions in Africa. The judgmental sampling method was used. The criteria for participation in this study were at least three years of work experience and registration as a guide in Kenya. The target population was tour guides, and the sample size was 56 respondents. The data were collected using a questionnaire and interviews. Both qualitative and quantitative data were collected and later analyzed using descriptive statistics; relationships were analyzed using the Chi-square test of independence, and the Chi-square goodness-of-fit test was used to examine the observed and expected frequencies for most of the responses. Field observations by the authors, who have interacted with Kenyan tour guides for over twenty years, were used to corroborate the findings.

Findings and Discussions

The majority of the respondents (54%) were freelance tour guides, compared to 44% who were in full-time employment. About 34% owned their tour vehicles, compared to 64% who drove company vehicles. The majority of them (84%) were not affiliated with the Kenya Association of Tour Operators (KATO). All of them owned smartphones, and 88% indicated that they needed more training on how to effectively use most features on their smartphones. Since most visitors to Kenya are interested in viewing wildlife, technological investment in the vehicles is very important. The study sought to examine how much technological innovation is available in the tour vehicles. The study observed that the majority (90%) of the respondents said that their vehicles have a VHF radio, while 88% said that their vehicles have a battery and phone charger. The results showed that half of the respondents had WiFi in their vehicles, meaning visitors could access the internet. There is limited Internet penetration in some accommodation in the wilderness areas. About 83% of respondents have car tracking systems in their vehicles. In relation to the adaptation and usage of the smartphone, the majority (71%) admitted that they are not using most features on their smartphones. About 88% of them said they needed more training on the application of technology in guiding. Although most guides would download some applications that would help them, a good number were not able to.

Tour Guides and Their Preferred Social Media

Nearly all tour guides had a mobile phone. Some of them have a smartphone, which is used not only for communication but also to perform other duties, with the aim of improving the quality of services offered. Smartphone technology influences their communication and performance. The majority (76%) of the tour guides stated that technology had significantly improved their performance. This study noted that most guides use social media as a mode of communication, the most popular being WhatsApp (98%), followed by email (95%).
The least used social media platforms among the guides were Twitter (20%), LinkedIn (22%), and Instagram (39%). This finding is close to that of the Jumia Mobile Report 2019, which indicated that most Kenyans used WhatsApp (74%), Facebook (70%), and Twitter (50%) as their preferred social media (https://www.jumia.co.ke/mobile-report). There was a marginally significant relationship between guides' performance and their usage of social media such as Instagram (χ² = 9.065, df = 4, P = 0.059) and LinkedIn (χ² = 7.920, df = 4, P = 0.095). The findings indicated that although respondents regarded Instagram as the best platform for sharing photographs of wild animals, most of them preferred using WhatsApp, as seen in Figure 1.

Influence of Smartphone Features on the Guides' Performance

The smartphone can be viewed as the most powerful and influential technology in the guiding career. Due to the advanced computing, storage, and transmission capabilities of smartphones, tour guides in Kenya have taken advantage of these features to improve their services. The features that most guides used were cameras, Geographical Information Systems, and location indicators. The study sought to examine the usage of phone cameras and their influence on performance. The respondents were required to indicate how cameras had influenced their performance: 44% indicated that they used cameras, 46% said they use their phones for booking and payment of services, and 59% use their phone's sound amplifier when guiding. Quite a number of the respondents used their cameras to take photos, checked weather forecasts when guiding, and used the Global Positioning System (GPS) when driving. Most of them agreed that innovations in smartphones have tremendously improved communication and service delivery.

Features of the Smartphone Used by the Guides

The Kenya Economic Survey Report of 2017/2018 indicated that in 2018, the number of active mobile subscriptions in Kenya stood at 46.6 million. The guides were asked to indicate the features of their smartphones that they used and how these influenced their performance. All (100%) agreed that their smartphones had downloaded applications that they used when guiding. The majority used Bluetooth (95%), location indicators (93%), GPS (83%), and video player features. Other features of importance were transport applications and booking and payment applications. This finding shows that most of them were using their smartphones to perform their normal duties, thus enhancing the quality of services given to the visitors.

Tools Offered to Visitors While on a Tour

The way the tour guide is prepared for a tour determines the quality of services offered to the visitors. The study sought to investigate the tools they give to visitors while on tour. The majority (83%) indicated that they have binoculars, with only 7% of them offering the night goggles required for night game drives. Although some visitors have hearing challenges, only 7% offer assistive listening systems. Other facilities and equipment that were lacking were cameras with zoom lenses (71%), sound receivers and headphones, and voice amplifiers. The results show that availing night goggles (χ² = 29.878, df = 1, P < 0.05), voice recorders (χ² = 65.025, df = 2, P < 0.05), transmitters (χ² = 55.073, df = 2, P < 0.05), guide books, and zoomed cameras significantly influenced guides' performance, as illustrated in Table 1 below.

Table 1: Chi-square tests of tools offered to visitors against guides' performance (asymptotic significance values: .000, .000, .000, .000, .008, .000, .000, .000).
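The chi-square statistics reported in this section come from contingency tables that cross-tabulate whether a tool or feature is available against the guides' reported performance. A rough sketch of such a test of independence is given below; the counts are invented and only the procedure mirrors the study.

```python
# Hedged sketch of a Pearson chi-square test of independence as used in the
# study, on a made-up 2x3 contingency table (tool offered or not, versus three
# levels of reported performance). The counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [18, 12,  4],   # tool offered to visitors
    [ 5,  9,  8],   # tool not offered
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p_value:.4f}")
print("minimum expected cell frequency:", expected.min().round(1))
```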
For these tests, no cells had expected frequencies of less than 5; the minimum expected cell frequencies were 20.5 and 13.7.

Use of Internet Technology by the Guides

The study also noted that the increasing availability of affordable smartphones and cheap data bundles played a major role in facilitating Internet penetration among guides. This demand for Internet use has also been influenced by the growth of social networking, e-commerce, the digitization of government services, and online research activities. About 90 percent of Internet subscribers could access the Internet via mobile phones by 2018 (CAK, 2018). The results indicated that most guides use the internet as a major source of information when guiding. Some respondents (66%) used it to market themselves and to share photos and videos of wildlife, to get travel updates (71%), and for research (90%). The majority of them (85%) used the internet to get destination information, while 90% used the internet to download videos. According to CAK (2018), the increasing demand for and uptake of mobile services such as mobile money, mobile internet, mobile apps, mobile banking, marketing, and gaming had resulted in a significant increase in the number of mobile service subscriptions, which stood at 45.5 million in March 2018 (CAK, 2018).

Opportunities and Gaps that Guides Can Use to Improve Their Performance

Information on the geospatial location of wildlife and their distribution in the wilderness is missing. Most national parks and reserves have a lot of information on the geospatial distribution and location of flora and fauna, but currently there is no interactive smartphone application for guides and visitors. Such applications are available in other countries, and with infrastructure development this is also possible in Kenya. Interpretation of attractions en route to the tourist attraction is an effective way of enhancing the visitor's experience. When guides are driving within the city and along the route to the tourist attractions, there are many areas where they make stop-overs. Examples of such places are the Great Rift Valley viewpoint near Limuru town and the Shetani Lava Flow in Tsavo West. Although there are signposts interpreting such sites, there are no provisions for visitors and guides to interact with the interpretation posts. Today's visitors to archaeological and historical sites and museums in Kenya have to walk around these sites and read interpretation boards for each specimen. The assumption is that all visitors can read and understand the English language. This is not always the case, and visitors are therefore forced to get the assistance of host guides. An application that alerts guides and visitors to the presence of attraction and interpretation information would improve the customer experience, since the interpretation could be translated into different languages. Smartphone technology applications are therefore highly recommended.

Conclusions and Recommendations

The study found that tour guides in Kenya have adapted to technology and that this has improved their performance. Most guides have smartphones with which they access the internet. However, smartphone usage was more effective in areas that were covered by their 4G internet providers. It was observed that tour companies are connecting their vehicles to WiFi, which is used by both the guides and the visitors.
Guides use social media to communicate and socialize, and most have social groups within which they discuss and share experiences and challenges. The adoption of ICT is gradually replacing their VHF radios, and most communication is done through the phone. The infrastructure in the Kenyan national parks and reserves does not support internet accessibility, limiting the usage of the phone in the wilderness. This is not the case in towns, where the internet is readily accessible. The smartphone camera is one of the features that has immensely improved the quality of photos, which are shared with colleagues and visitors on social platforms. Although most phones have several features, some are hardly used; location features, maps, and GPS are among the ones not often used. The researchers' findings indicate that these features would be important mostly when a guide wants to locate an animal of interest using coordinates. The cost of a good smartphone, internet coverage, the cost of internet bundles, and battery life were some of the challenges that limited phone usage. As a destination, the internet market is expected to continue growing, considering the government initiatives that are already in place, such as the development of Konza Techno City, the setting up of more ICT hubs, and the continuous implementation of broadband projects aimed at providing more Internet connectivity. The ongoing voice infrastructure projects by the Communications Authority of Kenya under the Universal Service Fund are also expected to boost the availability of and access to mobile services, especially in marginalized areas usually considered by service providers as economically unviable. After such projects are actualized, the destination can offer smart services that visitors can use to get information on accommodation and to interact with attractions such as museums, using other tourism-related applications such as those discussed in this paper. A report from the National Cybersecurity Centre detected over 3.4 million cyber threats in 2018 alone, comprising online fraud, online impersonation, and online abuse attacks (CAK, 2018). These threats, if not managed, may limit the growth of ICT in Kenya and the idea of a smart destination. This study recommends further studies on the infrastructural developments needed in the destination to facilitate the use of interactive tourism applications. Policy guidance on the usage of such applications is highly recommended. As illustrated by the findings of this study, technology is changing the quality of services given by the guides, and the tourism industry is ready for smart tourism.
4,830
2019-09-07T00:00:00.000
[ "Business", "Computer Science" ]
The Distribution of Fitness Effects of Beneficial Mutations in Pseudomonas aeruginosa

Understanding how beneficial mutations affect fitness is crucial to our understanding of adaptation by natural selection. Here, using adaptation to the antibiotic rifampicin in the opportunistic pathogen Pseudomonas aeruginosa as a model system, we investigate the underlying distribution of fitness effects of beneficial mutations on which natural selection acts. Consistent with theory, the effects of beneficial mutations are exponentially distributed where the fitness of the wild type is moderate to high. However, when the fitness of the wild type is low, the data no longer follow an exponential distribution, because many beneficial mutations have large effects on fitness. There is no existing population genetic theory to explain this bias towards mutations of large effect, but it can be readily explained by the underlying biochemistry of rifampicin–RNA polymerase interactions. These results demonstrate the limitations of current population genetic theory for predicting adaptation to severe sources of stress, such as antibiotics, and they highlight the utility of integrating statistical and biophysical approaches to adaptation.

Introduction

Adaptation by natural selection ultimately depends on the spread of novel beneficial mutations that increase fitness. Can we predict the fitness effects of beneficial mutations? Gillespie [1,2] argued that extreme value theory (EVT) provides a simple answer to this question: the tails of all Gumbel-type distributions (a very flexible class of distributions that includes many familiar distributions, including the normal) are exponential. As such, the fitness effects of beneficial mutations will be exponentially distributed provided that the fitness of the wild type is high enough that beneficial mutations are drawn from the extreme tail of the distribution of fitness effects of mutations. It is, however, unclear how robust this theory is with respect to the fitness of the population prior to selection. As the absolute fitness of the wild type decreases (for example, because of environmental change), a larger proportion of single mutations will increase fitness. Beneficial mutations will therefore no longer be drawn exclusively from the tail of the distribution, hence the exponential distribution may no longer apply (Figure 1). In this paper, we investigate how the distribution of fitness effects of beneficial mutations changes with the fitness of the wild-type population, using the evolution of antibiotic resistance in the opportunistic pathogen Pseudomonas aeruginosa as a model system. Experimental studies of the underlying distribution of fitness effects of beneficial mutations [3,4,5,6] have lagged behind the theory, both because beneficial mutations are exceedingly rare and because beneficial mutations of small effect are less likely to reach appreciable frequencies in populations, owing to the combined effects of drift [7] and competition between independent mutations [8]. To overcome these limitations, we used a fluctuation test to isolate clones of the bacterium Pseudomonas aeruginosa with mutations in the β-subunit of RNA polymerase (rpoB) that are beneficial in the presence of the drug rifampicin [9,10,11]. Our experimental design ensured that we obtained an unbiased sample of all beneficial mutations.
First, we isolated mutants from populations that were propagated in culture media lacking rifampicin, implying that we isolated beneficial mutations prior to any selection for rifampicin resistance. Second, we experimentally prevented competition (i.e., clonal interference [8]) among independently derived beneficial mutations by randomly choosing independent mutants. Third, we ensured all mutations included in the analyses were unique by sequencing rpoB in the mutants. To test the hypothesis that the fitness effects of beneficial mutations are exponentially distributed, we used log-likelihood tests that have been specifically developed to test this hypothesis with this experimental design [12]. Using this approach, we show that the distribution of fitness effects of beneficial mutations is variable: under conditions where the fitness of the wild type is high, the fitness effects of beneficial mutations are exponentially distributed, as predicted by theory. However, when the fitness of the wild type is low, the data may no longer fit an exponential distribution because many beneficial mutations have large effects on fitness. We show that this non-exponential distribution of fitness effects emerges as a direct consequence of the molecular interactions that are under selection in this system, and we argue that existing theory on the fitness effects of beneficial mutations cannot be applied to understand adaptation to novel stressful environments, such as those provided by antibiotics.

Results/Discussion

To investigate how the distribution of fitness effects of beneficial mutations changes with the fitness of the wild type, we measured the fitness of beneficial mutations isolated at a high concentration of rifampicin (Table 1) and the fitness of the wild type across a gradient of rifampicin concentrations (Figure 2). At low concentrations of rifampicin (1–2 µg/mL), the fitness of the wild type is high and we cannot reject the null hypothesis that the fitness effects of beneficial mutations are exponentially distributed, as determined by a likelihood-ratio test (Figure 3, Table 2). However, at high concentrations of rifampicin (>2 µg/mL), the fitness of the wild type is low and the fitness effects of beneficial mutations are not exponentially distributed (Figure 3, Table 2). One limitation of this study is that our power to test the null hypothesis is weakest under conditions where the fitness of the wild type is high, because only half of the mutants that we isolated increase fitness at low concentrations of rifampicin. Given that we had to sample 80 mutants in order to identify 15 beneficial mutations (this saturation effect is in part attributable to a strong mutational bias towards two mutations), it is unlikely that increasing the sample size of our study would have substantially increased the power of our analysis. It is also important to note that this limitation is not unique to this study: beneficial mutations are rare events, and all other comparable experimental evolution studies are based on a similar sample size of beneficial mutations [4,13]. To gain insight into the mechanistic basis of fitness, we measured both the growth rate in the absence of antibiotics and the degree of rifampicin resistance for each beneficial mutation (Figure 4). At low concentrations of rifampicin (i.e., 1–2 µg/mL), selection for high levels of resistance is weak, and fitness is highly correlated with growth rate in the absence of antibiotics (r = .86–.9, P < .0001).
The genetic variation in growth rate in the absence of antibiotics generated by spontaneous mutation is normally distributed (Figure 3; W = .92, P = .21), hence the fitness effects of beneficial mutations are exponentially distributed because only mutations in the right tail of the distribution (i.e., those mutations that are associated with a low cost of resistance [14]) are beneficial at low concentrations of rifampicin. At high concentrations of rifampicin, selection for resistance is strong, and the fitness effects of beneficial mutations no longer fit an exponential distribution because most beneficial mutations had large effects on resistance and, therefore, on fitness at high concentrations of rifampicin (Figure 4). The large effect of beneficial mutations on resistance is consistent with the molecular interactions that occur between rifampicin and RNA polymerase. Structural studies have shown that rifampicin binds to a small, highly conserved pocket of the β-subunit of RNAP and that only 12 amino acid residues are involved in direct interactions with rifampicin [9,11]. Mutations at these residues cause a large increase in resistance (mean IC50 = 423 µg/mL, s.e. = 25 µg/mL, n = 10). Residues that surround the binding pocket interact only indirectly with rifampicin, and it has been argued that resistance arises at these residues due to amino acid changes that alter the folding of the protein in the binding pocket. We identified only a small number of beneficial mutations (n = 4) in residues that are involved in indirect Rif–RNAP interactions, and mutations at these residues give rise to intermediate levels of rifampicin resistance (mean IC50 = 197 µg/mL, s.e. = 40 µg/mL, n = 4).

Author Summary

Adaptation by natural selection depends on the spread of novel beneficial mutations, and one of the most important challenges in our understanding of adaptation is to be able to predict how beneficial mutations impact fitness. Here, we investigate the underlying distribution of fitness effects of beneficial mutations that natural selection acts on during the evolution of antibiotic resistance in the opportunistic human pathogen P. aeruginosa. When the fitness of the wild type is high, most beneficial mutations have small effects. This finding is consistent with existing population genetic models of adaptation based on statistical theory. When the fitness of the wild type is low, most beneficial mutations have large effects. This distribution cannot be explained by population genetic theory, but it can be readily understood by considering the biochemical basis of resistance. This study confirms an important prediction of population genetic theory, and it highlights the need to integrate statistical and biochemical approaches to adaptation in order to understand evolution in stressful environments, such as those provided by antibiotics.

Figure caption: Plotted points show w_{i,j}, the difference in fitness between beneficial mutations (i.e., those mutations that give higher fitness than the ancestral clone) and the least-fit beneficial mutation at each concentration of rifampicin. P values show the probability that the observed distribution of fitness effects of beneficial mutations is exponential (see Table 2).
This biophysical approach to understanding the effects of beneficial mutations suggests that the data may no longer fit an exponential distribution because of the high specificity of interactions between rifampicin and RNA polymerase: changes to the majority of amino acids that are involved in rifampicin–RNAP interactions result in large increases in resistance and, therefore, large increases in fitness at high concentrations of rifampicin. To test this hypothesis further, we assayed fitness in the presence of sorangicin [15], an antibiotic that has been shown to bind to the same domain of RNAP as rifampicin and to share the same mode of action, inhibition of transcription initiation [16]. The biochemical difference between these antibiotics comes from the fact that sorangicin has a much higher conformational flexibility than rifampicin [16]. The fitness consequence of this difference is that many mutations that give a large increase in fitness under high concentrations of rifampicin give only a small increase in fitness in the presence of an equivalent dose of sorangicin (Figure 4), and the observed distribution of fitness effects of beneficial mutations does not differ significantly from the exponential (Figure 5; −2 log L = 5.72, n = 9, P = .09). It is important to note that these results do not necessarily imply that the distribution of fitness effects of mutations that are beneficial in the presence of sorangicin is universally exponential. Most mutations that increase sorangicin resistance do so by altering membrane permeability, instead of altering the structure of RNAP [16], but unfortunately the mutations that are responsible for this decrease in permeability to sorangicin are not known, and we are therefore unable to measure the underlying distribution of fitness effects of mutations that are beneficial in the presence of sorangicin. Instead, our interpretation of this result is that it provides a clear demonstration that the high-affinity interactions that occur between rifampicin and RNAP are ultimately responsible for the non-exponential distribution of fitness effects of beneficial mutations at high concentrations of rifampicin.

Conclusion

The variability in the distribution of fitness effects of beneficial mutations in this study is consistent with population genetic theory. When the fitness of the wild type is high, beneficial mutations can be viewed from a statistical perspective as representing draws from the extreme tail of the distribution of fitness effects of mutations, hence the fitness effects of beneficial mutations will be exponentially distributed. However, EVT does not specify how high fitness has to be in order for this theory to apply. In our experimental system, this distribution held over a wide range of parameter space: we failed to detect significant deviations from the exponential distribution when the fitness of the wild type was reduced by 20–30%. When the fitness of the wild type is low, statistical theory does not make any predictions regarding the form of the distribution of fitness effects of beneficial mutations, and hence there is no reason to expect an exponential distribution. Comparable experimental studies in viral systems are in agreement with this idea: Sanjuan and colleagues [4] found that the fitness effects of beneficial mutations are exponentially distributed in VSV under conditions where the fitness of the wild type is high, while Rokyta et al.
[3] found that the fitness effects of beneficial mutations that allow phage to attack novel hosts (i.e., hosts that are inaccessible to a WT virus) are not exponentially distributed. Note that in these and our experiments, the data may stop fitting the exponential distribution under conditions of low wild-type fitness not only because EVT, by definition, no longer applies, but also because the underlying mutational distributions may vary with environmental conditions. Despite our lack of certainty about the statistical explanation for the observed distribution when the fitness of the wild type is low, we have a good molecular mechanistic explanation (in retrospect we may have been able to predict this distribution a priori). Specifically, the distribution is biased towards mutations of large effect as a result of the high specificity of interactions between rifampicin and RNA polymerase that arises from the low conformational flexibility of rifampicin. Antibacterial and antiviral drugs are usually involved in highly specific interactions with their target proteins [17], suggesting that a bias towards mutations of large effect may be a general feature of adaptation to antibiotics [17,18] and of other situations in which high-specificity protein–ligand interactions are under strong selection, for example during host–parasite interactions [19] or when enzymes are selected to recognize novel substrates [20,21]. Recently, there has been considerable interest among population geneticists in developing general models of adaptation based on the statistical properties of extreme events. Our work highlights both the strengths and limitations of this approach, and we suggest that the development of a complete theory of adaptation will require integrating molecular biology, in order to be able to predict the impact of mutations on fitness, with statistical approaches to adaptation, to be able to understand how natural selection samples the distribution of fitness effects of beneficial mutations during adaptive walks.

Isolation of Beneficial Mutations

A single clone of Pseudomonas aeruginosa PAO1 was inoculated into 5 mL of M9KB medium that was incubated overnight at 37°C with constant shaking (150 rpm). This overnight culture was diluted 10^-6 into fresh M9KB, and 120 µL aliquots of this diluted culture were used to set up 480 cultures on five 96-well microplates. These cultures were incubated overnight at 37°C without shaking. To isolate beneficial mutations, 5 µL of each of the 480 overnight cultures was plated out on M9KB supplemented with 62.11 µg/mL of rifampicin, the minimal concentration required for complete inhibition of growth of the wild-type strain. We then isolated a single colony from each of the first 80 cultures that gave samples containing exactly one colony on the agar plates containing rifampicin.

Sequencing

To determine the mutations underlying adaptation, we sequenced the rpoB gene in each of the 80 colonies that we isolated in our fluctuation test. Genomic DNA was isolated from each colony using a Wizard Genomic DNA extraction kit (Promega, UK) as per the manufacturer's instructions. Our sequencing strategy was to first sequence, in all 80 clones, a highly conserved domain of rpoB that is known to be important for rifampicin resistance. This region was amplified with primers rpoB_fwd (5′-GTTCTTCAGCGCCGAGCG-3′) and rpoB_rev (5′-GCGATGACGTGGTCGGC-3′), which amplify the region of the rpoB gene between nucleotides 1178 and 1864.
Reaction mixtures consisted of BIOTAQ polymerase (Bioline, UK), 1 mM dNTPs, 16 nM (NH4)2SO4, 62.5 mM Tris-HCl (pH 8.8), 0.01% Tween 20, 2 mM MgCl2, and each primer at a concentration of 0.2 pM. Amplification reactions were carried out as follows: 94°C for 5 minutes, followed by 35 cycles of 94°C for 30 seconds, 60°C for 30 seconds, and 72°C for 1 minute, followed by a final incubation at 72°C for 10 minutes. PCR products were purified using a MultiScreen PCR 96 filter plate (Millipore, UK) as per the manufacturer's instructions. Purified PCR products were sequenced with both forward and reverse primers using BigDye 3.1 sequencing (Applied Biosystems International), followed by ethanol/EDTA precipitation of the sequencing products. In all cases, this strategy identified either a single mutation in this region of rpoB or no mutations. Clones that lacked a mutation in the highly conserved domain of rpoB were subsequently sequenced for a second region that has previously been implicated in rifampicin resistance, spanning nucleotides 1 to 1012 of the rpoB gene. This region was amplified and sequenced with primers rpoB_up (5′-ATGGCTTACTCATACACTGAG-3′) and rpo_B1 (5′-CTCGATGCGCACGACCTG-3′). The protocol was the same as described above, except that the annealing temperature used in the PCR reactions was 54°C instead of 60°C. We identified a single mutation in this region in all of the clones that did not contain a mutation in the highly conserved domain of rpoB. As a further control, we sequenced the entire rpoB gene in six randomly chosen clones. We failed to detect any second-site mutations in rpoB using this approach.

Fitness Assay

To assay fitness, we estimated the growth rate, r, of each beneficial mutation and of wild-type PAO1 at five concentrations of rifampicin (0, 1, 2, 8, and 64 mg/L). Pre-assay overnight cultures of each mutant were prepared by growth in M9KB. These cultures were then diluted 100-fold into fresh culture medium, and we measured the growth rate of each culture using an automated microplate reader by taking hourly measurements of optical density at 600 nm (OD600) over a period of approximately 12 hours. All incubations were carried out at 37°C. Assays at low concentrations of rifampicin (0, 1, 2, 8 mg/L) were carried out with 12-fold replication, and assays at the high concentration of rifampicin (64 mg/L) were carried out with 18-fold replication. A further assay was carried out using the same method to measure fitness in the presence of sorangicin (20 µg/mL). OD600 is proportional to the log of cell density, and the slope of OD600 against time (mOD/min) in the exponential growth phase therefore provides an estimate of r, the growth rate of the bacterial clone, such that r_{i,j} is the growth rate of the i-th genotype at the j-th concentration of rifampicin.

Statistical Analysis

To test the hypothesis that the fitness effects of beneficial mutations are exponentially distributed, we used a likelihood ratio test developed by Beisel and colleagues [12]. According to EVT, there are three limiting tail distributions: the Fréchet (which has a heavier-than-exponential tail), the Gumbel (which has an exponential tail), and the Weibull (right-truncated). The tails of all three EVT domains can be described by the generalized Pareto distribution (GPD), whose cumulative distribution function is

F(x) = 1 − (1 + κx/τ)^(−1/κ) for κ ≠ 0, and F(x) = 1 − exp(−x/τ) for κ = 0,

with shape parameter κ and scale parameter τ. One very interesting property of the GPD is that the shape parameter is threshold-independent.
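The next paragraphs describe how this test is applied to the re-scaled fitness data. As a rough illustration of the logic only, the sketch below fits the exponential null model (GPD shape κ = 0) and an unconstrained GPD by maximum likelihood and compares them with a parametric bootstrap; the selection coefficients are invented, scipy's sign convention for the GPD shape is used, and the original analysis was done with the authors' EVDA package for R, not with this code.

```python
# Sketch of the exponential-versus-GPD likelihood ratio test with a parametric
# bootstrap under the exponential null. Fitness effects w are placeholders;
# the published analysis used the EVDA package for R.
import numpy as np
from scipy.stats import expon, genpareto

rng = np.random.default_rng(0)

def lrt_statistic(w):
    """-2 * (logL_exponential - logL_GPD) for positive fitness effects w."""
    scale0 = w.mean()                         # exponential MLE of the scale
    ll0 = expon.logpdf(w, scale=scale0).sum()
    c, _, scale1 = genpareto.fit(w, floc=0)   # GPD MLE with location fixed at 0
    ll1 = genpareto.logpdf(w, c, loc=0, scale=scale1).sum()
    return -2.0 * (ll0 - ll1)

def bootstrap_p_value(w, n_boot=1000):
    obs = lrt_statistic(w)
    scale0 = w.mean()
    null = [lrt_statistic(expon.rvs(scale=scale0, size=w.size, random_state=rng))
            for _ in range(n_boot)]
    return obs, float(np.mean(np.asarray(null) >= obs))

# Invented selection coefficients, already re-scaled relative to the least-fit
# beneficial mutation as described below.
w = np.array([0.002, 0.004, 0.004, 0.007, 0.011, 0.013, 0.020, 0.024, 0.031])
stat, p = bootstrap_p_value(w)
print(f"-2 log L = {stat:.2f}, bootstrap p = {p:.3f}")
```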
This threshold-independence property of the GPD is critical, because it implies that it is possible to account for any potential bias against detecting mutations of small beneficial effect by simply re-scaling the fitness data so that the fitness of beneficial mutations is expressed relative to the least-fit beneficial mutation instead of the wild type. To take advantage of this property, we estimated the fitness of each beneficial mutation as w_{i,j} = r_{i,j} − r_{1,j}, where w_{i,j} is the fitness of the i-th beneficial mutation in the j-th environment, r_{i,j} is the growth rate of the i-th beneficial mutation in the j-th environment, and r_{1,j} is the growth rate of the least-fit beneficial mutation in the j-th environment (i.e., the beneficial mutation that has the smallest increase in growth rate relative to the rifampicin-sensitive clone in the j-th environment). The likelihood ratio test developed by Beisel and colleagues calculates −2 log L, negative twice the difference in log-likelihood between two statistical models: one model in which the shape parameter of the GPD is set to 0 and the other in which the shape parameter of the GPD is free to vary (i.e., H0: κ = 0, HA: κ ≠ 0). P values were calculated by performing 10,000 parametric bootstrap replicates using a software package (EVDA) developed for R (software available at http://www.webpages.uidaho.edu/~joyce/Lab%20Page/Computer-Programs.html). This test is potentially sensitive to measurement error, but the accuracy of our fitness measurements was high enough (average CV = 19.8%) that measurement error should not inflate the probability of making a type I error [12].

Inhibition Assays of rpoB Mutants

To measure the resistance conferred by rpoB mutations, we assayed growth in media containing rifampicin at different concentrations. Pre-assay cultures of rpoB mutants and PAO1 were prepared by overnight growth of freezer cultures in M9KB at 37°C. These cultures were then diluted 2.5 × 10^-5 into M9KB, or into M9KB supplemented with rifampicin at the following concentrations (all in µg/mL): 0, 3.9, 7.8, 15.6, 31.3, 62.5, 125, 250, 500, and 1000. Assay cultures were incubated at 37°C, and we measured the optical density of the cultures after exactly 24 hours of incubation (± 10 minutes) at 600 nm using an automated microplate reader. We assayed the resistance of 12 cultures of each of the rpoB mutants that we identified and 12 replicates of PAO1. Resistance was calculated as IC50, the concentration of rifampicin necessary to cause a 50% reduction in optical density, using the following regression model:

y = y_min + (y_max − y_min) / (1 + 10^((log IC50 − x) · H)),

where y is optical density, measured in absorbance units, x is the concentration of rifampicin, measured in µg/mL, and H is a parameter that estimates the rate of decay in optical density with increasing rifampicin concentration.
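A minimal sketch of fitting this dose-response model by non-linear least squares to recover IC50 is given below. The OD600 readings are invented, the dose is treated on a log10 scale (an assumption; the text quotes x in µg/mL), and H is expected to come out negative so that optical density decays as the dose increases.

```python
# Sketch of the IC50 dose-response fit with scipy's curve_fit. The data are
# hypothetical; only the functional form follows the regression model above.
import numpy as np
from scipy.optimize import curve_fit

def dose_response(x, y_min, y_max, log_ic50, h):
    return y_min + (y_max - y_min) / (1.0 + 10.0 ** ((log_ic50 - x) * h))

doses = np.array([3.9, 7.8, 15.6, 31.3, 62.5, 125, 250, 500, 1000])    # ug/mL
od = np.array([0.95, 0.93, 0.90, 0.82, 0.60, 0.33, 0.12, 0.05, 0.03])  # invented

params, _ = curve_fit(dose_response, np.log10(doses), od,
                      p0=[0.0, 1.0, np.log10(60.0), -1.0])
y_min, y_max, log_ic50, h = params
print(f"IC50 ~ {10 ** log_ic50:.1f} ug/mL, slope H = {h:.2f}")
```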
5,052.6
2009-03-01T00:00:00.000
[ "Biology" ]
ELF5 Suppresses Estrogen Sensitivity and Underpins the Acquisition of Antiestrogen Resistance in Luminal Breast Cancer

The transcription factor ELF5 is responsible for gene expression patterning underlying molecular subtypes of breast cancer and may mediate acquired resistance to anti-estrogen therapy.

Introduction

The molecular subtypes of breast cancer are distinguished by their intrinsic patterns of gene expression [1], which have been refined to become prognostic tests under evaluation or in use [2]. Improving our understanding of the molecular events specifying these subtypes offers the hope of new predictive and prognostic markers, the development of new therapies, and interventions to overcome resistance to existing therapies. The estrogen receptor (ER)-positive luminal subtypes are characterized by patterns of gene expression driven by the combined direct and indirect transcriptional influences of ER and FOXA1 [3]. ELF5, also known as ESE2 [4], is a member of the epithelium-specific (ESE) subgroup of the large E-twenty-six (ETS) transcription factor family [5], found in lung, placenta, kidney, and most prominently in the breast, especially during pregnancy and lactation [6][7][8]. Placentation fails in Elf5 knockout mice [9] because de novo production of ELF5 acts with CDX2 and EOMES to specify and maintain commitment to the trophoblast cell lineage [10]. The early embryo continues to repress Elf5 expression in association with promoter methylation [11]. In the developing mammary epithelium, Elf5 is re-expressed in a mutually exclusive pattern with ER [12]. Elf5−/− mice produced via tetraploid embryonic stem cell rescue [12] or conditional knockout [13] showed complete failure of mammary alveologenesis, a developmental stage driven by prolactin and progesterone. These hormones induce Elf5 expression, and re-expression of Elf5 in prolactin receptor knockout mammary epithelium rescued alveologenesis [14]. Forced ELF5 expression in the nulliparous mouse mammary gland produced precocious mammary epithelial cell differentiation and milk protein production. This was associated with erosion of the mammary CD61+ progenitor cell population; conversely, Elf5 knockout caused accumulation of this population, establishing ELF5 as a key regulator of cell fate decisions made by this progenitor cell population [12] and explaining the developmental effects described above. The CD61+ progenitor cell is the cell of origin for basal breast cancers [15,16], and Elf5 is expressed predominantly by the ER− progenitor subset [17], suggesting, together with the developmental effects of Elf5 outlined above, a role for ELF5 in determining aspects of the molecular subtype of breast cancer. To examine this hypothesis, we manipulated the expression of ELF5 in basal and luminal breast cancer cell lines and examined the phenotypic consequences.

ELF5 Expression in Breast Cancer

In the UNC337 breast cancer series [18], ELF5 was expressed predominantly by the basal subtype, in addition to normal breast and the normal-like subtype (Figure 1), an observation confirmed in the cohorts described by Pawitan [19] and Wang [20] (Figure S1). Oncomine (www.oncomine.org) revealed that ELF5 expression was low in tumors expressing ER, progesterone receptor (PR), or ERBB2 and high in the "triple negative" subtype lacking these markers.
ELF5 expression was correlated with high grade, poor outcomes such as early recurrence, metastasis, and death, response to chemotherapy, and mutations in p53 or BRCA1, all characteristics of the basal subtypes (Figure S2). ELF5 expression was lower in cancer compared to patient-matched and micro-dissected normal mammary epithelium (Figure S2), and a series from Sgroi and colleagues [21] found ELF5 was one of the most consistently downregulated genes at all stages of breast carcinogenesis (Figure S1). An Inducible Model of ELF5 Expression in Luminal Breast Cancer Cells To test the ability of ELF5 to drive estrogen insensitivity we used ER+ luminal breast cell lines T47D and MCF7 to construct doxycycline (DOX)-inducible expression models of ELF5 (Figure S3A). In humans, ELF5 is also known as ESE2 and 2 isoforms exist. The ESE2B isoform was expressed at 1,774- and 1,217-fold excess over the ESE2A isoform in MCF7 and T47D, respectively (Figure S3B). We tagged ESE2B at its C-terminus with V5 (referred to subsequently as ELF5-V5), and demonstrated that this did not alter its ability to induce the transcription of its best characterized direct transcriptional target, whey acidic protein (Wap), in HC11 cells (Figure S3C). Investigation of the Transcriptional Response to ELF5-V5 We interrogated our inducible models using Affymetrix arrays. Functional signatures within these expression profiles were identified by gene set enrichment analysis (GSEA) [22,23], and were visualized using the Enrichment Map plug-in for Cytoscape [24]. The original data are available via GEO (GSE30407), and GSEA and Limma analysis from the corresponding author. Figure 2 displays the GSEA networks derived from the effects of forced ELF5 expression in T47D or MCF7 cells and provides a comprehensive view of the functional consequences of forced ELF5 expression in the luminal subtype. Figure S4 provides the complete network as a fully scalable PDF allowing the identification of all nodes. Acute forced ELF5 expression caused enhancement (positive enrichment, red nodes) of oxidative phosphorylation, translation, proteasome function, and mRNA processing. We observed suppression (negative enrichment, blue nodes) of the DNA synthetic and mitotic phases of the cell cycle, intracellular kinase signaling, cell attachment, the transmembrane transport of small molecules, transcription, and a large set of genes involved in aspects of cancer, stem cell biology, and especially the distinction of breast cancer subtypes and estrogen sensitivity. The cancer-proliferation and breast cancer subtype sub networks, the subjects of further investigation, are shown in Figures S5 and S6, and the expression of the individual genes forming the leading edges of example sets from these clusters are shown as heat maps in Figures S7, S8, S9, S10. We validated these findings using human breast cancers. Using luminal A breast cancers from the UNC337 series we produced a ranked gene list by Pearson correlation with ELF5 expression. This approach produced an enrichment map that was very similar to that produced above (Figure 2) by forced ELF5 expression, with cell cycle sets, cancer sets, and sets describing luminal characteristics and estrogen responsiveness prominent among the suppressed gene clusters (Figure S11), demonstrating a very similar action of endogenous ELF5 in luminal A breast cancers compared to forced ectopic expression in luminal breast cancer cells. Author Summary The molecular subtypes of breast cancer are distinguished by their intrinsic patterns of gene expression and can be used to group patients with different prognoses and treatment options. Although molecular subtyping tests are currently under evaluation, some of them are already in use to better tailor therapy for patients; however, the molecular events that are responsible for these different patterns of gene expression in breast cancer are largely undefined. The elucidation of their mechanistic basis would improve our understanding of the disease process and enhance the chances of developing better predictive and prognostic markers, new therapies, and interventions to overcome resistance to existing therapies. Here, we show that the transcription factor ELF5 is responsible for much of the patterning of gene expression that distinguishes the breast cancer subtypes. Additionally, our data suggest that ELF5 may also be involved in the development of resistance to therapies designed to stop estrogen stimulation of breast cancer. These effects of ELF5 appear to represent a partial carryover into breast cancer of its normal role in the mammary gland, where it is responsible for the development of milk-producing structures during pregnancy.
Identification of ELF5 DNA Binding Sites by ChIP-Seq We used a mixture of antibodies against V5 and ELF5 to immunoprecipitate DNA bound by ELF5-V5 in T47D cells, which we then sequenced, allowing us to map the ELF5-bound regions of the human genome and to identify the direct transcriptional targets of ELF5. Intersection of MACS and SWEMBL peak calls [25,26] identified 1,763 common sites of ELF5 interaction in the genome at 48 h. Data are available in Table S1 or via GEO (GSE30407). DNA binding was much higher at 48 h than 24 h (Figure 3A), consistent with the observed changes in gene expression by Affymetrix arrays. Combination of the Affymetrix expression and chromatin immunoprecipitation of DNA followed by DNA sequencing (ChIP-Seq) data showed that ELF5 binding within 10 kb of a transcription start site (TSS) changed the expression level of that gene to a much greater extent than expected by chance (Figure 3B), demonstrating that ELF5 has consistent transcriptional activity via association with DNA within this range. ELF5 bound mostly to distal intergenic regions of the genome (50%) and to introns within genes (25%), but also at high frequency to promoter regions (20%), mostly within 1 kb of a TSS (18%). Downstream (Dstr) sites were seldom used. The 5′ UTR was also a frequent target of ELF5 but the 3′ UTR was infrequently targeted (Figure 3C). Transcription factor motifs (Figure 3D) contained within the DNA fragments precipitated by ELF5 were predominantly ELF5 and other ETS factor motifs; however, we also observed enrichment of sites for Stat1 and Stat3, which contain a TTCC core ETS motif. We also observed very significant enrichment of sites for the FOXA1 and NKX3-2 transcription factors. The binding of ELF5 to the FOXA1 promoter region is shown in Figure 3E. We validated the indicated peak on the FOXA1 promoter, and three other target transcription factors, by ChIP-qPCR (Figure 3F). FOXA1, RUNX1, GATA3, and MEIS2 were validated as targets of ELF5 by comparison to input, indicating that ELF5 heads a transcriptional cascade. We searched for curated functional signatures among the ChIP targets using GSEA (Figure 3G).
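The peak-to-gene assignment underlying the Figure 3B analysis can be sketched as follows; this is a simplified illustration rather than the authors' pipeline, and the input file names, column layouts and the pandas-based approach are assumptions made for the example.
import pandas as pd

# Hypothetical inputs: BED-like ELF5 peak calls and a table of transcription start sites.
peaks = pd.read_csv("elf5_peaks.bed", sep="\t", names=["chrom", "start", "end"])
tss = pd.read_csv("tss.tsv", sep="\t", names=["chrom", "pos", "gene"])

peaks["summit"] = (peaks["start"] + peaks["end"]) // 2
genes_with_peak = set()
for chrom, chrom_peaks in peaks.groupby("chrom"):
    chrom_tss = tss[tss["chrom"] == chrom]
    for summit in chrom_peaks["summit"]:
        near = chrom_tss[(chrom_tss["pos"] - summit).abs() <= 10_000]  # within 10 kb of a TSS
        genes_with_peak.update(near["gene"])

print(len(genes_with_peak), "genes have an ELF5 peak within 10 kb of their TSS")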
Many of the functional signatures observed in the ChIP data were also present in the expression data, demonstrating a direct transcriptional action of ELF5 to exert these regulatory effects. The Effect of ELF5 on Breast Cancer Cell Accumulation We examined changes in phenotype observed in T47D-ELF5-V5 and MCF7-ELF5-V5 cells following DOX treatment. In control T47D and MCF7 cells carrying the puromycin-resistant, but otherwise empty expression vector, normal logarithmic accumulation of cells during culture continued with or without DOX (Figure 4A). In contrast, when ELF5-V5 was induced (denoted as T47D-ELF5-V5 and MCF7-ELF5-V5), cells stopped accumulating between 24 and 48 h after DOX administration (Figure 4B), regardless of the timing of induction (Figure S12A). The 4-fold induction of ELF5-V5 expression by DOX in T47D cells was similar to that produced for endogenous ELF5 with R5020, a synthetic progestin (Figure S12A, inset), demonstrating that this model produces physiological increases of ELF5 expression. The effect was also reversible (Figure S12B). Investigation of anchorage-independent growth in soft agar showed that induction of ELF5-V5 produced fewer colonies (Figure S12C). Xenografts of T47D-ELF5-V5 cells in nude mice grew at a slower rate when mice received DOX (Figure 4C). Knockdown of ELF5 expression by more than 80% had a small effect on total cell accumulation in T47D or MCF7 cells (Figure S12D). We knocked down ELF5 in two basal breast cancer cell lines and observed a significant and sustained reduction in cell accumulation rate (Figure 4D and 4E), which was not seen in luminal cells (Figure S12D). This observation clearly demonstrates a subtype-specific role of ELF5 in breast cancer cells. Cells can fail to accumulate in culture via two main mechanisms, by reduced rates of cell division or by the loss of cells through detachment and apoptosis. We investigated these possibilities. (Figure 2 legend: Each circular node is a gene set with diameter proportional to the number of genes. The outer node color represents the magnitude and direction of enrichment (see scale) in T47D cells, inner node color enrichment in MCF7 cells. Thickness of the edges (green lines) is proportional to the similarity of gene sets between linked nodes. The most related clusters are placed nearest to each other. The functions of prominent clusters are shown. The network can be examined in detail using the scalable PDF in Figure S4.) We examined cell proliferation. Labeling of cells with BrdU and propidium iodide showed that induction of ELF5-V5 caused repartitioning of cells from S-phase into gap 1 of the cell cycle (G1) (Figure S13A). Western blotting showed a loss of phosphorylated forms of the pocket proteins, p130, p107, and Rb, accompanied by loss of cyclin proteins A2, B1, and D1, and accumulation of the inhibitor p21 (Figure S13B). Many of these changes also occurred at the mRNA level (Figure S13C), indicating a transcriptional basis to these changes that together suggest inhibition of proliferation by G1 arrest. To test this we arrested T47D-ELF5-V5 cells in the G1 phase of the cell cycle using hydroxyurea (HU) and then released them, by HU wash-out, into cycle in the presence and absence of induction of ELF5-V5 expression (Figure 4F, corresponding flow cytometric plots in Figure S13D, and Western blot quantification in Figure S13E).
Induction of ELF5-V5 reduced the percentage of cells exiting G1 into S-phase and was associated with a reduced accumulation of cyclin D1 protein and reduction in the expression of cyclin B1, demonstrating that ELF5-V5-expressing cells failed to re-enter the cell cycle from G1. We previously formed a set of 641 genes associated with cell cycle control by a combination of genes from cell cycle-related GO ontologies [27]. This set very significantly overlapped with 125 genes repressed, and 42 genes induced, by ELF5 expression (Figure S14A), and of these 55 were ELF5 ChIP targets (Figure S14A, Figure S14B heat maps), indicating a direct transcriptional influence of ELF5 on proliferation. Upregulated ELF5 ChIP targets were characterized by the presence of tumor suppressor genes while downregulated genes were enriched in genes controlling cell proliferation. Upregulated genes included RB1CC1 (promotes RB1 expression), TBRG1 (promotes G1 arrest via CDKN2A), IRF1 (initiates interferon response), COPS2 (p53 stabilizer), CHFR (prevents passage into mitosis), DAB2 (lost in ovarian cancer), and RAD50 (DNA damage checkpoint). Also in this group are DDIT3 (promotes apoptosis due to endoplasmic reticulum stress) and ERBB2IP (disrupts RAF/RAS signaling). ELF5 ChIP targets repressed by elevated ELF5 were characterized by genes required for mitosis, such as GTSE1 (microtubule rearrangement), KIF11 (spindle formation), FBXO3 (anaphase-promoting complex), KNTC1 (mitotic checkpoint), PPP1CC (PTW/PP1 complex member), and PMF1 (MIS12 complex chromosome alignment). Other pro-proliferative ELF5 ChIP targets that were downregulated include EGFR and IGF1R (potent mammary mitogen receptors), MAPK13 (downstream signaling molecule), c-MYC (key regulator of proliferation), KLF10 (transcriptional repressor of proliferation), and NME1 and SLC29A2 (required for nucleotide synthesis). This 55-gene signature is significantly enriched in many breast cancer series (Figure S14C) and showed differential expression between ER+ and ER− cancers (Figure S14B, right-hand heat map). Interestingly, the mitogenic genes that are repressed by forced ELF5 expression in ER+ T47D cells are generally highly expressed in ER− cancers (Figure S14B), showing again that ELF5 has a subtype-dependent role in cell proliferation and may contribute to the proliferative drive in ER− cancers. A large number of detached and floating cells were observed in cultures after 48 h of DOX treatment and became most prominent by 72 h (Figure 5A). Replating efficiency was greatly reduced, indicating that new adherence proteins could not be rapidly synthesized and deployed following their destruction with trypsin (Figure 5B). Higher rates of apoptosis, measured using flow cytometry, were observed in T47D-ELF5-V5 and MCF7-ELF5-V5 cells (Figure 5C). The levels of beta1-integrin were much lower by 72 h (Figure 5D, quantitated in Figure S12E and S12F). Its signaling partner integrin-linked kinase (ILK) also showed reduced expression. Focal adhesion kinase (FAK) levels also fell slightly but phosphorylation of FAK was much reduced from 5 d, as was SRC kinase expression and especially phosphorylation, indicating reduced signal transduction in response to the lower levels of beta1-integrin and detection of the extracellular matrix. Together these results implicate loss of extracellular integrins in the detachment of cells in response to forced ELF5 expression.
ELF5 Modulates Estrogen Action Among the direct transcriptional targets of ELF5 are a number with established roles in proliferation in response to estrogen, such as FOXA1, MYC, CDK6, FGFR1, and IGF1R (Table S1). In addition, key genes associated with the estrogen-sensitive phenotype, such as ESR1, and estrogen-response genes, such as GREB1 and XBP1, are downregulated (Figure S9). Western blotting showed that induction of ELF5-V5 expression caused falls in the levels of ER, the estrogen-induced gene progesterone receptor (PGR), pioneer factor FOXA1, and progenitor cell-regulator GATA3 (Figure 6A). The activities of ER and FOXA1 transcriptional reporters (ERE and UGT2B17, respectively) also fell (Figure 6B), demonstrating that forced ELF5 expression suppressed estrogen sensitivity. These cell lines are dependent on estrogen for proliferation, raising the possibility that forced ELF5 expression inhibited proliferation simply by reducing ER expression. We tested this possibility by forced re-expression of ER and treatment with estrogen (Figure 6C), but we did not observe any relief of the inhibition of proliferation caused by forced ELF5 expression. We further examined the effects of induction of ELF5 on estrogen-driven gene expression by intersecting our ELF5-regulated genes with a previously defined set of estrogen-regulated genes in MCF7 cells [27]. Among a set of 477 genes showing estrogen-induced expression in MCF7 cells (Figure 6D, "E2 induced"), 115 showed loss of expression in response to ELF5-V5, an overlap with a highly significant p-value (p = 5E−84) and odds ratio (OR = 16). These genes (heat map in Figure S14D, "E2I") contained signatures for cell cycle control and DNA replication gene sets. Furthermore, when we focused on 71 estrogen-induced genes previously defined as involved in proliferation [27] (Figure 6D, "E2 Prolif"), we observed the same very significant enrichment (p = 2E−32), confirming the action of ELF5 to repress estrogen-induced proliferative gene expression. The corresponding overlap with estrogen-repressed genes (Figure S14D, "E2R") was much less pronounced. Thus the predominant effect of forced ELF5 expression was suppression of estrogen-induced gene expression. Just 29 genes were direct transcriptional targets of both ELF5 and ER, a small fraction of the total number of genes showing changed expression, indicating that the actions of ELF5 and ER are largely executed by intermediaries, rather than direct action of ELF5 and ER at the same genomic locus. We used hypergeometric enrichment to discover previously defined experimental signatures among the direct transcriptional targets of ELF5. Signatures indicative of estrogen action were prominent among ELF5 ChIP targets downregulated (fold change [FC] > 1.5 and false discovery rate [FDR] < 0.25) by ELF5 expression, including sets of genes that were ESR1 targets, involved in endocrine resistance, or which distinguished the luminal from basal subtypes (Figure 6E). Among the cancer-focused sets provided by Oncomine (set names in lower case in Figure 6E) were many associated with distinction of the triple negative and ER/PR/HER2-positive subtypes. Sets among the upregulated ChIP targets included metastasis, apoptosis, and high grade. Heat maps illustrating the changed expression of the individual ELF5 ChIP targets are shown using the top-hit breast cancer series (* marked sets in Figure 6E shown as heat maps in Figure S14E).
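The overlap statistics quoted above can be reproduced in outline with a hypergeometric test; the short sketch below is illustrative only, and the background of 20,000 genes and the size of the ELF5-repressed list are assumed numbers, not values taken from the study.
from scipy.stats import hypergeom

background = 20000        # assumed number of genes interrogated on the array
e2_induced = 477          # estrogen-induced set ("E2 induced")
elf5_repressed = 2000     # assumed size of the ELF5-repressed gene list
overlap = 115             # genes in both sets

p = hypergeom.sf(overlap - 1, background, e2_induced, elf5_repressed)
a, b = overlap, e2_induced - overlap
c, d = elf5_repressed - overlap, background - e2_induced - elf5_repressed + overlap
print("hypergeometric p = %.3g, odds ratio = %.1f" % (p, (a * d) / (b * c)))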
This investigation again illustrates the repression of estrogen action by induction of ELF5, but indicates that many of the poorer prognostic aspects of the ER2 subtypes may be due to ELF5-induced genes. Again ELF5 appears to have subtypespecific actions. ELF5 Specifies Breast Cancer Subtype We examined the ability of the direct transcriptional targets of ELF5 to predict aspects of breast cancer phenotype. A set of 164 genes was defined as ChIP targets with altered expression in response to induction of ELF5 in T47D cells. This gene set accurately predicted ER status ( Figure S15A) in the Reyal breast cancer series [28]. The Confusion Matrix ( Figure S15B) shows that the direct transcriptional targets of ELF5 accurately predicted intrinsic subtype, with nearly 100% of the luminal/basal distinctions correctly identified. Clustering of the NKI-295 set [29], using the direct transcriptional targets of ELF5, distinguished the intrinsic subtypes and produced a clear separation of tumor characteristics such as poor prognosis, early metastasis, early death, recurrence, survival, grade, mutation status, and marker expression, such as ER and PR ( Figure S15C). We assessed the ability of ELF5 expression to directly alter molecular subtype using two methods developed for this purpose, GSEA [30] and expression signature analysis [15] (Figure 7). Figure 7A shows a cytoscape network of gene sets distinguishing molecular subtype, which combines data from forced expression of ELF5 in MCF7 luminal breast cancer cells (node center) with data from knockdown of ELF5 in HCC1937 basal breast cancer cells (outer node ring). This is a sub network of the complete cytoscape network ( Figure S16) and the T47D-HCC1937 sub network ( Figure S17) is almost identical. The gene sets clustered into four groups distinguishing luminal subtype, basal subtype, estrogen responsiveness, and the mesenchymal phenotype. ELF5 suppressed the mesenchymal phenotype in both luminal and basal cells, representing a subtype-independent action. In luminal cells forced ELF5 expression suppressed the luminal subtype and estrogen-responsive phenotype. In basal cells knockdown of ELF5 expression suppressed the basal subtype, illustrating subtypespecific actions of ELF5. Figure 7B shows the expression signature analysis. In HCC1937 cells knockdown of ELF5 produced a very significant shift in molecular subtype away from the basal subtype toward the claudin-low and normal-like subtypes, consistent with the enrichment of the mesenchymal phenotype observed by GSEA and the suppression of patterns of basal gene expression. In both luminal cell lines a shift away from the luminal subtype was observed ( Figure 7C and 7D), consistent with the GSEA results. In MCF7 the shift was toward the basal and Her2+ subtypes and in T47D toward the normal-like and claudin-low subtypes. ELF5 Underpins the Acquisition of Antiestrogen Resistance Expression studies and ChIP-Seq (Figures 2, 6E, and 7) showed that ELF5 transcriptionally regulated a number of genes involved in resistance to antiestrogens. We examined ELF5 expression in Tamoxifen (TAMR)- [31] or Faslodex (FASR)- [32] resistant cells, derived from MCF7 cells in Cardiff (MCF7C). Greatly elevated levels of ELF5 mRNA were observed ( Figure 8A) compared to their parental MCF7C cells, accompanied by loss of expression of key estrogen response genes. 
The 164-gene ELF5 transcriptional signature used to classify subtype in Figure 6 showed a response in TAMR cells (22 genes significantly suppressed, 24 significantly induced) and FASR cells (34 genes significantly suppressed, 46 significantly induced). qPCR confirmed the elevation of ELF5 mRNA in TAMR and FASR cells compared to the parental MCF7 (MCF7C) and MCF7 from the Garvan Institute (MCF7G) used elsewhere in this study (Figure 8B). When we intersected the genes from the antiestrogen sets enriched in the expression data in response to forced ELF5 expression (Figure 2A) with ELF5 ChIP targets, and asked using Oncomine what drug treatments produced similar profiles, we found a predominance of signatures resembling those resulting from inhibition of EGFR (a key pathway driving Tamoxifen resistance in TAMRs [31]) among overexpressed genes, and signatures indicative of the IGFR1 pathway and other kinases, or mitosis-disrupting agents, among underexpressed genes (Figure 8C). Both these pathways have been implicated in the development of resistance to antiestrogens. Treatment with estrogen greatly reduced ELF5 expression in MCF7C cells and this effect was blunted in the TAMR cells (Figure 8D). Knockdown of ELF5 in TAMR cells completely suppressed their proliferation (Figure 8E). ELF5 IHC on TAMR cell pellets confirmed the knockdown. Measurements of S-phase showed that 25% of TAMR cells and 42% of MCF7C cells were in S-phase of the cell cycle during log-phase growth, consistent with the known characteristics of these lines. Knockdown of ELF5 reduced TAMR S-phase by 28%, and MCF7C S-phase by 12% (Figure 8F), consistent with the observed effects on cell number. These observations demonstrate that elevation of ELF5 expression, and greatly increased reliance upon it for cell proliferation, is a key event in the acquisition of Tamoxifen insensitivity. Taken together, the results in this study show that ELF5 is involved in the proliferation of breast cancer cells in culture, that ELF5 suppressed the estrogen-responsive phenotype in luminal breast cancer cells, and induced aspects of the basal phenotype in basal breast cancer cells. In both subtypes ELF5 suppressed the mesenchymal phenotype. ELF5 specified patterns of gene expression that distinguished the breast cancer subtypes. Significantly for clinical management of breast cancer, elevation of ELF5 is a mechanism by which MCF7 cells can become insensitive to antiestrogen treatment. Discussion In this report we show that ELF5 exerts wide transcriptional effects with functional outcomes on cell proliferation, adhesion, the molecular determinants of breast cancer subtype and phenotype, and acquired resistance to Tamoxifen. These outcomes are aspects of a general specification of an estrogen-insensitive cell fate exerted through modulation of ER, FOXA1, and other transcriptional regulators in luminal cells, and the induction of basal characteristics in basal cells. Two key factors in determining the luminal phenotype are FOXA1 and ER. Recent findings show that ER-chromatin binding and the resulting transcriptional response to estrogens are dependent on FOXA1 expression and its action as a pioneer factor for chromatin-ER binding [3]. We show that forced expression of ELF5 in the luminal context directly represses FOXA1 expression and transcriptional activity. We also show that ER expression falls, as does the expression of many estrogen-responsive genes and ER-driven transcription.
This mechanism allows ELF5 to suppress key aspects of the luminal phenotype, demonstrated by GSEA and by expression signature analysis. The resulting loss of proliferation is likely to result from multiple mechanisms. Although the proliferation of MCF7 and T47D cells is stimulated by estrogen-driven signaling and transcription, the loss of ER signaling was not the sole cause of proliferative arrest as forced ER re-expression did not effect a rescue ( Figure 6C). Rather, ELF5 may act directly to regulate genes controlling cell proliferation ( Figure S14). In addition the loss of beta1-integrin was observed. Beta1-integrin acts in mammary cells via FAK and src to induce p21 and cell cycle arrest [33], a mechanism that is also apparent in our data ( Figures 5D and S13C). There are additional large changes in the expression of many other key signaling molecules (Figures 2 and S4) that may also act to suppress proliferation, including EGFR and IGFR signaling. Finally the changes in metabolic activity, including protein and RNA synthesis and oxidative phosphorylation are also likely to impinge on cell proliferation. Loss of ELF5 expression in the basal subtype HCC1937 cells resulted in reduced cell proliferation, loss of basal patterns of gene expression, and a shift toward the mesenchymal phenotype, demonstrated by the GSEA results and the expression signature analysis. These results show that ELF5 specifies key characteristics of the basal phenotype, but also prevents the expression of the mesenchymal phenotype, as it does in the luminal context. Thus we can discern both subtype-dependent and -independent effects of ELF5. The subtype-dependent effects are a likely product of the differentiation state of the cell and the fate decisions made during differentiation produced by interactions of the ELF5 regulatory transcriptional network with other transcriptional regulatory networks present in the cell, while the independent effects are likely to result from direct transcriptional actions that lack these interactions and are thus subtype independent. We propose that the effects of ELF5 in the breast cancer cell lines represent a carry over of the normal developmental role of ELF5 into breast cancer ( Figure 8G). The primary developmental target for ELF5 is the mammary progenitor cell population (P), produced from the stem cell population (S), and which exhibits aspects of the two cell lineages that it produces (drawn as P cell overlaps). Under the dominant influence of ELF5 a progenitor cell differentiates to become an ER2 cell, and with further differentiation and hormonal stimulation ultimately produces a mature (M) alveolar cell capable of large-scale milk synthesis. By this process ELF5 establishes the secretory cell lineage [12]. It is likely that under a dominant estrogen influence the same progenitor cell differentiates to become an ER+ cell with different functions, such as regulation of the stem and progenitor hierarchy by paracrine influence. Recent findings support this mechanism [30]. It is likely that ELF5 and FOXA1 provide the key to the decision made by the progenitor cell. We hypothesize that the outcome of competing estrogen and ELF5 actions on a precancerous instance of the progenitor cell may play a significant role in determining the subtype of breast cancer that results. 
Additional events may occur subsequent to this decision that alter ELF5 expression and these may be involved in aspects of tumor progression, such as the acquisition of insensitivity to antiestrogens when ELF5 expression increases in the context of luminal breast cancer, or the acquisition of the mesenchymal phenotype when ELF5 expression is lost in basal breast cancer. Some tumors, such as the claudin-low subtype, may be innately mesenchymal, as a result of the oncogenic transformation of late stem or early progenitor cells that do not yet express ELF5 or FOXA1 and ER. ELF5 may act to specify epithelial characteristics during the normal differentiation of the stem cell to become a progenitor cell. In this context forced ectopic expression of ELF5 may reduce the mesenchymal nature of the claudin-low subtype. We have observed that progestin-induced inhibition of T47D cell proliferation is accompanied by an increase in ELF5 expression, and that this in turn acts to oppose the inhibitory action of progestins on cell cycle regulation [34]. A similar effect may protect some cells from complete growth inhibition by Tamoxifen, and prime them for later ELF5-driven escape of antiestrogen therapy. Further increases in ELF5 expression provide both an alternate source of proliferative signals and further suppression of sensitivity to ER-mediated signals, producing a mechanism allowing TAMR cells to escape TAM-induced growth inhibition. In this scenario the ELF5 ChIP target c-MYC may provide the proliferative drive. These experiments shed new light on the process of acquired resistance to antiestrogens, implicating ELF5 as a potential therapeutic target in antiestrogen-resistant disease and providing a potential marker predicting the failure of antiestrogen therapy. Overall we show that the transcriptional activity of ELF5 suppresses estrogen action in luminal breast cancer, enhances expression of the basal phenotype, specifies patterns of gene expression distinguishing molecular subtype, and exerts a proliferative influence that can be modified to allow luminal breast cancer to become resistant to antiestrogen treatment. Ethics Statement Mice were maintained following the Australian code of practice for the care and use of animals for scientific purposes observed by The Garvan Institute of Medical Research/St. Vincent's Hospital Animal Ethics Committee (AEC). Plasmid Construction, Transfection, Infection, and Treatment ESE2B was tagged at the 3′ end with V5 and incorporated into the pHUSH-ProEX vector [35] used as a retrovirus. Transient transfection used FuGENE (Roche). Puromycin was used at 1 mg/ml for MCF7-EcoR and 2 mg/ml for T47D-EcoR, DOX at 0.1 mg/ml changed daily, R5020 at 10 nM. Attachment Assays Monolayers were detached with 0.25% trypsin, counted, and replated at equal density. Unattached cells were gently removed in PBS prior to counting with a haemocytometer. Flow Cytometry Asynchronous cells were pulsed with 10 mM BrdU (Sigma) for 2 h (MCF7) or 20 min (T47D) prior to harvesting. Where indicated, cells were synchronised with 1 mM hydroxyurea treatment for 40 h. Cells were harvested via trypsinisation, and fixed in 70% ethanol for 24 h, stained with 10 mg/ml propidium iodide (Sigma) for 2-5 h, and incubated with 0.5 mg/ml RNase A (Sigma). Flow cytometry was performed using FACS Calibur or FACS Canto cytometers (BD Biosciences), and data analysis performed with FlowJo software (Tree Star Inc.).
Microarray Analysis Total RNA was extracted with the RNeasy Minikit (Qiagen) and DNase-treated with the DNase kit (Qiagen). RNA quality was assessed using an RNA Nano LabChip and Agilent Bioanalyzer 2100. Probe labeling and hybridization to Affymetrix Human Gene 1.0 ST Gene Arrays was done at the Ramaciotti Centre for Gene Function Analysis at the University of New South Wales. Microarrays were normalised using RMA via the NormalizeAffymetrixST GenePattern module (http://pwbc.garvan.unsw.edu.au/gp). Differentially expressed genes were detected using Limma, using positive false discovery rate (FDR) multiple hypothesis test correction. GSEA (version 2.04) used 1,000 permutations, in pre-ranked mode, using the t-statistic from Limma to rank each gene, with gene sets from MSigDB version 3.0. Data are available from GEO with accession number GSE30407. ChIP-Sequencing Analysis Cells were fixed in 1% formaldehyde at 37 °C for 10 min, washed 2× with cold PBS, scraped into 600 µl PBS with protease inhibitors (P8340, Sigma), spun 2 min at 6,000 g, washed as before, and snap frozen in liquid nitrogen. ChIP-Seq was performed as previously described [37] using a 50%-50% mixture of V5-specific antibody and the Santa Cruz N20 anti-ELF5 antibody. DNA was processed for Illumina sequencing using 36-bp reads on a GAIIx. Sequences were aligned against NCBI Build 36.3 of the human genome using MAQ (http://maq.sourceforge.net/) with default parameters. The aligned reads were converted to BED format using a custom script. ChIP-qPCR used the ABI Prism 7900HT Sequence Detection System. Mice Balb/C Nude mice (Jackson Lab) were used as xenograft hosts. Breast Cancer Subtype Potential as classifiers was investigated using diagonal linear discriminant analysis (DLDA) or a Naïve Bayes classifier, with 100 iterations of 10-fold cross-validation (CV) and the misclassification rate recorded in each instance. Boxplots of ER misclassification rate were generated for each of the 1,000 (100 iterations of 10-fold CV) classifiers built to predict ER status. The confusion matrix shows as percentages the relationship between true intrinsic subtype classification, as defined by [1] and [38], and that predicted by the Naïve Bayes classifier. Expression signature analysis [15] and GSEA [30] to examine changes in subtype were carried out as described. Figure S1 ELF5 expression in the breast cancer molecular subtypes. Top panels. Tumors from the indicated studies were classified by molecular subtype and their ELF5 expression level given by the Affymetrix probe set 220625_s_at is graphed. Similar results were found with the 220624_s_at probes (not shown). The p-value for differential expression of ELF5 by the basal subtype is shown. Bottom panel, Redrawn from Ma et al. [21] to combine the clinical data given as text with the graphical format. constructed using this vector without the ELF5-V5 cassette. (B) Quantification of ELF5 isoforms in T47D and MCF7 cells. qPCR specific to each isoform was used to compare expression levels. Amplification efficiency was very similar for both assays. Left-hand side panel, ESE2B was expressed at levels more than three orders of magnitude greater than ESE2A. Right-hand panel shows relative ESE2A expression. (C) Induction of ELF5-V5 expression increases the mRNA level of its direct transcriptional target, whey acidic protein (Wap), compared to an empty vector control plasmid, in HC11 mouse mammary epithelial cells. Cells were treated from day 4 by the lactogenic hormones prolactin, insulin and hydrocortisone.
(TIF) Figure S4 Visualization of the transcriptional functions of ELF5 in breast cancer. GSEA-identified signatures indicative of function within expression profiles derived from forced ELF5 expression in T47D and MCF7 luminal breast cancer cells. Results are visualized using the enrichment map plug-in for Cytoscape. Each node is a gene set, diameter indicates size, outer node color represents the magnitude and direction of enrichment (see scale) in T47D cells, inner node color enrichment in MCF7 cells. Thickness of the edges (green lines) is proportional the similarity of linked nodes. The most related clusters are placed nearest to each other. View the PDF at 800% or 1,600% to explore the network in detail. (PDF) Figure S5 Visualization of the transcriptional functions of ELF5 in breast cancer cell cycle and cancer gene set sub network in T47D and MCF7 cells. Inset, the region of the complete network from Figure S4 that is highlighted in red and yellow is expanded here. Main panel, nodes, and edges forming the cell cycle and cancer-related network, as explained in the key and legend of Figure S4 Figure S15 The ELF5 transcriptional signature correctly distinguishes breast cancer subtype. An ELF5 transcriptional signature was defined as ELF5 ChIP targets with robust changes in expression in response to forced ELF5 expression in T47D cells. It was used to predict ER status (A) or breast cancer subtype (B) in the Reyal series. Rows of the confusion matrix show percent correct subtype prediction at the shaded cells and the distribution of confused predictions by subtype across the row. (C) Ability of this ELF5 transcriptional signature to predict breast cancer subtype and clinical characteristics in the NKI295 series. The 55 Elf5 gene signature was used to cluster the NKI295 series. Subtypes assigned to this series by its authors are colored as indicated. Heat map shows gene expression levels with enriched gene sets within the major gene clusters listed along side. Bottom panel shows associated clinical correlates with the significance of each colored bar indicated by the text at the right. Generally good outcomes are in yellow, poor outcomes in purple or red. (TIF) Figure S16 Visualization of the transcriptional functions of ELF5 in breast and mammary cancer. GSEA-identified signatures indicative of function within expression profiles derived from forced ELF5 expression in T47D luminal breast cancer cells and knockdown of ELF5 function in HCC1937 basal breast cancer cells. Results are visualized using the enrichment map plug-in for Cytoscape. Each node is a gene set, diameter indicates size, outer node color represents the magnitude and direction of enrichment (see scale) in HCC1937 cells, inner node color enrichment in T47D cells. Thickness of the edges (green lines) is proportional the similarity of linked nodes. The most related clusters are placed nearest to each other. The functions of prominent clusters are shown. (TIF) Figure S17 ELF5 specifies breast cancer subtype. GSEA network derived from forced ELF5 expression in T47D luminal breast cancer cells (inner node color) and knockdown of ELF5 expression in HCC1937 basal breast cancer cells (outer node color). Node size is proportional to gene set size, thicker green lines indicate greater leading edge gene overlap. Nodes are positioned according to similarity in leading edge genes. Labels indicate the functional significance of the four clusters generated. (TIF)
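A minimal sketch of the subtype-classification scheme described in the methods (a Naïve Bayes classifier scored with repeated 10-fold cross-validation and summarized in a confusion matrix) is given below; the expression matrix, labels and scikit-learn implementation are placeholders and assumptions, not the authors' workflow.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score, cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(295, 164))   # placeholder: samples x 164 ELF5 transcriptional targets
y = rng.choice(["LumA", "LumB", "Basal", "Her2", "Normal"], size=295)  # placeholder subtype labels

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=100, random_state=0)
accuracy = cross_val_score(GaussianNB(), X, y, cv=cv)
print("misclassification rate: %.3f +/- %.3f" % (1 - accuracy.mean(), accuracy.std()))

predicted = cross_val_predict(GaussianNB(), X, y, cv=10)
print(confusion_matrix(y, predicted, labels=["LumA", "LumB", "Basal", "Her2", "Normal"]))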
8,724.8
2012-12-01T00:00:00.000
[ "Biology" ]
Ultrastable embedded surface plasmon confocal interferometry As disease diagnosis becomes more sophisticated, there is a requirement to measure small numbers of molecules attached to, for instance, an antibody. This requires a sensor capable not only of high sensitivity but also the ability to make measurements over a highly localized region. In previous publications, we have shown how a modified confocal microscope allows one to make localized surface plasmon (SP) measurements on a scale far smaller than the surface plasmon propagation distance. The present implementation presents a new ultrastable interferometer system, which greatly improves the noise performance. Hitherto, we have used the central part of the back focal plane to form a reference beam with the reradiated surface plasmons. In the current system, we block the central part and use the spatial light modulator to deflect s-polarized light into the pinhole to form an interference signal with the surface plasmons, thus creating an ultrastable interferometer formed with two beams incident at very similar angles. We demonstrate the superior noise performance of the system in hostile environments and examine further adaptations of the system to further enhance noise performance. An optical interferometer with very low noise could improve the sensitivity of plasmonic-based biomolecule sensors. Suejit Pechprasarn and co-workers from the University of Nottingham in the UK and Hong Kong Polytechnic University have outlined a microscope system that supports ultrastable, highly sensitive confocal surface-plasmon interferometry. Unlike previous designs, this system has a spatial light modulator that imparts a phase shift to allow interference between reference and signal beams that have very similar angles of incidence, which would not usually interact. This should significantly improve the noise and stability of the system, thus allowing scientists to detect a smaller number of molecules on a sample. INTRODUCTION In many biological sensing experiments, a measure of the sensitivity of a label free sensor, such as a surface plasmon (SP) sensor, is given in terms of the minimum detectable change in refractive index units. This gives an approximate measure of smallest detectable coverage (in mass per unit area) of a layer of analyte deposited on the sample; it does not account for the lateral extent over which the deposited analyte extends. It is now becoming apparent that many molecules such as cytokines are extremely powerful indicators of inflammatory response. Measurement of cytokine response can thus provide a powerful early diagnostic tool as well as having considerable potential in prognosis. Unfortunately, even though the relative change in cytokine concentration is large, the absolute concentration of cytokines is small compared to other species (by typically 6-8 orders of magnitude compared to commonly occurring plasma proteins). 1 A sensor with poor lateral resolution requires a large number of analyte molecules compared to a sensor with similar sensitivity in terms of change in local refractive index where the measurement is highly localized. For this reason, there is a strong imperative to develop sensors with both improved sensitivity and lateral resolution. For instance, the use of heterodyne interferometry 2,3 provides one approach to achieve this aim. Similarly, Yuan's group 4 has developed several systems adapted to SP microscopic measurements. 
More recently, we have developed variants on confocal microscopy that perform in a similar manner to heterodyne interferometry but with simpler, more compact instrumentation. In this paper, we present a microscopic plasmonic sensor which processes the optical beam paths to form a tightly focused common path interferometer with excellent immunity to external noise sources such as microphonics. The results presented in this paper were obtained on model non-biological samples to illustrate the operating principles and performance of the new configuration. The system is operated in a defocused condition to ensure that the SPs propagate a substantial distance across the sample; however, the principal region of interaction remains tightly focused as the SPs come to a focus on the optical axis. 5 We have shown previously 6 how a defocused confocal microscope acts as an interferometer in which two principal optical paths form the two beams of an interferometer, which are shown as the black rays P1 and P2 in Figure 1a and 1b. These paths consist of a beam illuminating the sample close to normal incidence and another which is converted to a SP which propagates along the sample surface reradiating continuously. 7 The confocal pinhole ensures that only the reradiated light that appears to come from the focus contributes to the detected signal. This confocal arrangement therefore ensures that the path of the SPs is well defined by the optical configuration rather than their propagation length. We have shown in subsequent publications 8,9 how a spatial light modulator conjugate with the back focal plane is effective in allowing for different processing methods that overcome mechanical scanning or allow the amplitude and phase of the SP to be extracted directly, thus providing an exceptionally robust means of processing the signal. For instance, the spatial light modulator (SLM) can be used to change the phase of P1 relative to P2 so that phase stepping may be performed, which allows the phase associated with the SP to be extracted directly. Although the interference between P1 and P2 provides a relatively stable interferometer, the system sensitivity will, in many practical situations, be limited by microphonic noise, since both these beams hit the sample at different incident angles. Microphonic noise of amplitude Δz introduces a phase shift of the form Δφ = (4πnΔz/λ)(cos θp1 − cos θp2) (1), where Δφ represents the phase noise, n represents the refractive index of the coupling medium, λ represents the wavelength of the incident light in vacuum, and θp1 and θp2 represent the incident angles associated with paths P1 and P2. From now on we denote θp2 as θp since it is associated with the angle at which SPs are excited. In the implementation depicted in Figure 1a, where paths P1 and P2 interfere, θp1 is very close to 0, so that the first term in the brackets of Equation (1) can be replaced with 1. The idea behind the present system is to produce an interferometer where the reference and signal beams are incident at angles which are very close to each other so that the terms in the brackets disappear or at least become very small. The system still retains sensitivity to changes in the wave number of the SP, kp = (2πn/λ) cos θp. Clearly, as the ambient conditions change, the value of θp changes, so that the cancellation will become imperfect; however, in a normal binding experiment, where the change in θp is small, the cancellation of microphonics will still be extremely good, as quantified later in this paper.
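The benefit of matching the reference angle to the plasmon angle can be illustrated numerically from Equation (1); in the short sketch below the coupling-medium index, the plasmon angle and the 0.5° residual mismatch are illustrative assumptions, while the 270 nm rms displacement is the vibration level quoted later in the paper for the deflated table.
import numpy as np

wavelength = 632.8e-9        # He-Ne wavelength, m
n = 1.52                     # assumed refractive index of the coupling medium
dz = 270e-9                  # rms microphonic displacement quoted later in the paper, m
theta_p = np.radians(44.0)   # assumed plasmon excitation angle

def phase_noise(theta_ref):
    # Equation (1): residual phase noise for a reference beam at angle theta_ref
    return 4 * np.pi * n * dz / wavelength * (np.cos(theta_ref) - np.cos(theta_p))

print("normal-incidence reference: %.2f rad rms" % abs(phase_noise(0.0)))
print("reference 0.5 deg from the plasmon angle: %.3f rad rms" % abs(phase_noise(theta_p + np.radians(0.5))))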
In our previous publications, we have used linear input polarization in the back focal plane, which gives a continuous variation between p-polarization and s-polarization as the azimuthal angle changes 6,8,9 (Figure 2a). The use of radial polarization incident in the back focal plane has often been advocated in SP microscopy, which means the light incident on the sample is p-polarized for all azimuthal angles, so that more energy is coupled in SPs and the focus is tight and circularly symmetrical. 5 Our new interferometer, however, relies on linear polarization to generate interfering p- and s-polarized beams hitting the sample at similar incident angles. Examining Equation (1), we can see that an interferometer formed between a path generating a SP and one in which the incident light is primarily s-polarized will allow us to form an interferometer in which the sample and reference beams are incident on the sample at essentially the same angle of incidence. From Figure 1a, we can see that when the sample is defocused, the p-polarized light generates SPs, some of which appear to come from the focus to be collected by the confocal pinhole. In order to form our common path interferometer with the sample and reference beams incident at approximately the same angle, we remove path P1 and create a second path for the s-polarized light, which forms the reference arm of the interferometer. From Figure 1a, we see that the ray path corresponding to the s-polarized beam follows the path P1′, so that on reflection, it does not propagate parallel to the optical axis, and will thus miss the pinhole. If an optical element is inserted into the return path that acts like a wedge (a linear phase gradient), it can be deflected parallel to the optical axis and will pass through the pinhole forming the reference beam of the interferometer. Alternatively, the optical element can be placed in the path of the incident beam, as shown in Figure 1b, so that its path follows P1′ and hits the sample at point 'b' so that on reflection it also appears to come from focus, thus also forming an arm of the interferometer. 6,8,9 In our experimental arrangement, the optical element (a spatial light modulator) was applied to the incident beam, thus corresponding to the scheme associated with Figure 1b. Figure 2 shows a schematic of the pattern on the back focal plane used to effect the beam deflection. In positions where the light is primarily p-polarized, no linear phase gradient is applied, as a portion of the reradiated SPs will appear to come from the focus. On the other hand, where light is predominantly s-polarized, it will interact with the wedge, thus ensuring that it follows path P1′ of Figure 1b, thus returning to the pinhole. It is, of course, apparent that only along orthogonal directions is the incident light in a pure polarization state; for all other angles, there is continuous variation of the relative proportions of the two polarization states. This does not, however, produce any fundamental issues. As mentioned earlier, any incident s-polarization that does not interact with the wedge will miss the pinhole. Similarly, any p-polarized light interacting with the wedge will be deviated beyond the normal and miss the pinhole on the other side. This does, of course, entail some wastage of light. We can now summarize the form of the pattern on the SLM.
Firstly, only light close to the angle for excitation of SPs is allowed to pass, while other angles are blocked simply by setting adjacent pixels in antiphase as described in Refs. 8 and 9. The wedge is centered around the azimuthal angle corresponding to predominantly s-incident polarization; in the present work this angle corresponds to ±45°. The wedge angle is controlled by the gradient of the phase shifts and the effect of varying this value is discussed later in the paper. In addition, we can phase shift one beam of the interferometer relative to the other by simply imposing a constant phase shift in the arc region corresponding to predominant p-polarization. There are different ways of thinking about this approach: it may be thought of as generating an 'artificial' plasmon, in that the spatial light modulator generates a continuously changing phase shift for the s-incident polarization that mimics the phase shift imposed by the sample when a SP is excited. 10 Another way of thinking of the system is as a plasmonic analog of differential phase Nomarski microscopy; in this method, two adjacent regions on the sample are forced to interfere by passing through a polarization sensitive optical device. 11 In the present technique, we similarly use the SLM to force two beams to interfere that would otherwise not interact. On a uniform sample, it is a matter of convenience whether the SLM is placed on the incident or reflected beam, although for a structured sample, the respective transfer functions differ. When the SLM acts before the sample, it is more natural to think of the approach from the 'artificial' plasmon viewpoint; since this is the implementation used in our work, we use the shorthand 'artificial' plasmon to describe our system. It is important to note that confocal interferometry has been reported in the literature; 12 however, this method uses a separate reference beam to interfere with the beam returning to the pinhole. In the work reported here and previously, 8,9 the interference occurs between different parts of the illumination beam, so that one portion of the illumination beam acts as a reference. This approach facilitates our aim to obtain fine phase measurements between different paths within the same beam. MATERIALS AND METHODS The experimental set-up is essentially the same as described in Refs. 8 and 9 and is shown schematically in Figure 1c; the key difference in this paper is the manner in which the SLM (model no. BNS 512X512 phase; Boulder Nonlinear Systems Inc., Lafayette, Colorado, USA) is used to process the returning light by forming a wedge and performing the additional phase modulation as shown in Figure 2. Briefly, apart from the SLM, the key components in the system were the objective lens (1.45 numerical aperture (NA) oil immersion objective; Zeiss, Oberkochen, Germany), laser source (632.8 nm He-Ne laser, 10 mW) and charge coupled device (CCD) camera (Sony Digital Interface XCL-S600C). It should be pointed out that all the experiments reported in this paper were obtained with air as the final medium; if measurements in aqueous media are required, it is necessary to use a higher NA objective such as the 1.49 NA oil lenses available from Nikon (Chiyoda, Tokyo, Japan). In order to evaluate the system, the sample shown schematically in Figure 3 was fabricated. Five different layer thicknesses (including zero) of indium tin oxide were deposited on the gold substrate used to support the SPs.
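A rough numerical sketch of the back-focal-plane pattern summarized above is given below; the pupil radii, sector placement, wedge gradient and phase-step value are illustrative assumptions rather than the experimental settings.
import numpy as np

N = 512
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y) / (N / 2)                      # normalised pupil radius
azimuth = np.degrees(np.arctan2(y, x)) % 180.0

phase = ((x + y) % 2) * np.pi                     # antiphase checkerboard blocks unwanted angles
annulus = (r > 0.78) & (r < 0.86)                 # assumed band around the plasmon-coupling angle
s_sector = annulus & (np.abs(azimuth - 45.0) < 45.0)   # sectors centred on the +/-45 deg (s-dominated) azimuths
p_sector = annulus & ~(np.abs(azimuth - 45.0) < 45.0)  # remaining (p-dominated) arcs of the annulus

wedge_gradient = 0.05                             # assumed phase ramp, rad per pixel
phase[s_sector] = (wedge_gradient * x[s_sector]) % (2 * np.pi)  # linear wedge deflects s-polarised light to the pinhole
phase[p_sector] = np.pi / 2                       # one of the four 90-degree phase-stepping offsets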
In order to obtain an independent measure of the thickness of the layers, these were measured from the top surface using a spectroscopic ellipsometer (alpha-SE; J. A. Woollam Co., Inc., Lincoln, Nebraska, USA). We then measured the same sample with different configurations of the SP microscope. The first set of SP measurements used the approach described in Ref. 9 where a normally incident reference beam was used to interfere with the SP waves. Four 90° phase steps were imposed on the reference beam so that a phase stepping algorithm could be used to extract the phase difference between the normally incident and the plasmonic contributions. This phase difference, φ, can be plotted against the defocus, z. The mean gradient of this phase variation can be related to the angle of plasmon excitation, thus: dφ/dz = (4πn/λ)(cos θr − cos θp) (2), where θr is the mean incident angle of the reference beam (θp1 of Equation (1)), which is in this case zero. Figure 4 shows the measured values of φ(z) and, from the gradients measured in the defocus region between −4000 and −2000 nm, we can obtain a value of θp for each sample region by estimating the mean gradient of the curve and fitting to Equation (2). Note that negative values of defocus correspond to moving the sample closer to the objective relative to the focal point. If we assume the same refractive index as used to fit the ellipsometric data (n = 1.8583), we can recover a film thickness from the SP measurements from the Fresnel equations 13 by calculating θp for different layer thicknesses. It should be noted, of course, that unlike the ellipsometric measurements, the SP measurements are obtained from below the sample surface, that is, through the gold film. The thickness values obtained are tabulated in the second and third columns of Table 1. The agreement in the obtained thickness values between the ellipsometry measurements and the phase stepping measurements is good. Some of the discrepancy between the values arises from the fact that the exact measurement position on the sample may differ. RESULTS AND DISCUSSION We then used the 'artificial' plasmon to perform similar measurements on the stepped sample. Figure 5 shows the phase variation on region R2 of the coated sample for different wedge angles or phase gradients imposed in the back focal plane. The wedge was formed by a staircase pattern imposed on the SLM; the wedge angle was controlled by changing the gradient of the staircase. We see that from a defocus of −1000 nm the gradient is much less than that obtained with the normally incident reference beam. We note, however, that there are two regions where the phase is stationary with defocus; we can also see that careful tuning of the wedge angle gives a flat region for a particular sample region. For regions R3 and R4, similar flat regions are obtained for larger wedge angles and for R0 and R1 for smaller wedge angles. The problem is that these regions of flat phase variation do not extend over large regions of defocus and methods to increase this range are discussed in the further developments section; nevertheless, this modest range of flat response was adequate to demonstrate the superior noise immunity of the present system. As we discussed in Ref. 9, the longer the range of defocus used to calculate the wave vector of the SP, the more accurate the measurement; on the other hand, reducing the maximum defocus improves the lateral resolution. 5
We used the artificial plasmon method to measure the local gradient over a narrow range of defocus corresponding to the flat region from −2400 to −2100 nm. By phase stepping the region with predominant p-polarization, we obtained curves for the variation of φ(z) shown in Figure 6 for the five regions of the sample; we see that the R2 curve is approximately horizontal, whereas R0 and R1 have positive gradients and R3 and R4 have negative gradients as expected. We can use the measured gradients with Equation (2) with θr equal to θp for the R2 region. These values are shown in the final column of Table 1, which show excellent agreement with results involving a normally incident reference beam. We now consider the issue of noise immunity, which we believe is the principal advantage of the new technique. The results above were all obtained with our optical table pumped up; to test the noise performance, the legs of the optical table were deflated and a large bass speaker was placed on the table to induce large amounts of microphonic noise. The vibration was measured with a commercial vibrometer sensor head (OFV 534) and controller (OFV 5000) (Polytec, Waldbronn, Germany) illuminating the sample holder; when the table was pumped up and the speaker was off, the root mean square (rms) vibration was less than 10 nm, while with the speaker on and the table deflated, the rms vibration was 270 nm with peak-to-peak excursions of approximately 760 nm. The response waveform was approximately sinusoidal at 20 Hz. To get an estimate of the noise in thickness measurements, we measured the noise in the φ(z) curves from the deviation of the measured phase shifts from their best linear fitted results. We then used this noise variance to generate a new set of data with the same underlying statistics; this was then used to get another estimate of φ(z) from which a new estimate of thickness was obtained. This process was repeated 10^6 times to get the noise statistics presented in Table 2. From Table 2, we see a dramatic improvement in noise performance in the 'artificial' plasmon system. Several points emerge from this table. The results for R2, where the wedge angle is best matched to the expected value of θp, give a noise standard deviation 54 times smaller compared to the 1000 nm scan and 35 times less compared to the 2000 nm scan. This improvement was achieved even though the scan range for the artificial plasmon experiment was only 300 nm. Extending the 'flat' artificial plasmon range will improve the signal to noise even more, as demonstrated in Ref. 5 for the system with the normal reference beam. Approaches to extend the flat region are discussed below. The improvement in noise performance in other regions, while still good, was not as great as for region R2; this is to be expected, as the wedge angle in these cases is less well matched to θp. The noise in the normal incidence phase stepping system is approximately constant on each sample region, which is a consequence of the fact that the microphonics are not cancelled. In a typical biomedical experiment, the values of θp vary by small amounts so the noise immunity provided by a single wedge angle is likely to apply to the whole measurement. If this were not the case, the use of the electronically programmable SLM means that it is easy to vary the wedge angle interactively as the measurement proceeds.
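The noise-propagation procedure described above can be sketched as follows; the phase curve, its slope and the residual noise level are simulated placeholders, and 10^4 repeats stand in for the 10^6 used in the paper.
import numpy as np

rng = np.random.default_rng(1)
z = np.linspace(-2400e-9, -2100e-9, 31)                   # defocus range of the flat region, m
phi = 1.0e5 * z + rng.normal(scale=0.002, size=z.size)    # placeholder phase curve phi(z), rad

slope, intercept = np.polyfit(z, phi, 1)
sigma = np.std(phi - (slope * z + intercept))             # residual noise about the best linear fit

slopes = []
for _ in range(10_000):
    synthetic = slope * z + intercept + rng.normal(scale=sigma, size=z.size)
    slopes.append(np.polyfit(z, synthetic, 1)[0])         # refit each synthetic data set
print("standard deviation of the recovered gradient: %.3g rad/m" % np.std(slopes))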
The system as presented thus has the potential for optimal performance in terms of sensitivity, spatial resolution and dynamic range. We believe that these three key characteristics have not hitherto been achieved in a single system.
Figure 6 caption: Variation of the phase output from the interferometer, φ(z), for the five regions of the sample, where the system was tuned so that the φ(z) slope of region R2 was relatively flat.
FURTHER DEVELOPMENTS
As discussed above, the flat region of the phase response extends over only around 300 nm. In this section, we will show that this flat region can be extended by using a 'curved' wedge that deviates from the perfect linear case, such as the one shown in Figure 7. This is represented by adding a simple cubic term to introduce some curvature, as shown in Equation (3), where a is the gradient of the wedge and c is the parameter that is varied to obtain the optimum region of flat phase response. The comparison between the linear wedge and the curved wedge is shown in Figure 8, where we can see that the curved wedge can extend the flat region to around 1 μm. These results were obtained using a 1.49 NA oil immersion objective (Nikon) and a 690 nm laser source.
CONCLUSIONS
This paper has presented a technique to overcome the microphonic phase error in a confocal SP microscope, where the reference beam is provided by direct reflection of s-polarized light at an angle similar to the plasmonic angle. In order to force the s-polarized light into the pinhole, we provide a wedge that interacts with the s-polarized light, achieved conveniently with a phase SLM. This method can be combined with phase stepping to obtain a highly sensitive measure of the SP wave vector. We have shown that the present technique gives at least an order of magnitude better noise immunity than the phase stepping method published in Ref. 9, despite using the same optical set-up. We have also demonstrated that further optimization in terms of increased range of noise immunity can be achieved with relatively subtle, but simply implemented, changes to the patterns imposed on the SLM. We believe that this is a major step towards making robust and sensitive plasmonic measurements over small local regions, thus ultimately reducing the minimum number of detectable molecules adhering to the surface.
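Equation (3), the 'curved' wedge used in the further developments section above, is not reproduced in this extract; from the description (a wedge gradient a plus a cubic correction c) the phase profile presumably has the form φ(x) = a·x + c·x³. The sketch below generates such a profile across the SLM and wraps it to the 0 to 2π range in which a phase-only SLM would display it; the pixel count, pitch and coefficient values are illustrative assumptions, not values from the paper.

```python
import numpy as np

N_PIXELS = 512           # assumed SLM width in pixels
PITCH_UM = 8.0           # assumed pixel pitch in micrometres

def wedge_phase(a, c, n_pixels=N_PIXELS, pitch=PITCH_UM):
    """Phase profile phi(x) = a*x + c*x**3 across the SLM (x centred on the optic axis)."""
    x = (np.arange(n_pixels) - n_pixels / 2) * pitch      # position in micrometres
    return a * x + c * x ** 3

# Linear wedge versus 'curved' wedge with a small cubic correction (illustrative values).
linear = wedge_phase(a=0.05, c=0.0)
curved = wedge_phase(a=0.05, c=2e-8)

# Wrap to [0, 2*pi) as a phase-only SLM would display the pattern.
linear_wrapped = np.mod(linear, 2 * np.pi)
curved_wrapped = np.mod(curved, 2 * np.pi)

print("max |cubic correction| across the aperture (rad):", np.abs(curved - linear).max())
```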
5,341
2014-07-01T00:00:00.000
[ "Physics" ]
Deep Convolutional Neural Network Assisted Reinforcement Learning Based Mobile Network Power Saving This paper addresses the power saving problem in mobile networks. Base station (BS) power and network traffic volume (NTV) models are first established. The BS power is modeled based on in-house equipment measurement by sampling different BS load configurations. The NTV model is built based on traffic data in the literature. Then, a threshold-based adaptive power saving method is discussed, serving as the benchmark. Next, a BS power control framework is created using Q-learning. The action-state function of the Q-learning is approximated via a deep convolutional neural network (DCNN). The DCNN-Q agent is designed to control the loads of cells in order to adapt to NTV variations and reduce power consumption. The DCNN-Q power saving framework is trained and simulated in a heterogeneous network including macrocells and microcells. It can be concluded that with the proposed DCNN-Q method, the power saving outperforms the threshold-based method. I. INTRODUCTION A. BACKGROUND In the era of data, information is flowing in an unprecedented way anytime everywhere. It is reported in [1], that the number of mobile broadband subscriptions will be approaching eight billion by 2025. The amount of mobile data traffic is anticipated to grow at an exponential pace, reaching 160 extrabyte (EB, 10 18 bytes) per month within the same time period. New emerging applications such as augmented reality (AR), virtual reality (VR), vehicle to everything (V2X), and internet of things (IoTs) are projected to have increasing contribution to the massive growth of data traffic. The fifth generation (5G) mobile network (MN) [2]- [4] has introduced groundbreaking technologies in order to satisfy this growing demand of data traffic. Millimeter-wave (mmWave), for instance, is a well-recognized solution as high bandwidths in mmWave are able to provide more available radio resources. In addition, the use of massive multiple-input multiple-output (MIMO), which equips base stations (BSs) and user equipments (UEs) with an increasing number of The associate editor coordinating the review of this manuscript and approving it for publication was Ivan Wang-Hei Ho . antennas, can reduce intercell interference and boost network throughput. Most importantly, reducing cell size and increasing cell density have been the main source of enhancing network throughput [5], [6]. There is no exception in 5G networks, as they are expected to significantly scale up cell densities. However, denser cells come at the cost of larger MN power consumption, which increases green house gas emissions and accelerates global warming. Operators such as Vodafone, have targeted to reduce green house gas emission by 50% by 2025 [7]. Reducing power consumption can not only reduce green house gas emission, but also reduce operating cost of MNs. To tackle the problem of MN power saving, practical models for BS power consumption and data traffic as well as smart resource management techniques are required. Authors in [8] measured BS power in real equipment and proposed a number of linear power models in terms of load for the remote unit (RU) only. In [9], power models were built for components in a BS, such as power amplifier and filter. It concluded that power consumption in downlink was dominant. Measurement of voice traffic was presented in [10]. More generally, the white paper [11] revealed traffic patterns of various applications in reality. 
Both measurement reports showed that network traffic volume (NTV) was normally higher during weekdays and lower during weekends. There are a number of classic cell on/off algorithms, including optimizing user association, optimizing BS coverage, traffic prediction, and heterogeneous deployment [12]. In [10], a concept known as network-impact was proposed, which can be calculated from the maximum of the sum of the original BS load and the additional load increments brought by neighboring BSs. The algorithm in [10] required heuristic parameters. In [13], the user association to BSs and dynamic BS operations were jointly optimized for the purpose of improving energy efficiency. The switching on and off of BSs relied on a greedy algorithm and heuristic parameters. Authors in [14] and [15] proposed algorithms to adjust cell coverage to reduce power consumption. Methods of traffic pattern and BS energy consumption pattern prediction were discussed in [16]. In [17], stochastic geometry was used to model distributions of macrocells and low-power cells. The minimum separation distance between a macrocell and a low-power cell was optimized to reduce interference and power consumption. Besides the discoveries in academia, industry has designed schemes to reduce power consumption as well. The 3GPP 5G new radio (NR) [18] has replaced the always-on cell-specific reference signal (CRS) of 4G long-term evolution (LTE) [19] with a novel reference signal framework, including demodulation reference signals (DMRSs) and channel state information reference signals (CSI-RSs). These are user-specific and flexibly configurable. As a result, power consumption is reduced when there is no traffic or measurement for certain UEs. Besides classic methods, machine learning (ML) based methods have attracted researchers to explore new approaches to solve the MN power saving problem [20], [21]. Assuming accessible location information, [22] proposed a reinforcement learning (RL) based method to predict movement of UEs and dynamically adjust the powers of the handover target cell and the original cell. The authors of [23] used RL to optimize durations of different sleep modes to reduce power consumption. These RL based methods did not consider realistic power and traffic models. Also, BS loads were not directly controlled. In this paper, a centralized deep RL based method is proposed to intelligently control BS loads according to realistic power and traffic models. In a multi-cell mobile network, it is straightforward to expect a distributed architecture where each cell is equipped with one RL agent. As a result, multiple agents perform RL individually. However, a distributed architecture suffers from the moving target problem [24], where the behavior of each agent can impact the behaviors of other agents. On the contrary, the centralized architecture used in this paper assumes a single agent controlling all cells in the mobile network. This can accelerate convergence. B. CONTRIBUTIONS The contributions of this paper are listed as follows. 1) A power model and an NTV model for base stations are proposed. The power model is established based on measurement data from real-world base stations. More importantly, detailed power consumption in a data unit (DU) and an RU is shown. The NTV model is obtained from measurement data in the literature.
These two models are able to provide realistic descriptions on the dynamics of network power consumption in terms of time. 2) A threshold-based power saving method is proposed. This method uses a cell load adaptation equation to update cell loads to adjust power consumption. 3) Most importantly, a deep learning approach, i.e., deep convolutional neural network based Q-learning (DCNN-Q), for power saving is proposed. The proposed method uses a centralized architecture and Q-learning to control cell loads, with the action-state function approximated by a DCNN. The DCNN not only takes a one-dimensional (1D) load vector as input, but also a two-channel two-dimensional (2D) image containing information of instantaneous NTV requirement and network throughput. The rest of this paper is organized as follows. Section II proposes a power model based on measurement data and a NTV model based on literature data. Problem description and system model are presented in Section III. The benchmark method, i.e., the threshold-based method is investigated in Section IV. Our proposed DCNN-Q method is discussed in detail in Section V. Simulation/numerical results and analysis are presented in Section VI. Conclusions are drawn in Section VII. II. POWER MODEL AND NETWORK TRAFFIC MODEL A. POWER MEASUREMENT IN REAL-WORLD EQUIPMENT The power measurement was conducted in our in-house lab on real LTE DU and RU equipment. Both power of DU and RU in terms of different settings of load were measured, by installing a power meter to both the DU and RU power cables. Load of the system is the ratio of the number of active physical resource blocks (PRBs) over the number of total available PRBs. This was configured using the orthogonal channel noise simulator (OCNS) functionality via command line interface (CLI) during the measurement. The flowchart of measurement is depicted in Fig. 1. A typical set of load settings, i.e., 0%, 50%, 100%, were configured. The total measurement period lasted for 10 hours and the readings of power meter were recorded every 15 minutes. As a result, there were 41 measured samples in total. B. POWER MODEL After obtaining the measured power data, a power model can be established. In the paper, linear models for DU power P DU and RU power P RU based on measured data are proposed, i.e., P DU (l) = 1.68l + 266.98 (1) P RU (l) = 153.50l + 93.95 (2) where l represents the load of the unit. Both proposed models, the total power model P Total (l), and measured data are shown in Fig. 2. It can be observed that the power of DU is not sensitive to the change of load. On the other hand, the change of load can result in changing the power of RU from 94 W to 247 W. Moreover, when load falls down to zero, switch-off DU and RU can be assumed. In this case, the total power is assumed to reduce to zero in the paper, although in practice there can be a small amount of energy consumption. Hence, the total power can be expressed as The power saving comes from two sources. First, each cell adapts its load according to current network traffic. Second, certain low load cells need to handover their traffic to other cells such that these low load cells can be completely switched off. C. NETWORK TRAFFIC MODEL Network traffic model in this paper was developed based on measured NTV data published in [11]. The measured NTV data were extracted by visual inspection. It can be observed in [11] that the shape of NTV in each single day is similar. However, the absolute NTV values in weekdays and weekends are different. 
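Before turning to the construction of the traffic model, the measured power model above can be implemented directly: the DU and RU linear fits of Equations (1) and (2), together with the stated assumption that a cell whose load falls to zero is switched off and consumes no power. This is a minimal sketch; only the coefficients given in the text are used.

```python
def du_power(load):
    """DU power in watts for load in [0, 1], Equation (1)."""
    return 1.68 * load + 266.98

def ru_power(load):
    """RU power in watts for load in [0, 1], Equation (2)."""
    return 153.50 * load + 93.95

def total_power(load):
    """Total cell power; a zero-load cell is assumed switched off (zero power)."""
    if load <= 0.0:
        return 0.0
    return du_power(load) + ru_power(load)

if __name__ == "__main__":
    for l in (0.0, 0.25, 0.5, 1.0):
        print(f"load {l:.2f}: {total_power(l):7.2f} W")   # RU spans ~94 W to ~247 W, as in the text
```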
Therefore, to establish the model, a twostep approach is used in this paper. First, a normalized NTV model for a single day is established, to characterize how NTV is varying in different hours of a day. Second, another model is established to characterize how NTV is varying in different days of a week. The NTV model V 1 (t) for a single day can be expressed as a 20th-order polynomial as where t ∈ [0, 24) is the hour of a day and a n is the coefficient of the nth-order term. Least-squared estimation was performed and the coefficients a n can be found in Table 1. Fig. 3 shows the comparison of normalized measured NTV in a day in [11]. It can be seen that the valley of NTV appears at approximately 4 am, which accounts for 15% of the peak of a day. The peak of NTV occurs at 9 pm. The proposed 20th-order polynomial model provides sufficient approximation to the real-world one-day measured NTV. Comparison between the single-day NTV model and measured NTV in [11]. Next, to capture the variation of days within a week, the NTV model V 2 (τ ) for a week can be expressed as a 5th-order polynomial as where τ = 1, 2, . . . , 7 represents Monday to Sunday and b m is the coefficient of the mth-order term. The coefficients b m are in Table 2. Then, with (4) and (5), the NTV of a week can be synthesized via where t is the hour in a week (t ∈ [0, 168)), mod(·) is the modulus operator, · is the flooring operator, and η is a VOLUME 8, 2020 scaling factor to scale the normalized NTV to a realistic NTV. The normalized NTV model is depicted in Fig. 4. It can be observed that during the weekdays, the NTVs are similar. However, during weekend, the NTVs drop from Saturday to Sunday. Furthermore, we define a parameter γ called safety margin, which quantifies the largest rate of change of NTV between two adjacent time instances t ν and t ν+1 , i.e., where ν is the sub-interval index when a certain length of observation interval is divided into equal-size sub-intervals. A network with γ taken into consideration will be able to satisfy the period when the traffic increases at the steepest rate. From (6), it can be computed numerically that γ equals 0.4. Assume that there are N user users per cell with index i = 1, 2, · · · , N user , the user traffic volume (UTV) for the ith user in terms of time is modeled as where Z i is user-specific independent and identically distributed (i.i.d.) log-normal random variable, i.e., lnZ i ∼ N (0, σ 2 ) ∀i, to describe user-specific traffic variations. Parameter σ is the standard deviation of UTV among different users. III. PROBLEM DESCRIPTION AND SYSTEM MODEL A. PROBLEM DESCRIPTION From Section II, the mobile network power saving problem is to adjust loads of cells according to the current NTV requirement. Furthermore, to save the largest amount of power, a handover mechanism needs to be considered such that certain cells can migrate its attached users to other cells and reduce its load to zero and switch off. However, since the number of cells can be massive in the area of interest (AOI), the solution space of this combinatorial optimization problem will be too large for exhaustive search, even for a single time instance. Moreover, the NTV is evolving in terms of time. The solution of the problem should be sufficiently flexible to handle the variation of NTV. B. NETWORK DEPLOYMENT In this paper, we consider an approximately 1km×1km AOI which is covered by four frequency bands. 
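Before the four frequency bands are described in detail, the weekly NTV synthesis of Equation (6) above can be sketched in code. The polynomial coefficients of Tables 1 and 2 are not reproduced in this extract, so the sketch substitutes simple placeholder shapes that only mimic the features described in the text (a valley of roughly 15% of the peak near 4 am, a peak around 9 pm, a weekend dip) and then shows how the weekly curve and the safety margin γ of Equation (7) would be computed; the γ it prints therefore differs from the paper's value of 0.4.

```python
import numpy as np

def v1_day(t):
    """Placeholder single-day profile on t in [0, 24): valley near 4 am, peak in the evening.
    The paper uses a 20th-order polynomial (Table 1); this is only a stand-in."""
    return 0.15 + 0.85 * (0.5 - 0.5 * np.cos(2 * np.pi * (t - 4.0) / 24.0)) ** 2

def v2_week(day):
    """Placeholder day-of-week scaling (Mon=1 ... Sun=7); the paper uses a 5th-order polynomial."""
    return np.where(day <= 5, 1.0, np.where(day == 6, 0.85, 0.75))

def ntv_week(t, eta=1.0):
    """Equation (6): V(t) = eta * V1(mod(t, 24)) * V2(floor(t / 24) + 1), t in hours of a week."""
    return eta * v1_day(np.mod(t, 24.0)) * v2_week(np.floor(t / 24.0) + 1)

# Safety margin gamma (Equation (7)): largest relative increase between adjacent half-hour steps.
t = np.arange(0.0, 168.0, 0.5)
v = ntv_week(t)
gamma = np.max((v[1:] - v[:-1]) / v[:-1])
print(f"placeholder-model safety margin gamma = {gamma:.2f} (paper reports 0.4)")
```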
Among these four frequency bands, three of them are for urban micro (UMi) and one of them is for urban macro (UMa). Settings of these four frequency bands are listed in Table 3. The UMi cells have carrier frequencies 2.1 GHz, 2.7 GHz, and 3.6 GHz, and they are two-ring hexagonal [25] and with 200m intersite distance (ISD). For a two-ring hexagonal layout [25], each band has 19 three-sector sites, resulting in 57 cells in total. The UMa cell has carrier frequency 1.8 GHz and it is one-ring hexagonal with 500m ISD. For a one-ring hexagonal layout [25], each band has 7 three-sector sites, resulting in 21 cells in total. Users are uniformly and randomly dropped into the AOI for each band and the average total NTV for each band of each cell, i.e., the mean of the sum of all user traffic within a cell, equals (6) for a specific time t. It should be noticed that each band will fully cover the AOI. When a user is dropped inside the AOI, it will choose the cell in a certain band which provides the largest received power. Also, in this paper, for the sake of reducing power, handover between different bands is allowed. Namely, when cells in two different bands with similar coverage area, one cell in Band 1 can migrate all of its traffic to the other cell in Band 2, provided that Band 2 will not overload. Then, the cell in Band 1 will have zero load and it can completely switch off. C. SINR AND NETWORK THROUGHPUT CALCULATION Since different bands will not interfere each other, band index is dropped in the following expressions. Consider a certain band, let P k be the transmit power of the kth cell and let β ij P j χ ij (9) where χ ij is a coefficient representing the interference ratio between the ith base station and the jth base station. As the jth base station is only using N (k) j PRBs, the interference power emitted by it is a fraction of its total power P j . At the same time, the ith base station has only N (k) i active PRBs, the interference power it receives is a fraction of the interference power emitted by the jth base station. As a result, χ ij is a function of N PRB , N The network throughput T (k) provided by the kth cell can be computed as where µ represents a factor accounting the overhead and number of layers during the transmission process. The area throughput in the AOI T area is then the sum of the network throughput of all cells, i.e., From (9) and (11), it can be observed that the network throughput may not always be monotonically increasing with loads, because as loads increase, mutual interference among cells increases as well. Also, when a set of new loads are configured for all the cells, the SINR and throughput map of the AOI need to be updated. IV. BENCHMARK METHOD: THRESHOLD-BASED POWER SAVING Controlling problems like MN power saving are usually approached by threshold-based methods. Namely, a feedback loop is established and the feedback is mapped to a metric, such that actions will be taken accordingly based on whether the metric is higher or lower than a threshold. For power saving, these actions include scaling up or down the loads of cells, and handing over traffic to other bands and switching off cells whose loads are zero, and switching on cells. Let V (k) X be the NTV, the network throughput, and the cell load of the kth cell in Band X , respectively. The adaptation of cell load l (k) X at time t n+1 is expressed as where γ from (7) is a safety margin such that the cell load is enough for the steepest NTV increase. 
It can be seen from (13) that the cell load at time t n+1 is the sum of two terms. The first term is a scaled version of load at the previous time t n . The gap between two time instances is customizable and it is assumed half an hour in this paper. The second term is an additional load if Band Y is switched off and Band Y migrates its traffic to Band X . When the load of the kth cell l (k) X (t n+1 ) in Band X is less than a threshold ξ 1 , i.e., l (k) X (t n+1 ) < ξ 1 the cell will handover its traffic to another band then the cell can be switched off and l (k) X (t n+1 ) will be set to zero. For simplicity, we assume that the handover is done by handing over from Band X to Band X + 1. This is a feasible simplification if the traffic distributions in Band X and Band X + 1 are statistically the same. On the other hand, letl active X denote the average load of active cells in Band X . Ifl active X > ξ 2 , meaning that current active cells have heavy load, then inactive cells in Band X −1 should be switched on to help handle traffic. The settings VOLUME 8, 2020 of γ , ξ 1 , and ξ 2 in this paper are heuristically determined and listed in Table 4. The procedure of the threshold-based power saving method is shown in Fig. 5. The procedure starts with scaling the cell load only based on traffic variation. Then, each band starts to adjust the on and off situations. This is achieved by calculating the average load of active cells. If this load is larger than ξ 2 , it requires more cells to offload upcoming traffic. Therefore, inactive cells in Band X −1 are switched on. If this load is less than ξ 2 , the load of each cell in Band X will be compared to ξ 1 . If a cell load is less than ξ 1 , it means this cell has low load and can be switched off as soon as its traffic is handed over to Band X + 1. Otherwise, the updated load is calculated according to (13). V. DCNN-Q FOR MN POWER SAVING A. RL REVIEW RL is a trial-and-error machine learning technique, which samples the environment and takes actions to the environment. The environment is everything that cannot be arbitrarily modified by the RL agent and will provide a feedback containing the reward corresponding to the action to the RL agent. When the RL agent obtains a sample from the environment, this sample is known as a state. The RL agent attempts to make a sequence of decision on actions in order to achieve a certain goal. The difficulty of RL is that when an action is taken in each step, it will impact on actions in later stages. A Markov decision process (MDP) provides widely used model for RL [26]. A MDP can be modeled by a 4-tuple E = E S, A, P, R . State space S consists of all possible states of the environment. A state s ∈ S is the perception of the environment of the RL agent. Action space A contains potential actions to be taken by the RL agent. Assume that the state is s, when action a ∈ A is taken, the environment will transit to a new state s . This transition is modeled by a hidden transfer function P : S × A × S → R, which represents the transition probability. Moreover, in each state transition, a reward is produced and it is characterized A policy π associates a state s to an action, which can be categorized as deterministic or randomized. A deterministic policy maps a state to an action π : S → A. On the contrary, a randomized policy maps a state to a probability distribution π : S × A → R, representing the probability of taking action a ∈ A in state s. 
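Returning briefly to the threshold-based benchmark described above and in Fig. 5, one half-hour update can be sketched as follows. The exact form of Equation (13) is not reproduced in this extract, so the load-scaling step below (new load equal to the previous load times the traffic ratio with the safety margin γ on top) is an assumed reading of it; the threshold values ξ1 and ξ2 come from Table 4, which is also not reproduced, so the numbers here are illustrative, and the sketch assumes the same number of cells per band so that handover can pair cells by index.

```python
GAMMA = 0.4   # safety margin computed from Equation (7)
XI_1 = 0.20   # switch-off threshold; Table 4 not reproduced here, value illustrative
XI_2 = 0.80   # switch-on threshold; illustrative

def threshold_step(loads, traffic_ratio):
    """One half-hour update of the threshold-based benchmark.

    loads         -- list of bands, each a list of per-cell loads in [0, 1] (0 = switched off)
    traffic_ratio -- NTV(t_{n+1}) / NTV(t_n), applied uniformly to every active cell
    """
    new = [list(band) for band in loads]
    # 1. Scale every active cell's load to the new traffic level, with the safety margin.
    for band in new:
        for k, l in enumerate(band):
            if l > 0.0:
                band[k] = min(1.0, l * traffic_ratio * (1.0 + GAMMA))
    # 2. Per band: switch cells of the band below back on, or hand over and switch off.
    for x, band in enumerate(new):
        active = [l for l in band if l > 0.0]
        if active and sum(active) / len(active) > XI_2 and x > 0:
            # Heavy load: reactivate switched-off cells in Band X-1 (restart load illustrative).
            new[x - 1] = [l if l > 0.0 else XI_1 for l in new[x - 1]]
        else:
            for k, l in enumerate(band):
                if 0.0 < l < XI_1 and x + 1 < len(new):
                    new[x + 1][k] = min(1.0, new[x + 1][k] + l)  # hand traffic to Band X+1
                    band[k] = 0.0                                # switch this cell off
    return new

loads = [[0.5, 0.4], [0.3, 0.1], [0.6, 0.7], [0.2, 0.9]]
print(threshold_step(loads, traffic_ratio=0.8))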
In the learning process, the state-action value function (Q function) Q π (s, a) stores the estimated values of accumulated discounted rewards using policy π . When the model of the environment is accessible (modelbased learning), i.e., the hidden transfer function P is known, the expected values of the Q function can be computed iteratively with dynamic programming. According to [26], the optimal policy satisfies the optimal Bellman equation and can be found by selecting the action maximizing the state-action value iteratively. The state-action values increase monotonically each time the policy is updated with the best action. Therefore, when the policy converges, it converges to the optimal policy. However, in practice, it is usually difficult to obtain the model of the environment. Namely, the hidden transfer function P is unknown. In this case, model-free learning can be applied. Model-free learning assumes no knowledge of the environment and relies on approximating the Q function by sampling the environment, states, and rewards. A widely used model-free learning method is the Monte Carlo (MC) method [26], where the value function and policies are updated only when an episode of samples are finished. Another model-free learning method is the temporaldifference (TD) learning [26] where the value function and policies are updated in a step-by-step manner. A TD learning method known as Q-learning is used in this paper. The main characteristic of Q-learning is that the approximation of the Q function is independent of the policy being followed, which largely reduces complexity [26]. B. DCNN-Q ARCHITECTURE DESIGN The overview diagram of DCNN-Q for mobile network power saving is depicted in Fig. 6. This RL problem can be divided into the design of state space, action space, policy, reward function, and Q function, which will be detailed later paragraphs. 1) STATE SPACE In a power saving problem, a state should be able to capture what the current requirement of NTV is and how well the system is responding to such requirement. Therefore, a state s ∈ S is characterized by a traffic map, a throughput map, and a vector of current loads of all cells in the AOI, i.e., s = {traffic map, throughput map, loads of all cells}. It should be noticed that both maps are 2D while the vector of loads of all cells is 1D. 2) ACTION SPACE The cardinality of the action space should be properly design. If the cardinality of the action space is too small, then the granularity of actions becomes coarse. Conversely, if the cardinality of the action space is too large, convergence of training will be too slow. To reach this balance, three constraints are taken in this paper. First, instead of continuous load, only discretized loads are considered. For example, a load can only be chosen in the set of {0%, 25%, 50%, 75%, 100%}. However, even only discretized loads are considered, with 192 cells as shown in Table 3, there are still 5 192 combinations. Therefore, the second constraint is that once a load is chosen, all the cells in the same band will be set to the same load. Then, for four bands, the number of combinations reduces to 5 4 = 625. Third, certain combinations will be excluded in the action space as the pseudo codes shown in Fig. 7. In Fig. 7, the subscript k in w k represents the band identification (ID). From the action space generation algorithm, it can be observed that the UMa band is always on to guarantee there will be coverage in the AOI while UMi cells can be switched off. 
This constrain is able to avoid coverage holes in the AOI when certain UMi cells are switched off. After the algorithm in Fig. 7, the cardinality of the action space is reduced to 140. 3) POLICY A state is mapped to an action via policy π (s). A commonly chosen policy is the -greedy algorithm ( ∈ [0, 1]) [26]. It consists of two phases. In the exploitation phase, which has probability 1 − , the RL agent selects the action with the highest Q value. In the exploration phase, which has probability , the RL agent will choose an action in a random manner in the action space with equal probability. The -greedy algorithm is presented as where U is a uniform random variable in [0, 1]. 4) REWARD FUNCTION There are two principles to design the reward function. First, it should be penalized if the current network throughput is not able to satisfy the NTV requirement. With such design, the RL agent will learn from experience to avoid corresponding actions. Second, as another goal of the problem is to save power, the reward should be monotonically increasing if the network consumes less power, provided that the required NTV is satisfied. As a result, the reward function is modeled as where β is a positive coefficient describing how fast the reward is decaying with the increase in power and P (k),Total X is the total power of the kth cell in Band X . The choice of the exponential function is because it is continuous in its domain and able to handle interpolated power values. In this paper, VOLUME 8, 2020 β is set to be 0.004, which corresponds to a reward of 0.2 when the load is 25%. 5) Q FUNCTION The accumulated reward of a state-action pair is recorded by the Q function Q(s, a) and it is incrementally updated as the training progresses, i.e., Q(s, a) = Q(s, a) + α(r + δQ(s , a ) − Q(s, a)) (16) where s is the new state, a is the action to the new state, α is the learning rate, and δ is the discounting factor. To approximate the Q function, both table-based and NN-based methods were used in the literature. In this paper, the NN-based method is adopted. A NN-based Q function has two benefits. First, it does not need massive storage compared to a table-based method when the action space and state space are large. Second, it can handle complex inputs such as a mixture of 2D and 1D data and unseen states. A NN structure is proposed for approximation of the Q function. The NN accepts a state as input and outputs the Q value of each potential action. The NN is constructed by two parts which is shown in Fig. 8. The first part is a DCNN and the second part is a 3-layer fully-connected network. The DCNN maps a two-channel 2D image to a vector. One channel of the 2D image is the throughput map of the AOI and the other channel is the traffic map of the AOI. The two-channel image is then passed to five convolution blocks in serial and each convolution block consists of a convolution layer, a rectifier (ReLU) [27], and a pooling layer. The convolution layer is responsible for exacting highlevel features of the 2D input. The ReLU is a typical nonlinear activation in NNs. The pooling layer is responsible for reducing complexity and extracting dominate features. Then, after five convolution blocks, the output is passed into a drop-out layer to further reduce complexity and avoid overfitting. Since a state consists not only the 2D image but also a 1D vector storing the current loads of all the cells, the output of the drop-out layer is normalized and concatenated with the 1D load vector. 
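The network description continues below; before that, the two ingredients already specified, the ε-greedy policy of Equation (14) and the exponential reward of Equation (15), can be written down directly. The power term reuses the DU/RU fits of Equations (1) and (2); the numerical value of the penalty for unmet traffic is not given in the extract, so it is illustrative, while β = 0.004 is the stated value, and the demonstration call reproduces the quoted reward of about 0.2 at 25% load for a single cell.

```python
import numpy as np

BETA = 0.004
rng = np.random.default_rng(0)

def total_power(load):
    """Cell power from Equations (1) and (2); zero-load cells are assumed switched off."""
    return 0.0 if load <= 0.0 else (1.68 * load + 266.98) + (153.50 * load + 93.95)

def reward(loads, throughput, ntv_required, penalty=-1.0):
    """Equation (15): penalise unmet traffic, otherwise reward low total power.
    The penalty value is illustrative; it is not given in this extract."""
    if throughput < ntv_required:
        return penalty
    return float(np.exp(-BETA * sum(total_power(l) for l in loads)))

def epsilon_greedy(q_values, epsilon):
    """Equation (14): explore with probability epsilon, otherwise exploit the best action."""
    if rng.uniform() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

print(reward([0.25], throughput=1.0, ntv_required=0.9))   # ~0.20, as stated in the text
print(epsilon_greedy(np.array([0.1, 0.5, 0.3]), epsilon=0.1))
```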
The concatenated 1D vector is input to the second part of the NN, i.e., the 3-layer fully-connected network, the output of which is the the Q function. The objective function of the DCNN is the root mean squared error (RMSE) between the predicted Q value vector and the updated Q value vector. To achieve the best performance of the DCNN, both the throughput map and the traffic map will be normalized before being input to the DCNN. Let 2D matrices M 1 and M 2 denote the throughput map and traffic map, respectively. The normalization of M 1 includes cutting-off and scaling, i.e., The normalization of M 2 is achieved bỹ The DCNN-Q learning is described in pseudo codes in Fig. 9. To begin with, the policy is initialized with equal probability. In each iteration of the training, the RL agent chooses an action according to -greedy policy. As soon as the action is determined, it will be mapped to cell loads in all the bands. Then, all the cells will adjust their loads according to the action. After these processes, the situation of mutual inference is changed and hence SINR in the AOI needs to be re-calculated. Then, the throughput map needs to be updated and the new state is formed. Next, the reward is computed according to (15) and the next action is obtained from the policy function. The RL agent updates the Q function and the policy. These steps are then repeated until the maximum number of iteration is reached. VI. RESULTS AND ANALYSIS The proposed DCNN-Q power saving is trained and tested according to the parameters listed in Table 5. As comparisons, the always-full-load method, the threshold-based method, and the DCNN-Q method are discussed. The always-full-load method means that all cells at all bands are operating with 100% loads. Normalized mean reward with respect to the number of weeks trained is shown in Fig. 10. Normalization is down in terms of the mean reward after 10 weeks of training. It can be seen that there are fluctuations between 20 to 40 weeks, as the size of training data is still small. After 40 weeks of training, performance starts to improve. After 200 weeks of training, the result is 13% better than 10 weeks of training. As the length of training needs to reach balance between performance and training cost, we use the 200-week trained RL agent to test the proposed DCNN-Q performance. NTV requirement and throughput provided by these three methods in terms of time within a week are illustrated in Fig. 11. The step size is half an hour. The always-full-load method provides a constant network throughput, which is 30% higher than the highest NTV peak during a week. This is the foundation for an intelligent power saving method. The threshold-based method is able to adjust its network throughput according to NTV. It can be seen that the threshold-based method is aggressive when the required NTV is low and is conservative when the required NTV is high. On the contrary, DCNN-Q does not behave like the threshold-based method. DCNN-Q is more conservative when NTV is low by reserving a larger safety margin, and more aggressive when NTV is high. It can also be observed that the change of configuration in DCNN-Q is sharper. This is because the action space of the DCNN-Q is discrete. Network power consumption in terms of time within a week is illustrated in Fig. 12. The power consumption of the always-full-load method is constant. The thresholdbased method has the lowest power consumption when NTV is low and had higher power consumption when NTV is high. 
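To make the architecture just described concrete before the results discussion continues, here is a minimal PyTorch sketch: five convolution blocks (convolution, ReLU, pooling) applied to the two-channel traffic/throughput image, a dropout layer, concatenation with the 1D load vector, and a three-layer fully connected head producing one Q value per action, together with the scalar target update of Equation (16). Filter counts, kernel sizes, the input image resolution and the hidden widths are not given in this extract, so the values below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DCNNQ(nn.Module):
    """Two-channel 2D maps + 1D load vector -> Q value per action (illustrative sizes)."""
    def __init__(self, n_cells=192, n_actions=140, img_size=64):
        super().__init__()
        blocks, ch = [], 2
        for out_ch in (16, 32, 32, 64, 64):           # five conv blocks: conv + ReLU + pool
            blocks += [nn.Conv2d(ch, out_ch, kernel_size=3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d(2)]
            ch = out_ch
        self.cnn = nn.Sequential(*blocks, nn.Flatten(), nn.Dropout(0.5))
        feat = ch * (img_size // 2 ** 5) ** 2          # spatial size after five poolings
        self.head = nn.Sequential(                     # three-layer fully connected head
            nn.Linear(feat + n_cells, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_actions))

    def forward(self, maps, loads):
        x = self.cnn(maps)                             # (batch, feat)
        x = torch.cat([x, loads], dim=1)               # append the normalized 1D load vector
        return self.head(x)

def q_target(reward, q_next, alpha=0.1, delta=0.9, q_old=0.0):
    """Equation (16): incrementally updated Q value used as the regression target."""
    return q_old + alpha * (reward + delta * q_next - q_old)

net = DCNNQ()
maps = torch.rand(4, 2, 64, 64)        # batch of two-channel traffic/throughput images
loads = torch.rand(4, 192)             # current loads of all 192 cells
print(net(maps, loads).shape)          # torch.Size([4, 140])
```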
Its range of power consumption is relatively wide. Conversely, the provided network throughput by the DCNN-Q has limited range of power consumption values. Fig. 13 depicts the normalized aggregate power consumption of the three methods. Normalization is done relative to the threshold-based method. Always-full-load consumes the most power as expected, which is 41% higher than the threshold-based method. The proposed DCNN-Q method is able to save 19% power compared to the threshold-based method and 42% compared to the always-full-load method. This demonstrates that the proposed method is able to achieve significant power saving. VII. CONCLUSION To investigate power saving for mobile networks, it is important to establish practical power and network traffic models. Based on our in-house measurement, linear models are sufficiently accurate to describe base station power consumption in terms of load. The power of RU is more sensitive to load change, whereas the power of DU is steady. Power reduction is achieved via the adaptation of loads of the network and dynamic switching on and off according to required NTV. A polynomial model for synthesizing NTV is proposed, describing traffic fluctuations over one week. The thresholdbased method, which relies on heuristically set thresholds, serves as the benchmark and is able to reduce power consumption by 30% compared to always-full load. As a significant enhancement, the centralized DCNN-Q method is proposed. The DCNN-Q uses a DCNN, which accepts a joint input of 2D images and a 1D vector, to approximate the Q function in the Q-learning framework. The proposed DCNN-Q method is capable of saving 19% power compared to the threshold-based method. This demonstrates that DCNN-Q is a promising solution to confine mobile network power when both the data and the size of a network are soaring. For future work, instead of the centralized method proposed in this paper, a distributed learning framework would be another direction of research. Also, optimization on energy efficiency, i.e., bits/joule, is essential to consider for green network research.
8,127
2020-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Prediction of CTL epitope, in silico modeling and functional analysis of cytolethal distending toxin (CDT) protein of Campylobacter jejuni Background Campylobacter jejuni is a potent bacterial pathogen culpable for diarrheal disease called campylobacteriosis. It is realized as a major health issue attributable to unavailability of appropriate vaccines and clinical treatment options. As other pathogens, C. jejuni entails host cellular components of an infected individual to disseminate this disease. These host–pathogen interfaces during C. jejuni infection are complex, vibrant and involved in the nicking of host cell environment, enzymes and pathways. Existing therapies are trusted only on a much smaller number of drugs, most of them are insufficient because of their severe host toxicity or drug-resistance phenomena. To find out remedial alternatives, the identification of new biotargets is highly anticipated. Understanding the molecules involved in pathogenesis has the potential to yield new and exciting strategies for therapeutic intervention. In this direction, advances in bioinformatics have opened up new possibilities for the rapid measurement of global changes during infection and this could be exploited to understand the molecular interactions involved in campylobacteriosis. Methods In this study, homology modeling, epitope prediction and identification of ligand binding sites has been explored. Further attempt to generate strapping 3D model of cytolethal distending toxin protein from C. jejuni have been described for the first time. Results CDT protein isolated from C. jejuni was analyzed using various bioinformatics and immuno-informatics tools including sequence and structure tools. A total of fifty five antigenic determinants were predicted and prediction results of CTL epitopes revealed that five MHC ligand are found in CDT. The three potential pocket binding site are found in the sequence that can be useful for drug designing. Conclusions This model, we hope, will be of help in designing and predicting novel CDT inhibitors and vaccine candidates. Background Campylobacter jejuni is a prominent bacterial cause of enteric campylobacteriosis in the entire world [1]. Campylobacter is extensively distributed in poultry; nevertheless, cattle, pigs, sheep, and pet animals may also be a source of these microorganisms. This infection may be due to either eating of semi cooked meat or crosscontamination of ready-to-eat food at the time of preparation or storage. C. jejuni-linked enterocolitis is characteristically coupled with a local acute inflammatory response that involves intestinal tissue damage [2]. The genome of C. jejuni has been sequenced, yet only a few prospective virulence factors produced by C. jejuni are well considered [3]. Cytolethal distending toxins (CDT) are a class of heterotrimeric toxins produced by C. jejuni and also by closely related spp., such as C. fetus, C. coli [4,5], Shigella [6] and Escherichia coli [7]. This toxin is rearward transported across the golgi complex and the endoplasmic reticulum, and afterward translocated into the nuclear compartment, where it applies the toxic activity [8]. The CDT comprises of three protein subunits namely CdtA, CdtB, and CdtC causes progressive cellular distention with ultimate cell death and have been proposed as virulence factors in the pathogenesis of C. jejuni [9]. These results suggest that the CDTs are involved invasion, survival and internalization into the host cell [10][11][12][13]. Although CDT from C. 
jejuni has been studied and characterized in laboratory [14,15], but research on immune responses and pathogenesis of C. jejuni remains unexploited. The progress in computational methods competent of predicting immune epitopes for B lymphocytes and T lymphocytes will facilitate the viewing of pathogens for immunogenic antigens. The epitope based vaccines encourage an immune response by presenting immunogenic peptides unite to major histocompatibility complex to TCR [16]. Considering the unavailability of 3D structure of CDT, it is challenging to select proper target that would lead to predict epitope and ligand binding sites in protein. Hence, this study aims to investigate the CDT of C. jejuni with special focus on the structural and functional aspects through bioinformatics approach. This study has important implications on the selection of CTL epitope, a critical step in the development of vaccines. Sequence acquisition and analysis We have received the sequence of CDT of C. jejuni from the NCBI database by inserting query as "CDT C. jejuni". The sequence was saved in FASTA format and used for further analysis. The primary structure analysis was done by using expasy ProtParam (www.expasy.org). The secondary structure of the protein was computed using different servers like Jpred3, GOR-IV and SOPMA [17] to check the presence of alpha helix and beta plated sheets in the structure. To determine the possible function of C. jejuni, the sequence was subjected to comparative protein structure modeling in the different servers. 3D-Model building and validation Cytolethal distending toxin sequence of C. jejuni (CDTCJ) [EDZ32284.1] was used to develop 3D structure through homology modeling because crystal or NMR structure of the CTD protein was not available in the Protein Data Bank (PDB). The 3D structure of the CDT protein was done using a restrained-based approach in Modeller. The 3D model was generated using the ModWeb server that generates 3D models along with their confidence score (C-Score). The template selection for the homology modeling of the CDT protein was performed by submitting amino acid sequence of the target protein to ModWeb server [18]. The crystal structure of CDT from Haemophillus ducreyi (PDB ID:1SR4) was used as a template. After generating the 3D model, structure analysis and stereochemical analysis were performed using different evaluation and validation tools. The final model was validated by using SAVES online tool (http:// nihserver.mbi.ucla.edu/SAVES/). The Ramachandran plot was obtained using PROCHECK [19] and RAM-PAGE [20] which helped in evaluating backbone conformation. Ramachandran plot was also used to check non-GLY residues at the disallowed regions. The verify 3D and PROSA web tool [21] was used to determine Z-scores. The ERRAT was used to predict overall quality for model and quality of the model was assured using Z-scores. Epitope prediction of protein antigens SEPPA (Spatial Epitope Prediction of Protein Antigens) server at the Life Science and Technology School, Tongji University, Shanghai China, (http://lifecenter. sgst.cn/seppa/) was used to predict conformational Bcell epitope. The 3D protein structure predicted by Modeller was used as an input, each residue in the query protein will be given a score according to its neighborhood residues information. Higher score corresponds to higher probability of the residue to be involved in an epitope [22]. The default values of THRESHOLD was set at 1.80, this help to specify the epitope residues [23]. 
Transmembrane topology of the CDTCJ protein was checked using TMHMM online tool [24] and antigenicity of protein was checked using SVMTriP online antigen prediction server [25]. The several algorithms are available that can predict the location and binding specificity of CTL epitopes in the protein sequences. In this study, the cytotoxic T-lymphocyte epitope prediction was done using NetCTL-1.2 server [26]. Sub cellular localization prediction The sub cellular localization of CDT was predicted using CELLO, an approach based on multi-class SVM classification system [27]. CELLO uses four types of sequence coding schemes: the amino acid composition, the dipeptide composition, the partitioned amino acid composition and the sequence composition based on the physico-chemical properties of amino acids. TargetP1.1 server was also used to predict cleavage site prediction of CDT [28]. Protein interaction network mapping Protein-protein interactions were achieved from the STRING database [29] comprising known and predicted physical and functional protein-protein interactions. STRING in protein mode was used, and only interactions with high confidence levels (>0.7) were kept. STRING quantitatively integrates interaction data from these sources for many organisms, and transfers information among these organisms where applicable. Network visualization was done with the Cytoscape software [30]. Structure comparison The structure comparison was executed by using DaliLite server [32]. Results and discussion The current study was originated to perform structure based sequence analysis of the CDT protein isolated from C. jejuni. The protein sequence was obtained from the NCBI protein database using accession number gi| 205345645|gb|EDZ32284.1| cytolethal distending toxin [Campylobacter jejuni]. Primary structure analysis revealed that the CDT protein (268 aa) had a molecular weight of 29.94 kD and theoretical isoelectric point (PI) 6.81. An isoelectric point indicates a negatively charged protein. The instability index (II) was 18.60, thereby categorizes the protein as a stable. The aliphatic index appeared as 84.10 and the N-terminus of the sequence showed the presence of M (Met). The negative grand average of hydropathicity (GRAVY) of -0.061 denoted that the protein was hydrophillic. The amino acids, Asn (N), Phe (F), Ala (A), and Leu (L), were found in high praportion in the protein. The secondary structure disclosed the presence of 8.21% α-helices, 4.85% β-turns, 25.37% extended strand and 61.57% coils (Figure 1). Transmembrane topology of the CDTCJ protein was checked using TMHMM online tool. The TMHMM server showed that residues 23-268 presented outside region, residues 5-22 were within the transmembrane and residues 1-4 were inside the region of the protein. Hydropathy analysis of CDTCJ protein of C. jejuni by the TOPCONS [33], Signal P-4.0 [34] and TMHMM programs suggested the presence of only one TM helix. We therefore localized the N terminus of CDTCJ in the cytoplasm. A consensus predicted topology is presented in Figure 2. The sub cellular localization of CDT was predicted using CELLO, an approach based on a two-level support vector machine (SVM) system. This server predicts sub cellular localization of protein for Gram negative bacteria by supporting vector machines based on n-peptide compositions. The CELLO output gave significant reliability for outer membrane (0.198), periplasmic (1.76) extracellular (0.803) and cytoplasmic (2.493), it indicates that the protein is cytoplasmic. 
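The primary-structure figures quoted above come from the ExPASy ProtParam server; essentially the same quantities can be computed locally with Biopython's ProtParam module, as sketched below. The sequence shown is only a short placeholder, since the full 268-residue CDT sequence (accession EDZ32284.1) is not reproduced in this extract, so the printed values will not match the paper's.

```python
# Requires Biopython: pip install biopython
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder fragment; the analysis in the paper uses the full 268-aa CDT sequence
# retrieved from NCBI (accession EDZ32284.1).
sequence = "MKKIILSLFLSACLLNACSNNDFNQFANLFNNALF"

pa = ProteinAnalysis(sequence)
print("length              :", len(sequence))
print("molecular weight    : %.2f Da" % pa.molecular_weight())
print("isoelectric point   : %.2f" % pa.isoelectric_point())
print("instability index   : %.2f" % pa.instability_index())
print("GRAVY               : %.3f" % pa.gravy())
helix, turn, sheet = pa.secondary_structure_fraction()
print("helix/turn/sheet    : %.2f / %.2f / %.2f" % (helix, turn, sheet))
```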
Model function and validation To determine the possible function of CDT, the sequence was subjected to comparative protein structure modeling using the target protein sequence as query for different servers described in Methods. The modeling of CDTCJ was performed using a restrained-based approach implemented in MODWEB [35] and significant hits were obtained. A set of three models for CDT protein was constructed. The 3D structure of a CDTCJ protein was developed from the X-ray structure of Haemophilus ducreyi (PDB ID: 1SR4 Chain A, at 2.0 Å Figure 1 Secondary structure of CDT of C. jejuni. resolution) as a template for homology modeling. The alignment coverage region for target residue (113-258) showed the 37% sequence identity with template 1SR4 residue 75-219. The resulting 3D models of CDTCJ were sorted according to the scores calculated from discrete optimized protein energy (DOPE) scoring function. The final model that shared the lowest root mean square deviation (RMSD), relative to the trace (Ca atoms) of the crystal structure was selected for further studies. The validation of the model was performed by accessing the quality of backbone conformation by PROCHECK for reliability. The perceived Ramchandran plot (Psi-Phi) pairs had 86.5% of residues in most favored regions, 11.1% core residues in additional allowed regions, 1.6% residues in generously allowed regions and 0.8% residues in disallowed regions (Figure 3). These values indicated a good quality model. Whereas the crystal structure of Haemophilus ducreyi PDB ID 1SR4 shows 89% residue in most favor region [36]. To characterize the model, structural motif and mechanistically important loops were assigned to build the final 3D model of CDTCJ. The 3D model of CDTCJ using the template 1SR4, consist of two domains that encompasses 8β-sheets and 3α-Helices (Figure 4). Verify3D and ERRAT were also used to further assess the quality of the CDTCJ model. Verify3D analyzes the compatibility of the model against its own amino acid sequence and results revealed that 59.86% of the residue had an average 3d-1D score 0.2. Verify3D and ProSA gave good scores for overall model quality. However, the ERRAT validation of CDTCJ model indicated regions where the calculated errors were higher than expected that decreases the overall quality score to 46.7%. Structure comparison analysis Comparative analysis of CDTCJ structure was performed using DaliLite v.3.3. server. This server is a network service for comparing protein structure in 3D and computes optimal and suboptimal structural alignments between two protein structures. It helps in understanding the fundamental role of proteins and their functions. The structural similarity relationships among protein structures allow users to infer the functions of newly discovered proteins [37]. The final refined model of CDTCJ was superimposed with template by using DaliLite. The superimposition of model to the template is shown in Figure 5. The result provided by DaliLite servers show the 851 alignments with compatible Z-score. The highest Z-score for structure from PDB ID: 2F2F, 1SR4 was 28.3, 27.5 and percent identity 38, 37 respectively. It is interesting to note that first two high Z-score proteins are 2F2F and 1SR4, were also used for the development of model 3D structure. Epitope prediction of protein antigens Potentially immunogenic regions of CDTCJ were predicted by using the SEPPA server. 
This server analyses 3D structures and aims at the division of antigens surface in epitopic and non epitopic patches on the basis of different propensity scores and solvent accessibility; they all rely on training datasets comprising resolved antibody/antigen complexes [38]. A total of 55 epitopes were predicted from 146 aa using default threshold value of 1.80. The predicted epitopes visualized with JMOL in different renderings are shown in Figure 6. In this structure, tints from blue to red represent a rising antigenicity. Highlighted epitope residues were predicted and shown in red solid spheres. The prediction results are also displayed in a table and each, residue is listed sequentially. The predicted epitope residues are highlighted in yellow and the core residues are shown in lowercase. Antigenic epitopes that are preferentially recognized by antibodies that can help in the design of vaccine components and immuno-diagnostic reagents [39]. Cytotoxic T-Lymphocytes (CTL) epitopes Epitope predictors are routinely tested on large sets of epitopes derived from various pathogens. Schellens et al. [40] identified eighteen new CTL epitopes out of a set of twenty two predicted CTL epitopes in HIV-1 using NetCTL. We screened all possible peptide fragments of 9aa within a particular protein, and eliminated those fragments that cannot be correctly processed by either the proteasome, TAP or the MHC class I molecules. Prediction results of CTL epitopes revealed that five MHC ligands were found in CDT sequence having high e-value score are positioned at 10 CCFMTFFLY 18 , 39 DT DPLKLGL 47 , 132 AQGNWIWGY 140 , 170 KTNTCLNAY 178 and 217 IQAPITNLY 225 . These are the immunodominant epitopes restricted by MHC class I located arbitrarily in the protein sequence. This data indicate that CTL epitopes in CDT are randomly distributed, and this distribution is similar to those of CTL epitopes in proteins from other proteomes. Protein interaction network mapping To compute protein-interaction properties of the CDT, we used the search tool for the retrieval of interacting genes and proteins (STRING) database of physical and functional interactions [41]. The prediction of CDTCJ interactions using protein structural similarities permit to construct various candidates interactions with possibly significant functional relevance. For this purpose, relation among the ten identified proteins was examined. The interaction network for genetically interacting proteins possibly related in function with C. jejuni is shown in Figure 7, and the detail information is presented in Table 1. Green lines indicate co-localization in genomes (likely operon structures), and blue lines indicate statistically significant co-occurrence across multiple genomes. A graph of the CDTCJ network shows the identified CDTCJ-interacting proteins and phylogenomic profiling of CDT-related functions. Figure 8 The predicted potential binding sites in CDT protein of C. jejuni. Pocket color description are indicated as: red -MPK, actinium -PAS, magenta -QSF, potassium -FPK, wheat -SFN, yellow -GHE, blue -CON and raspberry -PCS. The exact residue location information is given in Table 2. Ligand binding sites The potential binding sites (PBS) of proteins are those residues or atoms, which bind to ligands directly on protein surface; they are near to the ligand binding sites. 
After clustering the top three sites from different methods like PAS, QSF, FPK, SFN, GHE, CON, LCS, the MetaPocket 2.0 has predicted seven clusters for the protein structure, but we have presented here three best score pockets sites ( Figure 8). The first MetaPocket site (MPK1) consists of six pocket sites, the first pocket from GHECOM (GHE-1), the first pocket from LigisiteCS (LCS-1), the first pocket from Fpocket (FPK-1), the second pocket from PASS (PAS-2), the first pocket of Q-SiteFinder (QSF-1) and the first pocket from Concavity (CON-1) with total Zscore 11.06 and size of 6. The second MetaPocket site (MPK2) consists of four pockets, from SNF-1, FPK-2, QSF-3 and PAS-3 and the total Z-score is 7.61. The third MetaPocket site (MPK 3) consists of three pocket, from the second pocket of Q-SiteFinder (QSF-2), the third pocket from LigisiteCS (LCS-3), the third pocket from GHECOM (GHE-3) with total Z-score 2.90 and size of 3. Table 2 shows the potential binding sites from a predicted CDT protein of C. jejuni in residue. The header binding sites 1, 2 and 3 are designated for Meta-Pockets 1, 2, 3 respectively. In the case above, potential binding sites of three MetaPockets are given and they are shown in residue format with each line starting with 'RESI'. The residue described above is constructed in three parts: residue name, chain indicator and residue sequence number. Conclusions The purpose of the present study was to perform a global screening for new immunogenic HLA class I (HLA-I) restricted cytotoxic T cell (CTL) epitopes of potential utility as a vaccine candidate against campylobacteroisis. The five epitopes of CDTCJ were identified. It is anticipated that, the peptide 170 KTNTCLNAY 178 can serve as novel potential vaccine candidate against diarrhea. These results have important implications for the rational design of CTL epitope-based CDT campylobacteriosis diagnostics and vaccines applicable to all ethnic groups. The presented research offered a backbone for understanding structural and functional insights of CDT protein. The additional experimental work is required to validate this epitope. The identification of ligand-binding sites is often the starting point for protein function annotation and structure-based drug design. In this study, we identify three predicted potential binding sites in CDT protein of C. jejuni. These are active sites on protein surface that performs protein functions.
4,177.6
2014-02-19T00:00:00.000
[ "Biology", "Medicine" ]
Regression Model in Transitional Geological Environment For Calculation Farming and Production of Oil Palm Dominant Factor in Indragiri Hilir Riau Palm oil commodity is plantation sub-sector commodity which can increase the income of farmers and communities, providers of raw material processing industries that create added value. Cultivated by smallholders self consists of land area, peatlands tidal, coastal peatlands and coastal lands. Differences typology of this land will contribute to the different productions. Generally, this study aimed to analyze the factors of production and farming oil palm, according to the typology of land Specifically aimed to analyze the production and cultivation of oil palm as well as the dominant factor affecting the production Kalapa smallholders' according to the typology of the land and to formulate policy implications of oil palm development patterns of the people in Indragiri Hilir in Riau province. To answer this research analyzed with descriptive statistics and build a multiple regression model with dummy variables Ordinary Least Square method (OLS). Memperlihatan research results that palm oil production and farming on land typology highest compared with tidal peat, peat coast, and coastal lands. Oil palm farming income on a non-pattern land typology best compared with other lands (peat tides, coastal peatlands, and coastal land). The dominant factor affecting the production of palm oil in Indragiri Hilir is the amount of fertilizer, labor, plant age, herbicides, and soil typology dummy land. Policy Implications development of oil palm plantation in Indragiri Hilir in order to increase production, productivity and farm income oil palm can be through the construction of roads production, provision of means of production and palm oil processing industry to shorten the distance and shorten the time of transport that TBS of oil palm plantations to the factory. Furthermore, the use of fertilizers, labor and land typology is very responsive to TBS production. Therefore, in the farming of oil palm cultivation should follow the recommended technical. Introduction Development in the plantations directed to further accelerate the pace of production growth both from estates, Private and state plantations and nucleus estates of the people as well as plantations that are managed independently to support the construction industry, as well as improving the use and preservation of natural resources in the form of land and water , The role of such a large plantation sector to the increased use of farmers and provision of raw materials for the domestic industry as well as a source of foreign exchange (Heriyanto, 2017). Sub particularly palm oil plantation sector has a tremendous opportunity to be a mainstay on exports as a source of foreign exchange, increase the income of farmers and communities, providers of raw material processing industries that create added value.Also, oil palm plantations are also a significant source of food and nutrition in the menu of the population, so that their scarcity in the domestic market was highly significant in the economic development and welfare of the community (Laelani, 2011). 
Riau Province has one of the largest oil palm plantation areas in Indonesia; in 2017 the planted area reached 2.7765 million ha with a production of 9.0714 million tonnes, spread over twelve districts and one city (Badan Pusat Statistik, 2017). The plantations are managed under three forms of business: (1) large state estates managed by the State-Owned Enterprise PT. Perkebunan Nusantara V, (2) large private estates managed by national companies, and (3) community (smallholder) plantations managed by households as individual or self-help businesses. Oil palm plantations in Indragiri Hilir rank fifth in Riau Province. According to the plantation statistics, oil palm in Indragiri Hilir covered 109,017 ha with a production of 249,604 tonnes in 2013 and increased to 117,820 ha with a production of 721,084 tonnes in 2017, grown by 79,545 family heads spread across the district and supported by 12 palm oil mills (PKS). The sub-district with the largest oil palm area is Kemuning, with 39,388 ha producing 117,243 tonnes and managed by 34,363 households, while the smallest is Kuala Indragiri, with 39.00 ha producing 57.00 tonnes and managed by 35 households (Badan Pusat Statistik, 2016a, 2016b).

An overlay of the Riau Province Spatial Plan map (SK. 673, dated 29 September 2014) with the plantation distribution map shows that smallholder/self-help plantations in Indragiri Hilir lie in both forest and non-forest status areas. In the non-forest areas, designated for Other Land Uses, the land is free to be used, whereas in forest areas, namely convertible forest and production forest, land use is governed by forestry legislation, so obtaining legality or proof of land ownership is difficult.

The problems faced by self-help oil palm farmers in Indragiri Hilir are both technical and socioeconomic. Technically, oil palm development in wetland areas takes place on marginal land: wetlands are associated with acid sulfate soils and have poorer physical, chemical and biological properties than the mainland. Constraints on wetland development include non-uniform land conditions and dispersed locations, which make it difficult to control pests and diseases and to manage water; wetlands also have much lower productivity than the mainland (Tim IPB, 1192; Hardjoso dan Darmanto, 1996 in Noor, 2004). Socially and economically, the cost of establishing and maintaining plantations in swamp areas is higher than on the mainland: swamp areas rely on water transport and therefore face high transport costs, high prices for production inputs, high care and maintenance costs, and low prices for the fresh fruit bunches of independent smallholders. The factors of production influence palm oil production, and their optimal use yields optimal production.
Oil palm differs from other commodities because the mills need to be close to the farmers, so that harvested fruit can be delivered to a factory within about 24 hours and the oil does not develop a high free fatty acid content (Gatto et al., 2017; Harahap et al., 2017). The palm oil mills in Indragiri Hilir cannot be fully reached by all farmers: most factories are located on the mainland typology, so farmers on the wetland typologies (tidal peat, coastal peat and coastal land), who still rely on water and land transport, face longer travel times and higher costs. The factors of production in oil palm farming consist of nature or land, production inputs and labor. Self-help smallholdings are cultivated on mainland, tidal peatland, coastal peatland and coastal land, and this land typology contributes to differences in production alongside the other factors that influence smallholder oil palm production.

Based on the review above, this study aims in general to analyze farming and the dominant production factors under the transitional geological environment of Indragiri Hilir, Riau Province, and specifically to analyze oil palm production and farming, to identify the dominant factors affecting oil palm production according to land typology, and to formulate policy implications for the development of smallholder oil palm in Indragiri Hilir, Riau Province.

Plantation Development Concept The concept of agricultural development encompasses land resources, germplasm, water, technology, finance and human resources. Agricultural development aims to improve the income and welfare of farmers through increased agricultural production. Besides meeting the raw material needs of a growing domestic industry, it also aims to increase foreign exchange earnings from agricultural exports. One of the steps that can be taken to enhance the contribution of the agricultural sector is plantation crop production (Soekanda, 2001). Agricultural development covers several stages: (1) traditional agriculture, which is still extensive and does not make full use of modern inputs; (2) transitional agriculture, a step in the transition from traditional (subsistence) to modern agriculture; and (3) dynamic (modern) agriculture, which specializes in certain crops, intensifies the use of capital, adopts labor-saving technologies and pays attention to economies of scale, minimizing costs to obtain a given return.

Land Typology Concepts Land can be defined as an area on the surface of the earth covering all components of the biosphere that can be considered permanent or cyclical above and below that area, including the atmosphere, soil, parent rock, relief, hydrology, plants and animals, as well as the consequences of past and present human activity, all of which affect land use by humans now and in the future (Brinkman and Anthony, 1973; Champ and Charles Edward Date, 1976). Land can be seen as a system composed of (i) structural components, often called land characteristics, and (ii) functional components, often called land qualities. Land quality is essentially a group of land attributes that determines the level of capability and suitability.
Land use for agriculture can in general be divided into seasonal cropland, perennial cropland and permanent land use. Seasonal cropland is used mainly for seasonal crops grown in rotation, with or without intercropping, and harvested every season over a period of typically less than one year. Perennial cropland is used for long-lived crops that are kept until the plants are no longer economically productive, as in plantation crops. Permanent land use is directed at land that is not cultivated for agriculture, such as forests, conservation areas, and urban and rural areas. According to its physical form and ecosystem, agricultural land can be divided into two major groups: wetland and dryland.

Dryland Dryland is land used for agriculture with limited water, usually relying on rainfall. It has diverse agro-ecosystem conditions, is generally sloping, and its stability is poor or sensitive to erosion, especially if cultivation ignores the principles of soil conservation. Dryland generally lies in the highlands (mountainous areas), characterized by undulating topography; it receives and infiltrates rainwater, which is then channelled to lower areas either over the land surface (rivers) or through groundwater networks. Dryland farming can be distinguished by its physical condition as follows: a. Fields (shifting cultivation): dry farmland that is moved from place to place; the farmer makes no effort to preserve soil fertility, so productivity recovers only naturally, and when it does not recover, extensive grassland results. This shifting-cultivation system is a wasteful use of the land's natural resources. b. Upland: a continuation of the field system that arises when forest can no longer be opened for farming; upland farming is already settled. c. Gardens: permanent farms planted permanently with a single crop or in a mixture. d. Yards: plots of farmland around the house, bounded by a live hedge or a fence. e. Ponds: a wet form of farming located in dry environments. Ponds may hold still or running water; farming in ponds is usually carried out continuously with a production cycle of about 3-6 months, and fish farming in ponds is commercial or purely for family needs.

Wetlands Wetlands or lowlands are areas where the soil is saturated with water, either permanently or seasonally. According to Noor (2004), wetlands or lowlands are swamps, brackish areas, peat, or other water bodies, natural or artificial, flowing or inundated, fresh, brackish or saline, including marine areas whose depth at low tide is no more than six meters. Within the limits of the Ramsar convention, reservoirs, ponds and rice fields are included in the wetland group. According to Noor (2004), the wetland group comprises:
a. Swamp: a swamp or marsh is a wetland area that is always flooded naturally because its drainage is poor or because it lies lower than the surrounding area. Swampland is continuously or seasonally inundated, either naturally or man-made, and includes marine areas less than 6 m deep at low tide, namely swamps and tidal land. b. Peatland: by definition, peat is a type of organic material that accumulates naturally in a wet or saturated state. Pedologically, peat is a form of terrestrial land whose morphology and characteristics are shaped by organic matter (Ananto, 2017; Ananto and Pasandaran, 2007; Ardi and Teddy, 1992; Daryono, 2009; Noor, 2010; Safriyani et al., 2016; Sodik et al., 2016; Sulaeman and Abdurachman, 2002; Suriadikarta, 1969, 2009, 2012; Yuliani and Selatan, 2014; Zurich and Purba, 2014). Technically and practically, peatlands can be used as agricultural land, plantation land, a source of mining materials, swamp forest, and industrial raw material. Peat is a wetland ecosystem with various functions and benefits, especially hydrological ones; naturally, the peat ecosystem is always waterlogged, has a low (acid) pH and is nutrient-poor. c. Coastal land: the coastal area is the meeting zone between land and sea. Towards the land it covers those parts of the land still influenced by marine characteristics such as tides, sea breeze and salt intrusion, while towards the sea it covers the waters still influenced by natural processes on land, such as sedimentation and freshwater inflow, and by human activities on land. There are no sharp boundaries, so the limits of coastal areas are only imaginary lines determined by local circumstances.

Farming concept Farming is the activity of organizing or managing assets and natural resources for agriculture. It is a human effort to cultivate land in order to obtain plant or animal products without reducing the capability of the land to yield further harvests. Farming can also be interpreted as the organization of agricultural inputs and technologies in an agricultural business. Farm income has two elements: receipts and expenditure. Receipts are the total quantity of product multiplied by the unit selling price, while expenditure or costs are the value of the production inputs and other items used in the production process (Ahmadi, 2001). Production is related to revenue and production costs: the receipts obtained by farmers must still be reduced by production costs, that is, the total costs used in the production process.
Receipts in agriculture are the production expressed in money before deducting the expenses of farming activities (Mosher, 2002). Following Suratiyah (2008), income is the receipts less the total cost, with the formula

I = TR - TC (4)

where I = income (Rp), TR = total revenue (Rp) and TC = total cost (Rp). Farming costs are all expenditures used in farming and are divided into fixed costs and variable costs: fixed costs do not depend on the volume of production, while variable costs vary with production volume. Oil palm farm income analysis is carried out by calculating investment in fixed costs, variable costs, production, gross revenue and net revenue; interest expense, income tax and rent are calculated once the farming costs have been established. The farm analysis follows the formulas of Soekartawi (2005).

Dominant Factors Affecting Oil Palm Productivity Several factors affect the productivity of oil palm plantations: climate, landform, soil conditions, planting material and cultivation techniques (Pusat Penelitian Kelapa Sawit, 2006). Growth and productivity are influenced by many factors, both natural and human. They can be grouped into three: environmental factors, planting-material factors and cultural-technique factors (Risza, 2009). These three groups are interrelated and influence one another. Risza (2009) adds that plant age, plant population per hectare, the land conservation system, the pollination system, the harvesting and transport system, the production security system and the harvest premium system also affect oil palm productivity. The premium system motivates workers to devote more labor in order to reach the expected premium target; raising the premiums granted therefore increases production through the additional labor. From this description, the factors that affect oil palm productivity include land (topography), fertilizer use, plant population per hectare, seed type and cropping pattern, among others.

Fertilizer is an addition to and complement of the nutrients available in the soil, and fertilization directly affects productivity. Fertilization should take into account soil type, the age of the plant and weather factors so that it delivers maximum results; it covers the type of fertilizer, the dose, the method, and the timing and frequency of application. The key to successful fertilization is the 'five rights': the right time, the right dose, the right type of fertilizer, the right method and the right place of application.

The composition of productive plants (10-15 years old) relative to non-productive plants determines whether oil palm productivity falls. According to Risza (2009), oil palm productivity depends on the age composition of the plants: the larger the share of juvenile and old plants, the lower the productivity per hectare. This age composition changes every year and therefore contributes to the productivity achieved per hectare per year (Risza, 2009).
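As a small illustration of the income identity I = TR - TC above, the sketch below computes net farm income from a production quantity, a unit price and the two cost components; the numbers in the usage line are purely illustrative and are not taken from the survey data.

```python
def net_farm_income(quantity_kg: float, price_per_kg: float,
                    fixed_cost: float, variable_cost: float) -> float:
    """Net income I = TR - TC, where TR = quantity x unit price and
    TC = fixed + variable costs (all amounts in the same currency, e.g. Rp)."""
    total_revenue = quantity_kg * price_per_kg   # TR
    total_cost = fixed_cost + variable_cost      # TC
    return total_revenue - total_cost            # I

# Illustrative values only: 10,000 kg of FFB sold at Rp 1,200/kg,
# with Rp 3,000,000 fixed and Rp 4,500,000 variable costs.
print(net_farm_income(10_000, 1_200, 3_000_000, 4_500_000))  # 4,500,000
```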
Research methods This research was conducted in Indragiri Hilir, Riau Province, because the district has several land typologies: mainland, tidal peatland, coastal peatland and coastal land. The research used multistage sampling. The sampling areas were chosen with reference to the spatial maps of the sub-districts so as to select representative independent smallholders in Indragiri Hilir, as shown in Table 1. The sub-districts of Kemuning, Keritang, Kempas, Gaung and Tempuling represent the high category; Reteh, Batang Tuaka, Gaung Anak Serka and Enok represent the medium category; and Kateman, Pulau Burung and Concong represent the sub-districts with the smallest oil palm acreage. Based on village typology and field observations, 20 sample villages in 11 sub-districts were selected, with a total sample of 92 respondents.

Data analysis Palm oil production by land typology was analyzed with descriptive statistics. Before estimating the multiple regression model, the data had to be shown to be free of violations of the classical assumptions, namely multicollinearity, heteroskedasticity and autocorrelation (Gujarati, 2008; Intriligator, 1978; Pindyck and Rubinfeld, 2014; Thomas, 1997; Verbeek, 2000, 2017a). These classical tests can be regarded as econometric criteria for judging whether the results satisfy the assumptions of classical linear estimation. When the classical assumptions are fulfilled, the Ordinary Least Squares (OLS) estimator of the regression coefficients is the Best Linear Unbiased Estimator (BLUE) (Gujarati, 2003, 2008, 2011; Pindyck and Rubinfeld, 1998; Thomas, 1977; Verbeek, 2000), so that the estimates are obtained correctly and efficiently. One of the assumptions that must be met for the BLUE property is homoskedasticity; when it is not met, heteroskedasticity occurs, meaning that the error variance is not constant, and a non-constant error variance leads to conclusions that are invalid or biased.

To provide valid econometric results it is necessary to test several assumptions, covering the detection of normality, multicollinearity, heteroskedasticity and autocorrelation in the regression model (Gujarati, 2003, 2008, 2011; Pindyck and Rubinfeld, 1998; Thomas, 1977; Verbeek, 2000). Normality was checked with the Shapiro-Wilk statistic (Intriligator, 1978; Pindyck and Rubinfeld, 1998; Thomas, 1977; Verbeek, 2000, 2017b),

W = (Sum_i a_i x_(i))^2 / Sum_i (x_i - xbar)^2,

where the x_(i) are the ordered observations, xbar is the sample mean, and the a_i are the Shapiro-Wilk coefficients, with the degrees of freedom determined by the number of observations and the number of variables.

The multicollinearity test is used to determine whether there is correlation among the independent variables in the regression model. Multicollinearity in a model is detected by examining the Variance Inflation Factor (VIF) of the equation (Gujarati, 2003, 2008; Thomas, 1977): Variance Inflation Factor = 1/tolerance. The multicollinearity problem is considered very serious if the variance inflation factor is greater than 10 and not serious if it is at most 10.
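A minimal sketch of this estimation and the first two diagnostic checks (residual normality and VIF) is given below, using Python's statsmodels and scipy. The file name and column names are assumptions made for illustration; they are not the authors' actual dataset or code.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import shapiro
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical survey file: one row per farm with production (kg/ha/year),
# number of plants, fertilizer (kg), labor (working days), plant age (years),
# herbicide (l/ha), land-typology dummies (one typology as base) and a seed dummy.
df = pd.read_csv("oil_palm_survey.csv")

model = smf.ols(
    "production ~ plants + fertilizer + labor + plant_age + herbicide"
    " + d_mainland + d_tidal_peat + d_coastal + d_seed",
    data=df,
).fit()
print(model.summary())                 # R^2, F statistic, coefficient t-tests

# Shapiro-Wilk normality check on the residuals.
print("Shapiro-Wilk:", shapiro(model.resid))

# Variance inflation factors; values above 10 would signal serious multicollinearity.
X = sm.add_constant(df[["plants", "fertilizer", "labor", "plant_age", "herbicide",
                        "d_mainland", "d_tidal_peat", "d_coastal", "d_seed"]])
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=X.columns[1:],
)
print(vif)
```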
Heteroskedasticity detection is used to determine whether the variance of the disturbance term is non-constant across observations. Heteroskedasticity was detected using the Breusch-Pagan test (Pindyck and Rubinfeld, 1998; Thomas, 1977; Verbeek, 2017b), in which the error variance is modelled as a function h(.) of variables z that are thought to affect the disturbance variance, where h is a continuous function that does not depend on the individual observation, with h(.) > 0 and h(0) = 1. An insignificant Breusch-Pagan statistic indicates that there is no heteroskedasticity problem.

The autocorrelation test is used to determine whether, in a linear regression model, there is correlation between the disturbances of observations made at different times. Autocorrelation was tested with the Durbin-Watson statistic (Pindyck and Rubinfeld, 1998; Thomas, 1977; Verbeek, 2017b),

d = Sum_t (e_t - e_(t-1))^2 / Sum_t e_t^2,

where d is the Durbin-Watson coefficient, t indexes the observations, n is the sample size and e the residuals. The obtained d value is compared with the lower and upper bounds dL and dU: if 0 < d < dL or 4 - dL < d < 4, there is autocorrelation; if dL < d < dU or 4 - dU < d < 4 - dL, the presence of autocorrelation cannot be determined; and if dU < d < 4 - dU, there is no positive or negative autocorrelation.
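Continuing the sketch above, the two remaining diagnostics can be computed directly from the fitted model. Again this is only an illustrative sketch, not the authors' code, and `model` refers to the OLS fit from the previous listing.

```python
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson

# Breusch-Pagan: an insignificant LM statistic suggests homoskedastic errors.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(model.resid, model.model.exog)
print(f"Breusch-Pagan LM = {lm_stat:.2f}, p = {lm_pvalue:.3f}")

# Durbin-Watson: compare d with the tabulated dL and dU for n = 92 and k = 8;
# dU < d < 4 - dU indicates no positive or negative autocorrelation.
d = durbin_watson(model.resid)
print(f"Durbin-Watson d = {d:.3f}")
```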
Smallholder palm oil production and farming by land typology Palm oil production is the output obtained by farmers from managing their oil palm farms. Production, together with price, determines the revenue received: the higher the production, the higher the revenue at a given price, so oil palm growers seek to obtain maximum production. The production of self-help (non-scheme) oil palm differs across the land typologies in Indragiri Hilir, as shown in Fig. 1. The average production of self-help palm oil, from highest to lowest, is 11,351.7 kg/ha/year on the mainland, 9,703.2 kg/ha/year on tidal peatland, 9,344.9 kg/ha/year on coastal peatland and 7,250.5 kg/ha/year on coastal land. Fig. 1 shows that the high FFB production on the mainland typology reflects the land type, the seed varieties, fertilizer application, and maintenance by herbicide spraying and slashing, whereas on the other typologies some farmers use seed of unclear type. On the mainland most farmers apply an equilateral-triangle planting pattern, while on the other typologies most apply quadrangular or irregular patterns. At planting time, 52.94% of farmers on the mainland used fertilizer, compared with 43.75% on tidal peatland, 29.41% on coastal peatland and 33.33% on coastal land; in the producing (TM) phase the shares were 88.24% on the mainland, 18.75% on tidal peatland, 76.47% on coastal peatland and 100% on coastal land. Production costs by land typology are compared in Fig. 2.

Fig. 2 shows that the cost of oil palm production is highest on tidal peatland, followed by the mainland and coastal land, while the lowest production cost is on coastal peatland. The high production cost on tidal peatland arises because, in addition to extraction costs, farmers on this typology incur maintenance costs on producing palms, from manual slashing of weeds and herbicide spraying to fertilizing and the pruning and cleaning of fronds and the palm circle, whereas farmers on coastal land do not clear manually and only control weeds by spraying herbicide. Labor costs on tidal peatland and coastal land are high because of the distance from house to plantation and the condition of the plantation: the farther the distance, the higher the costs, and the taller the grass or weeds, the higher the wages.

Net income from oil palm farming by land typology is shown in Fig. 3. Farmers' incomes are highest on the mainland typology, Rp 5,939,517.51, followed by tidal peatland at Rp 559,143.50, coastal land at Rp 350,000 and coastal peatland at Rp 215,000. The low income of self-help oil palm farmers is caused by the high prices of inputs, the low prices of outputs, and the long journey of FFB from the field to the collection point (over 12 hours), during which fruit detaches from the bunches, lowering the selling price and adding costs that must be borne by the farmers. Oil palm growers on coastal peatland did not perceive losses because revenue from FFB sales, the imputed wages of their own labor and plant depreciation were treated as revenue rather than as costs. Net income on the mainland is higher than on tidal peatland, coastal peatland and coastal land: on average 10.6 times higher than on tidal peatland and 116.3 times higher than on coastal land. The high net income on the mainland relative to the other typologies arises because (1) FFB production on the mainland is higher than on tidal peatland, coastal peatland and coastal land, and (2) the FFB selling price on the mainland is also higher. Net income on coastal land is higher than on coastal peatland because production costs in the coastal area are relatively lower than on coastal peatland.

Dominant Factors Affecting Palm Oil Production The estimated model of palm oil production factors fits the data well: the coefficient of determination (R^2) is 0.6809, meaning that 68.09 percent of the variation in production is explained by the number of plants, the amount of fertilizer, labor, plant age, herbicide, the coastal land dummy, the tidal peat dummy, the mainland dummy and seed type, while the remaining 31.91 percent is influenced by variables not included in the model. The model is significant at the 1 percent level, with an F statistic of 19.2 and a probability < 0.0001.
The statistical test for normality using the Shapiro-Wilk statistic gives a value of 0.05 for palm oil production, significant at the 10 percent level. The multicollinearity test shows that the VIF values for all independent variables (number of plants, amount of fertilizer, labor, plant age, herbicide, the coastal land dummy, the tidal peat dummy, the mainland dummy and the seed-type dummy) are below 10. The heteroskedasticity test gives a Breusch-Pagan statistic of 30.95, which is not significant at the 10 percent level. The Durbin-Watson (DW) value of the estimated model is 1.808; with n = 92 and k = 8, the DW table at the 1 percent level gives dL = 1.336 and dU = 1.714. Taken together, this indicates that the data are normally distributed and that there is no multicollinearity, no heteroskedasticity and no autocorrelation.

The dominant factors affecting oil palm production in Indragiri Hilir can be read from the estimation results of the model of factors affecting smallholder oil palm production, shown in Table 2. The number of plants, the tidal peat dummy, the coastal land dummy and seed type do not significantly affect palm oil production in Indragiri Hilir. The number of plants has no effect because self-help (smallholder) oil palm farmers in Indragiri Hilir apply varying planting patterns, with plant numbers ranging between 70 and 300 stems/ha, whereas the ideal number is between 128 and 143 stems/ha; adding to the number of palms therefore does not significantly raise production. Coastal land also has no significant effect on production compared with the other typologies because its characteristics do not differ markedly from the other land types (tidal peat and inland). Likewise, tidal peatland has no significant effect compared with the other typologies (coastal and inland) because, as described earlier, it has almost the same characteristics and lies between the coastal and inland areas; in addition, the dikes, canals and sluices serving production are largely not well established, and oil palm produce in Indragiri Hilir is still largely moved by water transport. Seed type does not affect production because it is difficult to distinguish superior from non-superior seed: farmers do not buy certified seed directly from breeders but obtain sprouts and seedlings from retailers, and the prevailing prices are far below those of certified seed. The production factors that do have a significant effect on production are fertilizer use, labor, plant age, herbicide and the mainland dummy. These five factors are discussed in turn below.
a. Fertilizer use. The estimation results show that fertilizer use has a positive effect on the amount of palm oil production and is significantly different from zero at the 10 percent level, so H0 is rejected and Ha is accepted. Adding one kilogram of fertilizer increases production by 5.9176 kilograms. Fertilizer is a production input with an important role in the growth of oil palm; this finding is in line with Heriyanto et al. (2018) and Mustofa et al. (2010). Good growth of the palms translates into palm oil production. Moreover, fertilizer supplies plant nutrients that nature does not provide at all, or not in sufficient amounts, for the growth and production of oil palm, and the majority of oil palm farms in Indragiri Hilir are on marginal land.

b. Labor. The estimation results show that labor has a positive effect on the amount of palm oil production in Indragiri Hilir and is significantly different from zero at the 1 percent level, so H0 is rejected and Ha is accepted. An increase of one working day (HOK) of labor raises production by 28.35857 kilograms.

c. Mainland. The estimates show that the mainland typology has a positive effect on oil palm production in Indragiri Hilir and is significantly different from zero at the 1 percent level, so H0 is rejected and Ha is accepted. Farming oil palm on the mainland yields 2,033.881 kg more than on the other typologies (tidal peatland and coastal land). This is because the mainland is the most suitable land for oil palm and can influence production by 14.56% (… and Purba, 2001).

d. Plant age. The estimation results show that plant age has a positive effect on the amount of palm oil production and is significantly different from zero at the 10 percent level, so H0 is rejected and Ha is accepted. The coefficient of plant age has a positive sign of 800.2312, which means that every additional year of plant age increases production by 800.2312 kg/ha; it can be concluded that the oil palms in Indragiri Hilir are largely still of productive age.

Policy Implications To achieve the agricultural development policy for the plantation sub-sector set out in the Strategic Plan of the Directorate General of Estate Crops for 2015-2019, a policy consisting of a general policy and a technical policy has been drafted. The general policy aims to synergize all plantation resources in order to increase the competitiveness of the plantation business, its added value, productivity and product quality, through the active participation of the plantation community and the application of modern, science- and technology-based organization supported by good governance. The technical policy for estate development is aimed at improving the production and productivity of oil palm plantations.
The production and productivity of smallholder (self-help) oil palm plantations in Indragiri Hilir can be increased in the following ways. First, by extending the planted area, planting oil palm on vacant or new land while applying the latest technological innovations adapted to the soil conditions; smallholder oil palm development in Indragiri Hilir takes place on the tidal peat, coastal peat and coastal typologies. Second, by using quality seed to raise the production and productivity of smallholder oil palm; the recommended seed types are Marihat, Topaz, Socfin and Lonsum. Third, the production and productivity of smallholder oil palm can be increased by improving plant maintenance from planting through to the producing stage, devoting more labor to maintenance by slashing or by spraying herbicides, and by applying fertilizer from planting to the producing stage according to the recommendations, as an additional supply of the nutrients that nature does not provide and to neutralize the soil. Water management works should also be built to maintain water circulation, keep the water level and prevent seawater intrusion. Developing the infrastructure of oil palm plantations can further support cultivation, post-harvest handling and marketing.

e. Herbicides. The estimation results show that herbicide use has a positive effect on the amount of palm oil production and is significantly different from zero at the 10 percent level, so H0 is rejected and Ha is accepted. The herbicide coefficient has a positive sign of 138.0101, which means that every additional liter per hectare of herbicide increases palm oil production by 138.0101 kg/ha. It can be concluded that weed pests are still present in much of the oil palm in Indragiri Hilir district and have an impact on palm oil production.

Conclusion Based on the analysis and discussion, the conclusions of this study are as follows: 1. Oil palm production, productivity and farming on the mainland typology are the highest compared with the tidal peat, coastal peat and coastal typologies, and smallholder oil palm farm income on the mainland typology is the best compared with the other land types (tidal peat, coastal peat and coastal land). 2. The dominant factors affecting palm oil production in Indragiri Hilir are the amount of fertilizer, labor, plant age, herbicides and the mainland typology dummy. 3. The policy implication is that production, productivity and farm income of smallholder oil palm in Indragiri Hilir can be increased through the construction of production roads, the provision of production inputs, and the development of a palm oil processing industry that shortens the distance and the transport time of FFB from the plantations to the mill. Furthermore, production of FFB is very responsive to the use of fertilizer, labor and land typology, so oil palm cultivation should follow the recommended technical practices.
Fig. 1. Average FFB production by land typology. Farmers on the mainland typology all use seed of the Topaz, Marihat and Lonsum types.

Table 1. Number of sampled self-help oil palm farmers by land and village typology in Indragiri Hilir.

* Significant at the 10 percent significance level. Based on the model estimation results in Table 2, five variables significantly affect smallholder palm oil production: the amount of fertilizer, labor, plant age, herbicide and the mainland dummy; the number of plants, the tidal peat dummy, the coastal land dummy and seed type are not significant.
8,859.2
2019-03-01T00:00:00.000
[ "Economics", "Agricultural And Food Sciences" ]
Alterations in Surface Gloss and Hardness of Direct Dental Resin Composites and Indirect CAD/CAM Composite Block after Single Application of Bifluorid 10 Varnish: An In Vitro Study

The surface characteristics of a restorative material are essential to its longevity. Since resin composites are polymeric-based materials, they can be degraded when exposed to oral conditions and chemical treatment. Certain chemical agents, such as fluoride varnish, have the potential to deteriorate the resin composite's surface properties such as gloss and hardness. The current study aimed to assess and compare the surface gloss and hardness of different types of dental resin composites (nanohybrid, ormocer, and bulk-fill flowable direct composites, and indirect CAD/CAM resin composite blocks (BreCAM.HIPC)) after a single application of Bifluorid 10 varnish. A total of 80 disc-shaped resin composite specimens were evenly distributed in four groups of 20 specimens. These were divided into two equal subgroups of specimens with topical fluoride (TF) application (n = 10) and without TF application (n = 10). The specimens were examined for surface gloss and hardness. An independent sample t-test was used to investigate statistically the effect of TF on the gloss as well as the hardness of each material. One-way ANOVA and post hoc tests were used to assess the difference in gloss and hardness among the materials without and with TF application. The significance level was set at p ≤ 0.05. The gloss results showed that TF application led to a significant reduction in the gloss values of all tested composites, and gloss differed significantly among the various materials. TF had no significant effect on the hardness of the nanohybrid, bulk-fill flowable, and BreCAM.HIPC composites (p = 0.8, 0.6, and 0.3, respectively); on the other hand, the hardness of ormocer was significantly reduced after TF application. Comparing the different resin composite materials, hardness differed significantly. This study concluded that surface gloss and hardness appear to be affected by the type and composition of the resin composites and vary depending on fluoride application.

Introduction In dental practice, dentinal hypersensitivity (DH) is a prevalent condition, especially in patients with abrasion, gingival recession, and tooth erosion [1]. It is a painful condition generated by exposed cervical dentin, causing severe, stabbing pain [2]. It affects around 20% of the population [3]. Dentin hypersensitivity can be managed in the dentist's office with specific topical agents or through self-applied therapy at home. Topical application of gels, solutions, varnishes, resin sealers, and dentin adhesives are examples of in-office desensitizing treatment methods for managing hypersensitivity [4]. Fluoride varnishes are one of the most widely used dental care techniques nowadays. Most fluoride varnishes comprise 5% sodium fluoride; nevertheless, manufacturers may use various fluoride concentrations and forms [5]. Sodium fluoride varnish is regarded as the gold-standard desensitizing substance [6]. Bifluorid 10 (5% sodium fluoride and 5% calcium fluoride) adheres to the teeth and begins to provide instant protection from any damaging stimulus [7]. Moreover, it encourages calcium fluoride to precipitate, sealing the open dentinal tubules [5]. The biocompatibility of dental restorations and the use of natural polymers and inert dental materials are of great concern in dentistry [8].
Direct and indirect dental resin composites are used in daily clinical practice due to their esthetics.The durability of the resin composites is significantly influenced by the properties of the surface.Chemical treatments or even preventive measures like fluoride varnish and other dental hygiene products can harm the surface of restorative materials [9,10].They may have adverse effects on resin composites, like discoloration, surface erosion, reduced surface hardness, and the dissolving of inorganic fillers [11]. A more realistic appearance with improved esthetic and surface qualities has been attained by numerous advancements in dental composites.Surface gloss has a significant impact on whether composites look visually appealing [12].The loss of gloss in resin composites has a detrimental effect on esthetics, since it determines whether the surrounding teeth are harmonious or not [13].One of the most crucial elements in attaining superior clinical results and satisfactory esthetics is the restoration's surface quality [10].The surface quality of the material directly affects its ability to reflect direct light.Surface gloss is frequently used to determine dental materials' surface characteristics [9].The gloss is a result of the arrangement of light reflected on the surface and is closely correlated with surface roughness [9].Increased surface gloss improves the esthetic appearance of the resin composites [14].The clinical outcome of restoration depends on the surface hardness of the resin composites [15].Changes in oral conditions may have the potential to damage the surface of the resins and change their surface hardness by influencing degradation of the organic part of the resin matrix [15,16]. Recent resin composites typically express surface and optical properties that closely mimic those of the natural dental structure and demonstrate attractive clinical endurance [17].Resin-based composites are comprised of a matrix of organic substances, coupling agents and inorganic fillers, and other elements such as polymerization inhibitors, initiators, accelerators, photosensitizers, and photoinitiators.Advancements in filler technology (such as nanohybrid composites) and matrix formulations (e.g., ormocer and bulk-fill flowable composites) provide enhanced mechanical properties of the recent composite materials [12].In addition, the introduction of indirect CAD/CAM resin composite blocks enhances their esthetic appearance and mechanical performance [18].Composite CAD/CAM blocks were introduced as a material to enhance and adequately cure under elevated pressure and temperature, resulting in an increased conversion rate.The manufacturers claimed that it provides enhanced mechanical, chemical, and optical properties, which increase gloss retention, which is an important factor for a better esthetic appearance [19]. 
There is no information available comparing the gloss retention and hardness of the newly introduced CAD/CAM composites and other direct resin composites when exposed to topical application of single-dose fluoride varnishes. This in vitro study aimed to assess the surface gloss and hardness of different types of dental resin composites (nanohybrid, ormocer, and bulk-fill flowable direct composites, and indirect CAD/CAM resin composite blocks) after a single application of Bifluorid 10 varnish. The null hypothesis was that there is no difference between the tested types of resin composites as regards their surface gloss and hardness, and that a single application of Bifluorid 10 varnish does not alter the surface gloss or hardness of the tested types of resin composites.

Materials and Methods This study measured the change in the surface gloss and hardness of three direct dental resin composites and one indirect CAD/CAM composite block after a single application of Bifluorid 10 varnish. The three commercial direct resin-based composite materials were a nanohybrid composite, an ormocer composite, and a bulk-fill flowable composite, while the indirect CAD/CAM resin-based composite block was BreCAM.HIPC. Bifluorid 10 varnish was used in a single application. The commercial materials used in this study are listed in Table 1.

Sample Size Calculation The data for the sample size calculation considered hardness and gloss, and the larger required sample size was chosen. A standard sample size calculation was performed according to previous studies conducted by Gehlot et al. and Mikami et al. [15,20], in which the formula for analysis of variance was applied in G*Power statistical software (version 3.1.9.7). The average standard deviation (SD) was 0.150, with an alpha value of 0.05, a beta (β) level of 0.10 (10%), i.e., power = 90%, and an effect size (f) of 0.78. The minimum sample size needed with this effect size is n = 9 per group to test surface gloss and hardness. Thus, the obtained sample size of n = 10 per group is more than adequate to test the study hypothesis.

Study Design A total of 80 disc-shaped resin composite specimens (8 mm diameter × 1 mm thickness) were made, standardized, and evenly distributed in four groups of 20 disc-shaped specimens. These were divided into two equal subgroups of specimens with topical fluoride (TF) application (n = 10) and without TF application (n = 10) (Figure 1). The specimens were examined for surface gloss and hardness.
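The G*Power calculation described above can be approximated in Python with statsmodels' power routines. The effect size, alpha, and power are those stated in the text; the number of groups entered into G*Power is not reported, so the four material groups assumed here make this only an approximate re-creation, and the resulting n may differ slightly from the value quoted.

```python
import math
from statsmodels.stats.power import FTestAnovaPower

# Reported parameters: effect size f = 0.78, alpha = 0.05, power = 90%.
effect_size, alpha, power = 0.78, 0.05, 0.90
k_groups = 4  # assumption: one group per tested material

total_n = FTestAnovaPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, k_groups=k_groups
)
per_group = math.ceil(total_n / k_groups)
print(f"total N = {total_n:.1f} -> at least {per_group} specimens per group")
```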
Specimen Preparation Disc-shaped Teflon molds with a dimension of 8 mm diameter and 1.1 mm thickness were filled with each type of direct resin composite [21]. The composites were gently pressed against transparent polyester strips (Mylar strip; SS White Co., Philadelphia, PA, USA) and a glass slide. The resin composites were then light-cured for 20 s using a high-power light-emitting diode (LED) curing unit (Mini LED, Satelec, Acteon, Viry-Châtillon, France) placed in contact with the 1 mm glass slide for distance standardization, at an irradiance of 1200 mW/cm² and a wavelength of 400-500 nm, measured with an LED radiometer (Demetron, Kerr, Halluin, France). Afterward, all specimens were polished on the measuring side for 30 s using a fine composite polishing kit (Shofu Composite Polishing Kit, Shofu Dental GmbH, Ratingen, Germany) with the purpose of achieving a final thickness of 1 mm. The indirect CAD/CAM resin composite blocks were reduced to the same standardized dimensions using a CAD/CAM machine (group LU, Lava Ultimate A2 LT, 3M USA, Saint Paul, MN, USA) to obtain disc-shaped specimens, and each specimen was then manually reduced with the same polishing kit system mentioned above to obtain the 1 mm thickness. All specimens were stored in distilled water at 37 °C for 7 days in an incubator (CBM, S.r.l. Medical Equipment, 2431/V, Cremona, Italy). All specimens were then coated on one surface with a standardized thin coat of Bifluorid 10 (VOCO GmbH, Cuxhaven, Germany) applied with a brush as a single dose. The coating was allowed to be absorbed for 20 s and then dried gently with air. Specimens were then stored in 20 mL of artificial saliva for 24 h in an incubator at 37 °C.
Prior to testing, the remnants of the coating were gently cleaned off using a low-speed handpiece and a nylon bristle brush for 4 min. The specimens were then evaluated under a stereomicroscope (Olympus, Tokyo, Japan) to ensure complete removal of the coating.

Testing Procedures 2.4.1. Surface Gloss Test The surface gloss of each specimen was measured using a glossmeter (ZGM 1130, Zehntner GmbH Testing Instruments, Sissach, Switzerland). Gloss was determined by directing a beam of light at an angle of 60° to the surface of each specimen [22] and measuring the light reflected at the same angle. Before the gloss was measured, the equipment was calibrated against a black glass standard supplied by the manufacturer, which had a reference value of 93.7 gloss units (GU). To prevent exposure to outside light, the specimens were placed on the glossmeter's top plate and covered with a black cover while the gloss was being measured. Five measurements were taken at the center of every specimen, from which the mean value was calculated for each one, and the gloss measurements were recorded.

Surface Hardness Test The surface microhardness of each specimen was determined using a digital Vickers hardness tester (NEXUS 400TM, INNOVATEST, model no. 4503, Maastricht, The Netherlands). The indentations were made with a 15 s dwell time at a load of 100 g and 20× magnification [23]. The mean surface microhardness value for each specimen was calculated.

Statistical Analysis Results are expressed as mean and standard deviation. The statistical analysis was performed using the Statistical Package for the Social Sciences (12.0, SPSS Inc., IBM, Chicago, IL, USA). Based on the results of the normality tests conducted with the Shapiro-Wilk and Kolmogorov-Smirnov tests, an independent sample t-test was used to investigate statistically the effect of TF on the gloss as well as the hardness of each material. One-way ANOVA (analysis of variance) and post hoc tests were used to assess the difference in gloss and hardness among the various materials initially without TF application, and the same analysis was used to assess the differences between the materials after TF application. The significance level was set at p ≤ 0.05.

Surface Gloss Results The mean surface gloss values of the various materials without and with TF application are displayed in Table 2.

Effect of TF on Gloss of Each Material TF application led to a significant reduction in the gloss values of all tested resin composite materials (p = 0.0001).

Gloss of Various Materials (Initially without TF and after TF Application) Without TF application, the gloss values were as follows in ascending order, with significant differences between them (p = 0.0001): BreCAM.HIPC showed the lowest gloss value (53.6), followed by ormocer (57), then the Luna nanohybrid resin composite (60.7), and the bulk-fill flowable resin composite showed the highest gloss value (65.7). Similarly, after TF application, the gloss values were as follows in the same ascending order, with significant differences between them (p = 0.0001): BreCAM.HIPC showed the lowest gloss value (48.4), followed by ormocer (50.3), then the Luna nanohybrid resin composite (52.2), and the bulk-fill flowable resin composite showed the highest gloss value (54.9).

Surface Hardness Results Table 3 displays the mean surface hardness (VHN) of the various materials without and with TF application.
Effect of TF on the Hardness of Each Material TF had no significant effect on the hardness of the nanohybrid, bulk-fill flowable, and BreCAM.HIPC resin composites (p = 0.8, 0.6, and 0.3 respectively).On the other hand, the hardness of ormocer was significantly reduced after TF application (p = 0.0001 *). Hardness of Various Materials (Initially without TF and after TF Application) Without the application of TF, BreCAM.HIPC showed the least hardness (26.9 VHN), (p = 0.0001), while, the nanohybrid and ormocer displayed the highest hardness (p = 0.0001), with no significant difference between these two types (p = 0.9; 46.9 VHN and 44.8 VHN, respectively).Meanwhile, the bulk-fill resin composite showed intermediate results (38.19 VHN). Whereas, after TF application, the hardness values were as follows in ascending order with significant differences between them (p = 0.0001): ormocer showed the lowest hardness value (20.8 VHN), followed by BreCAM.HIPC (25.7 VHN), then the bulk-fill flowable resin composite (37.4 VHN), and the Luna nanohybrid showed the highest hardness value (46.6 VHN). Discussion The aim of this in vitro study was to compare the surface gloss and hardness of three direct resin composites (nanohybrid composite, ormocer composite, and bulk-fill flowable composite), and one indirect CAD/CAM resin composite block (BreCAM.HIPC) after challenging their surfaces with a single application of Bifluorid 10 varnish. Applying topical fluoride is a therapeutic and preventative dental therapy that can help improve tooth sensitivity, prevent tooth decay, arrest it, or slow it down, inhibit dental plaque, and promote tooth remineralization [24].Professionally applied topical fluoride is a kind of topical fluoride application in which the dentist can deliver it to the patient in the dental clinic in gel or varnish form.Applying fluoride varnish is an easy, affordable, accessible, and practical way for practitioners to achieve superior results [25,26].Bifluorid 10 comprises a mixture of sodium fluoride (NaF) and calcium fluoride (CaF).The high calcium content of saliva and dentinal fluid causes NaF to dissociate and release F ions, which diffuse through the tubules and precipitate when the dentin hypersensitivity is reduced by the calcium fluoride.The calcium fluoride (CaF) part of the varnish composition diffuses into the tubules and creates a semi-permanent barrier to occlude the tubules.By combining the calcium fluoride produced by the sodium fluoride reaction with the calcium in the dentin, the calcium fluoride is added to mechanically obstruct the dentin tubules [27,28].The primary benefits of Bifluorid 10 include rapid desensitization and the formation of a shielding barrier from thermal and mechanical provocations.Nevertheless, it can cause teeth staining [27].However, the possible determinantal effect of recent direct and indirect resin composite restorations has not been documented. Gloss is an optical property related to how light is distributed across an object's surface through reflection, scattering, and absorption [29].A surface's gloss value, which measures the amount of light reflected at the same angle as the incident light, is a parameter employed to assess how smooth a surface is [30].The inorganic filler type, load, and distribution, in addition to the refractive index and the thickness of the resin composites, have an impact on the surface gloss of the restorative materials [31]. 
The surface hardness of resin composite restorative materials indicates their resistance to scratching during service; however, the size and amount of filler content in the material may have an impact on this property [17,32,33].Topical application of fluoride may lead to adverse effects on the resin composites, including surface erosion, gloss reduction, dissolving of inorganic fillers, and reduced surface hardness [34].The main composition of the resin composites is an organic matrix and inorganic filler particles.It provides a heterogeneous nature to their microstructure, which is a challenging factor that could be reflected in surface features of the resin composites [10]. Nanotechnology has enabled the development of unique nanohybrid resin composites containing a mixture of different types and sizes of fillers particles (nanosized and conventional micron-sized) within the matrix.The decreased filler particle sizes and increased filler volume percentages enhance their physical and mechanical properties [35]. Bulk-fill flowable resin composites were recently introduced to permit the opportunity of applying materials of thickness as high as 4 to 6 mm.In order to enhance the depth of polymerization and boost the material's translucency, bulk-fill composite resins employ more reactive and different photoinitiators.Additionally, the amount of filler is decreased, while the size of the filler particles is raised (micron-sized) [36].Moreover, they contain a higher proportion of diluent monomers in their composition to decrease the viscosity of the mixture [37]. Ormocer, which stands for organically modified ceramics, refers to the alterations performed on the resin matrix [38].In contrast to conventional composites, ormocers have a matrix that contains both organic and inorganic materials.The ormocer matrix is based on using a saline precursor [38].Ceram.X Mono combines both ormocer and nanotechnology.It combines traditional glass fillers with organically modified spherical ceramic nanoparticles and nanofillers [39].The spherical pre-polymerized nanofillers and the organically modified resin matrix were claimed to produce enhanced mechanical and optical characteristics [40]. Recent advancements in resin composite technology, in conjunction with improvements in computer-aided design and computer-aided manufacturing (CAD/CAM), have provided an indirect resin composite block that is suitable to be used as a digital veneering material [41].High-impact polymer composite (HIPC) technology delivers a composite characterized by a cross-linked, amorphous, heat-cured PMMA matrix, reinforced with ceramic microfiller.It is assumed to have superior physical and mechanical properties to conventional light-cured polymethyl methacrylate (PMMA), due to the absence of dental glasses and residual uncured monomers [42,43]. 
Bifluorid 10 was selected for use in the current research as it has shown clinically confirmed effectiveness for the treatment of dentin hypersensitivity [4]. Optimal standardization of the amount of fluoride varnish used was achieved by using the single-dose form. Regarding storage, the Bifluorid 10 was kept in a refrigerator at 4 °C as advised by the manufacturer. The four types of resin composites were selected in the current study according to the type of polymerization (direct and indirect composites) and based on the filler size, type, and content (nanohybrid, ormocer-based, and bulk-fill flowable composites). Moreover, the specimens were stored in artificial saliva for 24 h in an incubator at 37 °C to simulate oral conditions. Different storage media could be used, such as distilled water, saline, ethanol, and artificial saliva. Artificial saliva was selected as it has the simplest physiological effect on the specimens, standardizing the variables and simulating clinical situations [44,45]. The null hypothesis was rejected, as the results of this study revealed a significant difference in surface gloss and hardness among all of the tested types of resin composites before TF application. Moreover, TF application led to a significant reduction in the gloss values of all tested resin composites. Although TF had no significant effect on the hardness of the nanohybrid, bulk-fill flowable, or BreCAM.HIPC resin composites, the hardness of the ormocer was significantly reduced. The results showed that TF application generally leads to a significant reduction in the surface gloss of all tested resin composites. The alteration in surface gloss may be due to deterioration of the organic matrix and composition of the resin composites, to varying degrees, as a result of the action of fluoride [17]. The adverse effect of a single application of in-office Bifluorid 10 varnish on the surface hardness may be related to possible degradation of the resin matrix and the dislodgment of filler particles [17]. An inverse relationship has been observed between the filler content of a resin composite and the extent of surface deterioration: materials with lower filler loading typically exhibit more deterioration of the surface layer. Moreover, the only resin composite whose hardness was affected after TF application was the ormocer. This finding may be attributed to the lower degree of conversion of the ormocer matrix in comparison to the dimethacrylate-based composites (nanohybrid and bulk-fill flowable composites) [46]. The resistance of the nanohybrid and bulk-fill flowable resin composites to scratching after application of TF may be due to both of them being urethane dimethacrylate-based (UDMA) composites, which have more resistance to degradation effects [46]. In addition, UDMA-based composites have low viscosity and allow for a higher degree of conversion; they comprise hydrogen bonding, thereby improving conversion rates and enhancing mechanical characteristics [47,48]. Moreover, the BreCAM.HIPC groups, at the same time, exhibited a high resistance to the deterioration effect of the TF application, which might be explained by the low percentage of residual monomers and the formation of cross-linked composites, in addition to the presence of ceramic microfillers, which may resist the harmful effect of fluoride [49]. Han et al.
confirmed the previous finding, as they showed a direct correlation between the resistance of resin composites to degradation and the distribution of fillers on their surface [49]. In addition, the heat-curing laboratory procedure may produce a very low percentage of residual monomers with a predictable maximum curing quality compared to the conventional light-curing procedure performed in the direct resin composite clinical curing technique [19,50]. Generally, it was observed that both the nanohybrid and bulk-fill flowable composites showed a higher degree of surface gloss and hardness after the application of TF compared to the other tested types of resin composites, which may be caused by the smaller size of the fillers in their matrix compared to those in the other tested types. Nanohybrid resin composites include nanosized filler particles that could be removed evenly, along with the resin matrix, without noticeable adverse effects on hardness after exposure to TF [30]. Moreover, a smaller nanosized filler provides higher gloss values [51]. The higher surface microhardness of the nanohybrid resin composite compared to the bulk-fill flowable resin composite may be due to its higher filler content, which resists surface scratching [52]. Meanwhile, the initially higher surface hardness of the ormocer before TF application may be due to the development of a smooth surface as a result of the higher percentage of microceramic filler along the matrix. The lower initial surface gloss of BreCAM.HIPC may be attributed to the absence of a glassy phase in the PMMA matrix [19]. Its low surface hardness before the application of TF may be due to the lower filler content (20%) in comparison to the other tested types of direct resin composites. In a comparative study, artificial aging was performed to investigate the mechanical properties of dental LT Clear Resin after polishing [53]. It was found that artificial aging decreases the compression and tensile strength. Moreover, the study conducted by Paradowska-Stolarz et al. [54] showed that BioMed Amber created using 3D printing is the most stable material regarding fractal dimension and texture analysis. Gerhardt et al. found that the size and content of the fillers affected the surface gloss; they determined that a smaller filler size led to a higher gloss [55]. Batista et al. concluded that nanohybrid composites displayed higher gloss values than ormocer-based resin composites [56]. Zovko et al. showed that ormocer-based nanohybrid resin composites exhibited stable gloss after exposure to an acidic beverage [9]. The limitations of this study include that the in vitro conditions do not exactly replicate the clinical situation; the washing effect of natural saliva movement and mouth temperature may affect the results. In addition, differences in the hydrophilic properties and permeability of each type of resin composite should be considered. Additionally, examination of the specimens over a short period of time and the lack of a surface imaging investigation represent further limitations. Not taking into consideration the use of different artificial ageing solutions and thermocycling is another limitation of this study.
Further investigations are recommended to evaluate the effect of topical fluoride applied in several doses over longer periods rather than as a single application. It is advised to perform further studies on other formulations of topical fluoride and other types of tooth-colored restorative materials. Moreover, it is suggested to examine other properties such as surface roughness, wear, and color stability. Furthermore, further investigation is recommended to study the effect of artificial ageing using different storage solutions under thermocycling, taking thermal changes into consideration. Conclusions Gloss retention and surface hardness appear to be influenced by the brand and type of resin composite. After a single application of Bifluorid 10, nanohybrid, bulk-fill flowable, and indirect CAD/CAM resin composites provide hardness stability, but all tested groups exhibit surface gloss alterations. Figure 1. Distribution of groups according to resin composite type, topical fluoride application (TF), surface gloss, and hardness. Table 1. The manufacturer's information for the materials used in the study. Table 2. Mean surface gloss values of the various materials without and with TF application. * Indicates a statistically significant difference (p-value < 0.05). Means with different small letters in the same row and means with different capital Roman numerals in the same column both demonstrate a significant difference. Table 3. Mean surface hardness (VHN) of the various materials without and with TF application. * Indicates a statistically significant difference (p-value < 0.05). Means with different small letters in the same row and means with different capital Roman numerals in the same column both demonstrate a significant difference.
6,409.2
2024-02-03T00:00:00.000
[ "Medicine", "Materials Science" ]
Investigation of Roughness Correlation in Polymer Brushes via X-ray Scattering Thin polymer films and coatings are used to tailor the properties of surfaces in various applications such as protection against corrosion, biochemical functionalities or electronic resistors. Polymer brushes are a certain kind of thin polymer films, where polymer chains are covalently grafted to a substrate and straighten up to form a brush structure. Here we report on differences and similarities between polymer brushes and spin-coated polymer films from polystyrene and polymethyl methacrylate with special emphasis on surface roughness and roughness correlation. The phenomenon of roughness correlation or conformality describes the replication of the roughness profile from the substrate surface to the polymer surface. It is of high interest for polymer physics of brush layers as well as applications, in which a homogeneous polymer layer thickness is required. We demonstrate that spin-coated films as well as polymer brushes show roughness correlation, but in contrast to spin-coated films, the correlation in brushes is stable to solvent vapor annealing. Roughness correlation is therefore an intrinsic property of polymer brushes. Introduction Thin polymer films are of high interest in various applications and disciplines, such as electronics, biomedicals or functional coatings [1,2]. One of the most fundamental properties of such films is the surface roughness [3]. As no surface is ideally flat, height deviations appear, giving the surface a certain structure and roughness profile. Usually, these deviations are described with the Root Mean Square (RMS) roughness, which averages the height deviation from a mean level along a sampling length. A statistical analysis of height irregularities is commonly performed on micro-to nanometer length scales with Atomic Force Microscopy (AFM) or with optical techniques, such as X-Ray Reflectivity (XRR) [3][4][5]. If thin polymer films are coated on a substrate, two roughness profiles exist, namely the silicon-polymer-interface and the top polymer surface. AFM is only useful for measurements of the polymer surface roughness. X-rays however penetrate the polymer layer and therefore XRR also characterizes underlying interfaces, such as the film-substrate-interface [4,5]. Roughness studies of those systems have extensively been performed and reported in literature with the mentioned methods [3,6]. In this report we intend to analyze the phenomenon of roughness correlation of polymer thin films. Depending on the preparation procedure and film thickness the geometries of both interfaces are not necessarily independent from each other. For example in conformal films, the roughness profile from one interface is copied to the overlaying surface, resulting in correlated roughness profiles and consequently a constant layer thickness of the polymer film ( Figure 1) [7,8]. While non-conformal films can be described with an average layer thickness, conformal films show a locally defined layer thickness. If two interfaces are correlated, scattered X-rays of both interfaces undergo constructive interference and additional oscillations (intensity maxima) appear. These signals can theoretically be characterized with XRR, but the intensity is in phase with the oscillations of the reflectivity curve [9][10][11]. Therefore, a simple modeling of XRR curves using a Parratt formalism will lead to erroneous results, especially by modeling interfacial roughnesses, which are systematically too low. 
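As a minimal illustration of the RMS roughness definition used throughout this paper (the root of the mean squared height deviation from the mean level over the sampled area), the short sketch below computes it from a height map; the array here is synthetic placeholder data, not an actual AFM measurement from this study.

```python
import numpy as np

# Minimal sketch of the RMS roughness definition used above:
# the root of the mean squared height deviation from the mean level.
def rms_roughness(height_map_nm: np.ndarray) -> float:
    z = height_map_nm - height_map_nm.mean()
    return float(np.sqrt(np.mean(z ** 2)))

# Placeholder "AFM image": 512 x 512 heights with ~0.2 nm scatter (purely synthetic)
rng = np.random.default_rng(0)
demo = rng.normal(0.0, 0.2, size=(512, 512))
print(f"RMS roughness: {rms_roughness(demo):.2f} nm")
```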
In addition to XRR experiments another technique, like Grazing Incidence Small Angle X-ray Scattering (GISAXS) is required, to verify if two interfaces are correlated [8,12,13]. GISAXS is a powerful tool for structural characterization of soft-matter thin films. In GISAXS experiments a 2D detector allows the study of off-specular and non-specular scattering in reciprocal space, so that lateral structures (y-direction) of thin films and structures perpendicular to the substrate surface (z-direction) can be explored in respect to the X-ray beam (x-direction). Coordinates in y-and z-direction in real space correspond to wave vector coordinates q y and q z in reciprocal space. Unlike specular XRR measurements, where the incident angle changes during the measurement, GISAXS commonly may be carried out at a constant incident angle, which is close to the critical angle of the material, to balance a defined penetration of X-rays into the sample and get optimal scattering intensity [12,13]. A schematic illustration of a GISAXS setup is shown in Figure 2. . Schematic illustration of a GISAXS setup with X-rays hitting the sample under a grazing incident angle α i . The direct beam transmits the sample (primary beam on the 2D-detector), X-rays are reflected (specular peak) and scattered. At α i + α c the so called Yoneda peak appears. In a prototypical GISAXS detector image three intensity maxima are observed: The direct beam, which penetrates through the sample, the specular peak, related to the total reflection of the beam and in between a third scattering effect, at an angle equal to the sum of the incident angle and the critical angle (α i + α c ). This third peak is called Yoneda peak [14]. In correlated films, periodical oscillations in q z -direction occur in the scattering images between the specular and Yoneda peak, due to lateral correlations of the scattered X-rays from the substrate/polymer interface and the free polymer surface. Whereas uncorrelated films do not show any roughness related signals between Yoneda region and specular peak. This phenomenon of roughness correlation has been shown with GISAXS for liquid films on substrates by Tidswell and for spin-coated polymer thin films by Gutmann, Stamm and Müller-Buschbaum [7,[15][16][17][18][19]. Here we demonstrate roughness correlation of spin-coated polymer films, as well as polymer brushes and show that roughness correlation is an intrinsic property of brush systems. Polymer brushes with correlated roughness to the substrate (conformal brushes) are promising polymer films, for applications, in which stable and homogeneous polymer layers with a locally defined thickness are required, for example organic light emitting diodes. In contrast to spin-coated films, polymer brushes are covalently bond to the substrate surface. Different internal structures of grafted polymers have theoretically been described by Alexander and De Gennes [20][21][22][23][24]. A low number of chains per surface area (grafting density) results in a 'pancake' structure, as no interaction between the chains occur. With a higher density of chains within a certain area, repulsive forces become important and the chains straighten up from the surface, losing entropy. 'Mushroom' and brush structures are consequently obtained at high grafting densities ( Figure 3) [25][26][27]. In polymer brushes the chains are highly stretched and the layer thickness is therefore directly proportional to their molecular weight. 
For highly uniform brush systems this implies a direct interfacial correlation and enables an investigation of structural transitions in a brush layer via GISAXS experiments. In this paper the synthesis of polymer brushes is based on a grafting-from approach, in which an initiator for controlled radical polymerization is immobilized on a silicon substrate as Self-Assembled Monolayer (SAM). With Surface Initiated-Atom Transfer Radical Polymerization (SI-ATRP) polymer brushes are synthesized directly on the surface with low polydispersities and high grafting densities [28]. Silicon substrates are used, as they show low surfaces roughness and have extensively been studied with respect to the synthesis of brushes [1,2]. Analogous to the spin-coated films, polymer brushes also exhibit roughness correlation, which has been indicated by Akgun et al. and Kim et al. [29,30]. In this report we compare the surface roughness and roughness correlation of spin-coated polymer films and polymer brushes on silicon substrates, using AFM, XRR and GISAXS as methods for characterization. While AFM and XRR are basically used to characterize the surface structures, RMS roughness and layer thicknesses of all films, GISAXS is the only method to prove roughness correlation from non-specular scattering effects. Immobilization of APDMES on Silicon Wafer A silicon wafer disk was cut into samples of 2 × 2 cm 2 , immersed in ethanol and placed in an ultrasonic bath for 20 min. Afterwards every wafer was thoroughly rinsed with filtered Mili-Q-water and placed in a solution of H 2 SO 4 , H 2 O 2 , and Mili-Q-water in ratio of 15:5:3 for 20 min, before being rinsed with water again. By this procedure, the wafers were activated with hydroxyl groups. The immobilization of APDMES was performed via vapor deposition. After drying with argon, the samples were placed next to a small vial, containing APDMES in a vacuum oven, which was evaporated for 2 h. The vial was removed and the oven was heated up to 110 • C for another 2 h, to achieve a complete covalent bond of APDMES with the hydroxyl groups on the silicon surface. Redundant APDMES was removed from the surface by extraction with DCM in a Soxhlet extractor. Synthesis of 2-Bromo-2-methyl-N-[3-(dimethylsilylethoxy)propyl] on Silicon Substrates In a round-bottom-flask with magnetic stir bar, DMAP (15.6 mg, 0.13 mmol) was dissolved in 20 mL acetonitrile, while purging the solution with argon. A few minutes later BIBB (247 µL, 1.61 mmol) and TEA were added and stirred for 10 min. While stirring, every APDMES-functionalized wafer was placed in a screw-top vial with a septum and deoxygenated with an argon stream. To each vial a few milliliters of the solution were added, to cover the surface with the liquid. All vials were shaken on a shaker for 3 h, before the wafers were removed and rinsed with ACN and extracted with DCM in a Soxhlet extractor. Synthesis of PMMA Brushes Polymethyl methacrylate (PMMA) brushes were synthesized via grafting from approach and ARGET ATRP on functionalized wafers. In a two-neck-round-bottom-flask with magnetic stir bar, 16 mL water and 8 mL methanol were purged with argon for 15 min, before adding inhibitor-free MMA (20 mL, 0.19 mol). The catalyst CuBr 2 (7.4 mg, 0.03 mmol) with PMDETA (100 µL, 0.48 mol) as ligand and sodium ascorbate (65.3 mg, 0.33 mol) as reducing agent were added. Once all reactants were dissolved, the solution was poured over the functionalized wafers in an oxygen free vial, sealed with a rubber septum. 
After 20 min the polymerization was stopped and all samples were cleaned with DCM in Soxhlet extractor. The grafting density of 0.93 nm −2 of PMMA brushes was estimated via dry layer thickness approach with the ellipsometric layer thickness and molecular weight of PMMA, which was polymerized in the same solution with sacrificial initiator and analyzed with GPC. Synthesis of PS Brushes and PMMA-b-PS Brushes The synthesis of PS brushes was done in a Schlenk tube, containing the functionalized wafers. To assure oxygen free conditions the tube was sealed and purged with argon for 20 min. Anisole (10 mL, 0.09 mol) and styrene (10 mL, 0.09 mol) were added to another tube under argon counter flow and further flushed with inert gas for 15 min. CuBr (13.4 mg, 0.09 mol) was dissolved and the catalytic copper complex was formed with PMDETA (19.6 µL, 0,09 mmol). After dissolving the catalyst completely, three freeze-pump-thaw cycles were performed and the solution was added to the Schlenk tube containing the wafers. While stirring the solution was heated up to 30 • C an let react overnight. After polymerization, all samples were purified with DCM. PMMA-b-PS copolymer brushes were synthesized by simply following the two procedures mentioned before but performing the synthesis for the PS part immediately after the cleaning of PMMA brushes with DCM. After ARGET ATRP of MMA, the brushes still have active bromide end groups, which can be used to polymerize styrene. The number of ATRP active PMMA chains per surface area could not be analyzed. With our procedure to synthesize PMMA brushes we were able to prepare brushes with different layer thicknesses up to 150 nm, even with stopping and re-initiating the polymerization. Therefore, we assume that all PMMA chains still have active bromide end groups, which work as immobilized macro initiator for the polymerization of PS for PMMA-b-PS diblock copolymer brushes. Spin-Coating Procedure for Polymer Thin Films The spin-coating procedure was adapted from Schubert et al., using PS (10 g/L, M n = 130,520 g/mol) and toluene as solvent [31]. A wafer was adjusted in the sample holder of the spin-coater (Laurell, model WS-650MZ-23npp) and with a syringe 100 µL of the polymer solution were flushed rapidly on the sample, while spinning at 2000 rpm. After another 30 s spinning at the same speed, solvent vapor annealing was done for a few samples. For annealing of the spin coated films, a vial with THF was placed next to the samples in a sealed container for 24 h. Film Thickness Determination with Ellipsometry The dry layer thickness of every polymer film was measured with an Optrel multiscope ellipsometer using a wavelength of 632.8 nm at an incident angle of 60 • . To simulate a sample layer model, refractive indices of 3.885 for Si, 1.461 for SiO 2 , 1.49 for PMMA and 1.5916 for PS were assumed. AFM Characterization All AFM measurements were performed with an Agilent Technologies 5500 SPM device in tapping mode. Cantilevers by Micromesh, type HQ:NSC14/AL BS were used. Images were recorded at a scanning speed of 0.5 ln/s with 2048 ln within a 3 × 3 µm 2 area. Image processing and analysis were done using the Gwyddion software, version 2.57, to calculate the RMS roughness and the radially averaged Power Spectral Density (PSD) over the whole imaged area. XRR Measurements XRR measurements were performed at a Bruker D8 device at the Forschungszentrum Jülich with a Cu-anode lab source and a wavelength of 1.54 Å. 
Every sample was measured for 2 h, in which the incident angle was changed from 0 to 3° within 2 min. The vertical beam size was 0.2 mm. Further processing and analysis were done with the Parratt32 software, which is based on the Parratt algorithm, to determine layer thickness and RMS roughness [32]. GISAXS Measurements The GISAXS measurements shown in Figures 11 and 13 were performed at the GALAXI beamline at Forschungszentrum Jülich with a Ga Metaljet source (Kα radiation, photon energy E = 9243 eV, wavelength 1.34 Å), a sample-detector distance of 3530 mm and an incident angle of 0.7°. A Pilatus 1M 2D detector was used with a fully evacuated flight path from source to detector. To compensate for gaps between detector modules in the detector images and line cuts, the detector was moved five times after 12 min of irradiation for each position, at a total exposure time of 1 h. Further information about the beamline can be found in the literature [33]. All measurements were performed at an incident angle higher than the critical angles of the studied polymers (α i = 0.7°), where the critical angle of PMMA at this wavelength is much lower (α c = 0.14°). With this setup the beam penetrates the whole sample depth, from the top polymer surface to the substrate-polymer interface, giving a large separation of the Yoneda peak and the specular peak on the detector. Further GISAXS experiments ( Figure 12) were performed on a Xenocs Xeuss 3.0 lab system with a Cu anode (1.54 Å). The sample-detector distance was 1100 mm and the incident angle α i = 0.7°, with 1 h exposure time. Results and Discussion The phenomenon of roughness correlation has been shown for liquid films on surfaces, spin-coated thin films and polymer blends. For polymer brushes it has only been hinted at. In their studies on spin-coated PS films, Müller-Buschbaum et al. observed a correlated film after spin-coating, which lost all correlations after heating above the glass transition temperature [16][17][18]. During this annealing, the temperature increases the mobility of the polymer and the film relaxes from the nonequilibrium state after spin-coating to an equilibrium stable state. In contrast to spin-coated films, the roughness correlation present in polymer brushes is stable to annealing at high temperatures, which was shown by unpublished results from Ochsmann and by Akgun et al. [29,34]. However, a detailed analysis and comparison between both polymer systems has not been published yet. Therefore we concentrate on the similarities and differences in topography, surface roughness and roughness correlation between spin-coated films and polymer brushes and their behavior with respect to annealing. Various samples were prepared, which are shown in Figure 4. As PMMA and PS are standard polymers, which have also been extensively studied in brush systems, both were chosen as materials for these experiments. Next to spin-coated PS films and PMMA and PMMA-b-PS brushes, PS was also spin-coated on top of PMMA brushes, to investigate the roughness correlation of polymer multilayers and prove the persistence of correlated roughness in polymer brushes. Furthermore, samples with a spin-coated film were characterized with and without solvent vapor annealing. An annealing of polymer brushes was not carried out, as the brushes had been cleaned with boiling DCM in a Soxhlet extractor, harsher conditions than vapor annealing with THF. Consequently, if polymer brushes show roughness correlation, this property is seen as intrinsic and stable with respect to annealing.
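For orientation, the sketch below shows one common convention for converting the incidence and exit angles of such a grazing-incidence setup into the reciprocal-space coordinates q_y and q_z discussed above; the wavelength and incident angle are taken from the Cu-anode setup described in this section, the exit angles are arbitrary illustration values, and the exact convention used by the authors' reduction software is not specified here.

```python
import numpy as np

# Standard small-angle conversion from scattering angles to reciprocal-space
# coordinates (one common GISAXS convention; the authors' reduction software
# may differ in details). alpha_i is the incident angle, alpha_f the exit
# angle, psi the in-plane scattering angle; q is returned in 1/nm for a
# wavelength given in nm.
def gisaxs_q(alpha_i_deg, alpha_f_deg, psi_deg, wavelength_nm):
    k = 2.0 * np.pi / wavelength_nm
    ai, af, psi = np.radians([alpha_i_deg, alpha_f_deg, psi_deg])
    q_y = k * np.sin(psi) * np.cos(af)
    q_z = k * (np.sin(ai) + np.sin(af))
    return q_y, q_z

# Example with the Cu-anode setup quoted above (wavelength 1.54 A = 0.154 nm,
# incident angle 0.7 deg); the exit angles are illustration values only.
print(gisaxs_q(0.7, 0.7, 0.0, 0.154))    # specular position: q_y = 0
print(gisaxs_q(0.7, 0.14, 0.0, 0.154))   # exit near a typical polymer critical angle (Yoneda region)
```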
AFM Analysis of the Surface Structure As commonly used method to characterize polymer thin films, AFM measurements were performed for an untreated silicon wafer and every sample from Figure 4. With these measurements we can examine differences between spin-coated films and brushes in surface structure and show that roughness correlation can not be verified via AFM. Furthermore, ellipsometry results give information about the layer thickness, but also about the type of polymer, as PMMA and PS have different refractive indices. In contrast to spin-coated films, polymer brushes can not be washed off from silicon substrates with solvents, due to their covalent bond to the silicon surface. We can therefore distinguish between both polymers and verify the successful synthesis of polymer brushes. Figure 5 shows that in comparison to a bare substrate, the RMS roughness of spin-coated films with and without annealing is not significantly increased, only 0.21 nm and 0.25 nm, compared to 0.11 nm for the bare substrate. The ellipsometric layer thicknesses of both films (46.1 nm and 48.9 nm) differs by 2.8 nm, which is a deviation from the spin-coating process and cannot be related to the annealing. Comparing the topography images obtained from AFM measurements of PS spin-coated films with and without annealing, very small changes can be observed, as the lateral size and depth of deeper areas increases with annealing. In comparison to the bare substrates an additional lateral structure emerges in the spin-coated systems. This structure manifests in a waviness and is already present before annealing. The associated lateral length is not represented in the RMS values. In order to analyze this length qualitatively we calculate a radially averaged power spectral density (PSD, Figure 6) from the AFM images. In the PSD only minor differences between samples with and without annealing are visible, but both curves exhibit maxima at larger length scales, additionally to the same peak position for smaller length scales, as the blank wafer. A change of distances between higher domains in the topography is also shown by the peak shift of the PSD to larger length scales. The polymer brush systems also exhibit additional lateral structures. PMMA brushes and spin-coated PS films on top of silicon substrates and PMMA brushes have similar AFM results, regarding topography and RMS roughness (Figure 7). Furthermore PSD functions for these systems are also similar ( Figure 6). The dry layer thicknesses of the PMMA brushes and the PMMA layers underneath the spin-coated PS films are between 45.8 nm and 47.4 nm. Due to a low miscibility of both polymers, we assume that no significant diffusion of PS into the brush layer occurs. Therefore two bilayer systems with dry thicknesses of 90.9 nm for the sample without annealing and 91.5 nm for the sample with annealing are obtained. Diblock copolymer brushes are significantly different in RMS roughness (1.68 nm) and surface topography. In the topography image small domains are visible, caused by PS dimples on top of PMMA brushes, which have a diameter of approximately 70 nm. The miscibility of PS and PMMA is relatively low and the PS blocks will therefore form coils on top of PMMA brushes, to reduce the interaction with PMMA brushes underneath and with air. We assume that the dimples are formed by a few PS chains, which accumulate with stretched chains in the middle surrounded by chains, which bend to the center of the dimple. 
This behavior of avoidance of diblock copolymer brushes and binary mixed brushes has already been shown in literature and explains the additional maximum in the PSD at a length scale of 70 nm in Figure 6 [35]. A gaussian fit for this peak gives an average diameter of 69.8 ± 2.8 nm for the PS domains. Ellipsometry measurements were performed after synthesis of PMMA brushes and after synthesizing the PS block, in which layer thicknesses of 47.1 nm for PMMA layer and 15.3 nm for PS brushes were recorded. The combination of the diameter of 69.8 ± 2.8 nm and the layer thickness of 15.3 nm (height of the PS domains) indicates an oval shape of the PS domains ( Figure 8). AFM images the surface topography without any information about the underlying interface. Theoretically, a certain position on an uncoated silicon wafer could be measured with AFM and compared with measurements at the same position after coating, but finding the exact same position with an area of interest of 3 × 3 µm 2 is impossible. Therefore, X-ray studies are required, enabling the simultaneous analysis of all interfaces and the top surface as well as their correlation. A lateral characterization via AFM in combination with ellipsometric layer thickness measurements is not suited for an investigation of roughness correlation. XRR Experiments on Polymer Thin Films From XRR studies of all polymer thin films, RMS roughness and layer thicknesses are determined and compared to AFM measurements. The reflectivity curves in Figure 9 clearly show the expected dependence of 2π/∆q of Kiesig fringes width and layer thickness. While oscillations are clearly observed in spin-coated films on a silicon wafer and on PMMA brushes before annealing, a damping due to increasing roughness is present after annealing. Figure 9. X-ray reflectivity curves of polymer thin films. Measuring points are displayed as dots and fits are presented as solid lines. Note that the measurement for the spin-coated PS film after annealing could not be fitted, as the sample could not be adjusted properly. Via curve fitting procedure, scattering length density (SLD), dry layer thickness (d h ) and RMS roughness (σ) could be estimated. The PS film on top of the PMMA brushes is expected to be uncorrelated after annealing, while the brush layer is still correlated. Accordingly, two different roughness profiles are stacked in a bilayer system, which explains the increase in RMS roughness after annealing. The X-ray reflectivity curve of the spin-coated PS film after annealing could not be fitted, which may be related to problems in sample alignment and intensity normalization due to an irregular sample shape. Therefore, for the spin-coated PS film only the layer thickness is estimated from the distance between two minima of Kiesig fringes and RMS roughness is not discussed further. For both multilayer samples, two modulations appear with different amplitudes, which is typical for multilayer systems. This confirms our hypothesis from AFM measurements of a multilayered system without diffusion of PS into the brush layer. While XRR results of PMMA brushes are similar to the spin-coated PS film, regarding RMS roughness, the copolymer brushes again show a much higher roughness, caused by the PS domains. In Figure 10 the dry thicknesses (measured using ellipsometry and XRR) and RMS roughnesses of all samples (measured with AFM and XRR) are shown. The layer thickness values measured with both methods are in good agreement for all films and differ only by a few nanometers. 
However, RMS roughness values from XRR are several times higher than values from AFM. One reason for the differences certainly is the area of measurement in both experiments. AFM measurements were taken within an area of 3 × 3 µm 2 , whereas XRR gives statistics of the sample surface in centimeter-range. The main reason is the correlated roughness. As mentioned in the introduction, the oscillations caused by interference effects of two conformal roughness profiles are in phase with Kiesig fringes, increasing their intensity. At this point the Parratt formalism is no longer valid, as it underestimates the RMS roughness. If the roughness profiles of substrate and top surface are correlated, non-specular scattering contributes to specular reflection, as both oscillations are in phase, increasing the intensity of modulations. Holy and Baumbach postulated this coherence, considering the distorted wave born approximation in reflectivity measurements [9][10][11]. Thus from the reflectivity curves in Figure 9 only indirect indication for roughness correlations may be obtained. GISAXS results are consequently discussed in the following part. GISAXS Studies on Polymer Thin Films In the 2D GISAXS measurements of the spin-coated PS films (Figure 11), intensity maxima are observed between Yoneda peak and specular peak [8,15]. With a detector cut (line cut profile) in q z -direction, intensity is plotted versus wave vector values, in order to visualize the oscillations more clearly. After solvent vapor annealing, these oscillations disappear, as the roughness is no longer correlated and the scattering of both interfaces are independent. Müller-Buschbaum and Stamm attributed the loss of correlation to a relaxation of the polymer film into an equilibrium state [19]. Here annealing was done via evaporated solvents instead of heating above glass transition temperature, but the effect on roughness correlation is identical. In Figure 11 detector images and line cuts of polymer brush systems are shown, in which intensity oscillations are also present. Compared to spin-coated films, polymer brushes show a persistent roughness correlation, which is stable to any annealing process. Other polymers than PMMA show the same behavior, as to be seen in the results for PMMA-b-PS brushes and for homopolymer PS brushes (Supporting Information). Roughness correlation is therefore an intrinsic property of polymer brushes, independent from the type of monomer. As further evidence that oscillations in detector cuts are directly linked to roughness correlation and layer thickness of the polymer layer, different layer thicknesses of PMMA brushes were analyzed with ellipsometry and GISAXS. These scattering experiments were conducted at a Xenocs Xeuss 3.0 system. All detector scans in Figure 12 indicate roughness correlation. The oscillation widths shrink from the 74.8 nm brush layer over 47.3 nm to 30.5 nm and show the expected inversely proportional relation of layer thickness and distance between two minima in the line profiles (∆q z ). Although the block-copolymer brush system shows small domains of collapsed PS chains on top of PMMA blocks, conformality between the surface of the PS domains and the silicon substrate is observed as the width of oscillations in the detector cut matches the thickness of PMMA-b-PS brush layer ( Figure 13). As an additional proof, that the oscillations are caused by roughness correlation of spin-coated films and brushes, PS films on PMMA brushes are analyzed before and after annealing. 
Before annealing two different modulations are visible in q z -line cut, namely the scattering from the PMMA brushes and PS film ( Figure 11). This is in an agreement with our assumption from AFM and XRR results that no diffusion of PS into the brush layer takes place. This assumption is also supported by the Flory-Huggins interaction parameter of PMMA and PS [36,37]. From literature an interaction parameter of is known [38]. Combined with the degree of polymerization N for both polymers, the segregation parameter χN can be calculated as 44 for our system. This clearly exceeds the segregation limit of χN ≈ 10 [36,37,39]. Therefore, the existence of conformal polymer multilayers is proven with two different polymer systems, namely brushes and spin-coated films. Again, the oscillations between Yoneda and specular peak disappear after annealing. At q z positions of 0.75 nm −1 and 0.76 nm −1 in the multilayer system with annealing, oscillations are still present related to the PMMA brush layer located under the PS film (Figure 11 bottom right). The conformality of PMMA is therefore independent from the covering PS layer. From these results we can conclude, that polymer brushes, as well as spin-coated films and their multilayer-combinations are able to copy structure information from the substrates to the free film interface. A more detailed view of the lateral extend of these correlations is possible through a more detailed analysis of the lateral cut-off length [15,18]. In correlated polymer thin films, not all structure features of the substrate are replicated to the top surface structure, so that especially small-scale structures get lost during transfer to the upper surface. With the lateral cut-off length Λ c the smallest lateral structure size transferred to the top surface can be estimated. To determine the Λ c values of correlated films, several q z line cuts along q y are extracted from the 2D scattering image. As a function of increasing q y , a decay of modulations along q z is observed. The absolute lateral cutoff length is defined as where ∆q cor is the first in-plane wave vector at which all modulations are vanished [15,18]. The pixel position of the beam center is defined as q y value of 0. For a better signal noise ratio, 4 pixels in q y -direction are summed up to one line cut. Furthermore, every line cut was smoothed by calculating the median of three values to one value. This procedure gives a clear view on the oscillations. As modulations disappear after annealing, Λ c could only be determined for samples without annealing procedure. The position, where the beam center of the direct beam hits the 2D detector is defined as a wave vector of q y = 0 nm −1 . Line cuts along q z will therefore be taken at higher pixel values. For example, at a q y wave vector of 0.03 nm −1 modulations of the spin-coated PS film disappear, which corresponds to a lateral cutoff length of 207 nm ( Figure 14). Figure 13. GISAXS line cuts in q z -direction of PMMA brushes and PMMA-b-PS diblock copolymer brushes. The distance between two oscillation minima in the PMMA brushes (∆q z ) is higher than in copolymer brushes, which is an indicator for a lower layer thickness of PMMA brushes. In Figure 15 all Λ c values are compared (individual line cuts can be found in he SI). The lateral cut-off lengths for the spin-coated PS film on the substrate and on top of PMMA brushes are equal. Thus the underlying PMMA brush layer seems to behave like a solid substrate, comparable to the silicon wafer. 
This indicates that in the spin-coated PS films Λ c is determined by the hydrodynamics during coating. For PMMA brushes, the smallest lateral structure length, which is replicated to the polymer surface is around 123 nm. Compared to the spin-coated films, polymer brushes seem to copy smaller length scales of the roughness profile. PMMA-b-PS brushes have the lowest lateral cutoff of 60 nm. This is in good agreement with the size of the PS aggregates of 69.8 ± 2.8 nm, indicating that the flat tops of PS dimples (see Figure 8) are also correlated with the conformal PMMA brush surface. From the silicon surface to the top PS layer, especially the larger length scale waviness information get lost. Figure 14. Line cuts in q z -direction as a function of q y , to determine the lateral cutoff length for roughness correlation of a spin-coated PS film. Intensity is plotted versus q z vector values, starting from q y = 0 nm −1 . At q y = 0.03 nm −1 modulations are no longer present, roughness is no longer correlated. Note that curves have been shifted for clarity. Summary In this report spin-coated polymer films, polymer brushes and multilayers of both polymer systems were analyzed with AFM, ellipsometry, XRR and GISAXS to compare them in regards of surface structure and roughness correlation. We demonstrated the necessity of non-specular scattering experiments to prove roughness correlation of these polymer thin films. While solvent vapor annealing of spin-coated films led to a loss of interfacial correlation, polymer brushes proved stable to solvent annealing processes. We can therefore conclude that roughness correlation is an intrinsic property of polymer brushes. Supplementary Materials: The following are available at http://www.mdpi.com/2073-4360/12/9/2101/s1, Figure S1: Detector image and qz line cut of PS brushes, to prove roughness correlation, Figure S2: Determination of the lateral cutoff length c of PMMA brushes (left) and PS brushes (right) via qz line cuts as a function of qy. All curves are shifted for better visibility and represent the mean value of scattering intensities of four pixels with additional smoothing afterwards. Modulations origin from roughness correlation disappear at qy = 0.051 for PMMA and qy = 0.065 for PS, Figure S3: Determination of the lateral cutoff length c of PMMA-b-PS diblock copolymer brushes (left) and PMMA brushes with a spin-coated PS film on top (right) via qz line cuts as a function of qy. All curves are shifted for better visibility and represent the mean value of scattering intensities of four pixels with additional smoothing afterwards. Modulations origin from roughness correlation disappear at qy = 0.106 for copolymer brushes and qy = 0.030 for the PMMA-PS multilayer system.
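The cutoff lengths quoted in the text and in the Supplementary Materials are related to the in-plane wave vector at which the correlation modulations vanish by Λ_c = 2π/Δq_cor. A quick numerical check with the q_y values listed above (assuming they are given in nm⁻¹) reproduces the lengths discussed in the text, with small differences due to rounding:

```python
import numpy as np

# Lateral cutoff length from the in-plane wave vector at which the
# correlation modulations vanish: Lambda_c = 2*pi / q_cor (q in 1/nm).
q_cor = {
    "spin-coated PS film / PS on PMMA brushes": 0.030,
    "PMMA brushes": 0.051,
    "PS brushes": 0.065,
    "PMMA-b-PS brushes": 0.106,
}
for sample, q in q_cor.items():
    print(f"{sample}: {2 * np.pi / q:.0f} nm")
# -> roughly 209, 123, 97 and 59 nm, matching the cutoff lengths discussed above
```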
7,673.4
2020-09-01T00:00:00.000
[ "Materials Science" ]
APPLICATION AND PERFORMANCE ANALYSIS OF A NEW BUNDLE ADJUSTMENT MODEL As the basis for photogrammetry, Bundle Adjustment (BA) can restore the pose of cameras accurately, reconstruct the 3D models of environment, and serve as the criterion of digital production. For the classical nonlinear optimization of BA model based on the Euclidean coordinate, it suffers the problem of being seriously dependent on the initial values, making it unable to converge fast or converge to a global minimum. This paper first introduces a new BA model based on parallax angle feature parametrization, and then analyses the applications and performance of the model used in photogrammetry field. To discuss the impact and the performance of the model (especially in aerial photogrammetry), experiments using two aerial datasets under different initial values were conducted. The experiment results are better than some well-known software packages of BA, and the simulation results illustrate the stability of the new model than the normal BA under the Euclidean coordinate. In all, the new BA model shows promising applications in faster and more efficient aerial photogrammetry with good convergence and fast convergence speed. INTRODUCTION As entering the digital age, photogrammetry has been widely used in nearly all walks of life.The bundle adjustment (BA) model serves as one of the most effective methods for the modern high precision measurement positioning.In photogrammetry, the BA model is one of the most rigorous aerial triangulation methods, which can optimize the posture of the cameras and the three-dimensional coordinates of encrypted dots.This model is also important in generating the digital products and has been applied in many fields (Chaturvedi, 2000;Mikhail et al., 2001;Sauerbier, 2004). The classic method of the BA model expresses three-dimensional points under the Euclidean coordinate system, and takes the collinear equation as the optimized adjustment equation (Ackermann, 1984).However, a large number of experiments show that this representation method is only effective for close feature points.If the feature points in the environment are too far in distance to access large parallax from the image, great errors and uncertainties of these feature points may emerge in the depth direction, which will cause algorithm divergence.Actually, this situation is ubiquitous in the real world, because the three axes of the Euclidean coordinate system share the same data "dimension" (Yuan, 2009).When they have incremental values in the same order of magnitude, the relative increment value of Z-axis is far less than the lateral relative increment, which can cause the relative error of Z-axis parameters too tiny compared to the lateral ones.In other words, although the three-dimensional relative error matrix is full rank and irrelevant, it still has weak irrelevance due to the tiny value of Z-axis.If an effective operation is to be undertaken, the Z-axis parameters need to be amplified to adapt to the lateral parameters, which can cause the amplification of the error of the Z-axis at the same time. 
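To make the depth-sensitivity argument above concrete, the sketch below uses the standard pinhole (collinearity-type) projection that underlies classical Euclidean-coordinate BA and shows numerically that a large depth change of a distant point moves its image by only a small fraction of a pixel, whereas a comparable relative change for a near point moves it by several pixels. The camera parameters and point coordinates are arbitrary placeholder values, not data from this paper.

```python
import numpy as np

# Pinhole projection used in the classical (Euclidean-point) bundle adjustment:
# x = K [R | t] X. All numbers below are placeholders for illustration only.
def project(K, R, t, X):
    x_cam = R @ X + t                      # world point in the camera frame
    u, v, w = K @ x_cam
    return np.array([u / w, v / w])        # image coordinates in pixels

K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)

# A distant point: a large change in depth moves the image point by only a
# tiny fraction of a pixel, so its depth is poorly constrained by the image.
far1 = project(K, R, t, np.array([1.0, 1.0, 5000.0]))
far2 = project(K, R, t, np.array([1.0, 1.0, 6000.0]))
near1 = project(K, R, t, np.array([1.0, 1.0, 50.0]))
near2 = project(K, R, t, np.array([1.0, 1.0, 60.0]))
print(np.linalg.norm(far1 - far2))    # ~0.07 px despite a 1000-unit depth change
print(np.linalg.norm(near1 - near2))  # ~7 px for the same relative change of a near point
```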
Since BA is a high-dimensional nonlinear optimization problem, in which multiple iterations are solved through linearization by Taylor series expansion, an appropriate initial value is necessary. The more accurate the initial value is, the greater the chance of convergence to the global optimum and the faster the convergence speed will be. On the contrary, an unreasonable initial value may lead to optimization problems such as falling into a local optimum or divergence, which requires more iterations. To alleviate the slow convergence of the BA algorithm based on the Euclidean coordinate system, a large number of ground control points or more accurate orientation information is usually used, which does not come directly from the cameras (Yuan et al., 2009). A differential GPS dynamic positioning technique has been successfully used to measure the instantaneous spatial location of the camera station and applied to aerial triangulation, which led to a major advance in aerial photogrammetry. However, the technique still has not removed the dependence on ground control points (Moré, 1978). In addition, many scholars have begun to try different numerical optimization algorithms such as the Gauss-Newton (GN) algorithm, the Levenberg-Marquardt (LM) algorithm and the conjugate gradient algorithm (Hartley and Zisserman, 2003; Wu et al., 2011). Some modern optimization methods have also been used in photogrammetry, with the development of artificial intelligence, to solve the problem. However, without changing the collinearity equation in Euclidean space, the problem of slow convergence could still not be solved essentially. To address the poor numerical matching between the elevation (depth) component and the lateral parameters caused by Euclidean coordinates, a new BA model based on parallax angle feature parametrization, ParallaxBA, has recently been proposed (Zhao et al., 2015). This model works without having to consider the dimensional imbalance between the elevation and the lateral parameters, and it reduces the relative error in the depth direction. In many cases, normal BA based on Euclidean XYZ feature parametrization will lead to an ill-conditioned equation and an objective function with very small gradients under certain conditions, while ParallaxBA can address these bottlenecks (Zhao et al., 2015). In this paper, the applications (especially in aerial photogrammetry) and the performance of the new BA model are discussed, and the convergence performance of the model is particularly analysed, since divergence and slow convergence have long been studied problems in aerial photogrammetry. Experiments and simulations are conducted as well to apply the new BA model and verify its performance.
METHODOLOGY In fact, ParallaxBA was proposed first in the field of computer vision instead of aerial photogrammetry (Zhao et al., 2015).This new BA model is merely a free-net BA model that cannot achieve geo-location for the triangulated tie points, so it fails in supporting some applications for location and mapping in photogrammetry (Yan et al., 2017).To apply it in the aerial photogrammetry, Ground Control Points (GCPs) need to be added into the ParallaxBA model, where tie points and GCPs are represented in the form of angles and XYZ, respectively (Yan et al., 2017).After GCPs being added into the BA functions, the ParallaxBA model can then be used to acquire the spatial threedimensional points and be applied in aerial triangulation.This section simply introduces the method in the new BA model, including the creation of the observation function and the solution of the functions. Observation Function To conduct BA, the observation functions need to be constructed where different coordinates can be used to represent the feature points.Aiming at accurately expressing the different types of feature points which are close far and almost no depth information in the three-dimensional space, the parallax angle is used to express the different types of feature points such as the feature points that are observed for only one time.The basic idea of the hybrid feature parameterization in the new BA model is to express the location of spatial 3D feature points using the azimuth angle, the elevation angle and the parallax angle through combining two camera centres as anchor, whereas the GCPs are still expressed by XYZ.As the representation of the GCPs is normal like in the classic BA, it will not be repeatedly introduced here.The following is the process of the representation in the form of angles. Suppose that the feature point is observed only once, and the camera centre which first observes the feature point is defined as the main anchor denoted as t m therefore, the feature could be expressed as: where Fj = feature point  j ,  j = azimuth angle, altitude angle [ j ,  j ] = vector direction the main anchor t m to the feature point Fj in the P0 global coordinate (shown as Figure 1) Figure 1.The parameterization of 3D feature points based on the parallax angle. For the parameterization of the first observed feature point, there is no depth information.If the feature point Fj is observed for twice or more than twice, two camera centres of them are selected as the anchors, wherein one is the main anchor denoted as t m , and the other is secondary anchor denoted as t a .Therefore, the feature point Fj in 3D can be expressed by azimuth, elevation angle and parallax angle  j : where  j = parallax angle Compared with the feature point observed for once, the additional parallax angle  j represents the angle from vector to vector . and are the vector from the main anchor t m to feature point Fj, and that from the secondary anchor t a to feature point Fj, respectively. 
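As an illustration of this parameterization, the following is a minimal sketch, based on the plain triangle geometry formed by the two anchors and the feature, of how a 3D feature can be recovered from the main anchor, the secondary anchor, the azimuth and elevation angles and the parallax angle via the sine rule. It is not the authors' exact implementation, and the axis conventions and variable names are mine.

```python
import numpy as np

# Illustrative reconstruction of a feature from the parallax-angle
# parameterization described above (main anchor t_m, secondary anchor t_a,
# azimuth phi, elevation theta, parallax angle omega). This follows the
# plain sine-rule geometry of the anchor triangle; the paper's own
# implementation may differ in conventions and axis definitions.
def feature_from_parallax(t_m, t_a, phi, theta, omega):
    # unit direction from the main anchor towards the feature (one convention)
    d = np.array([np.sin(phi) * np.cos(theta),
                  np.cos(phi) * np.cos(theta),
                  np.sin(theta)])
    baseline = t_a - t_m
    # angle at the main anchor between the baseline and the feature direction
    alpha = np.arccos(np.dot(baseline, d) / np.linalg.norm(baseline))
    # sine rule in the triangle (t_m, t_a, F): the angle at t_a is pi - omega - alpha
    depth = np.linalg.norm(baseline) * np.sin(omega + alpha) / np.sin(omega)
    return t_m + depth * d

t_m = np.zeros(3)
t_a = np.array([1.0, 0.0, 0.0])
print(feature_from_parallax(t_m, t_a, phi=np.radians(60), theta=np.radians(10),
                            omega=np.radians(5)))
```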
Supposing that the main anchor and secondary anchor of the feature point in 3D are t m and t a , respectively, the coordinate of the image point of the feature point Fj under the camera Pi can be expressed as: where u, v = image coordinate In this formula, , , where x, y, t = homogeneous image point coordinate K = intrinsic parameter matrix ( , , ) where Ri = rotation matrix of Pi, which represents the function of the Euler angle [ ] T The vector is the unit vector from the main anchor t m to the feature point Fj, which can be calculated by the following formula: sin cos ( , ) sin cos cos ̃ ( ≠ ) represents the standardized vector from the camera Pi to the feature point Fj. which can be calculated by the following formula: sin( ) sin (t t ) where = angle from vector t at m to the vector Therefore, can be calculated by the inner product of t at m and : arccos( ) In conclusion, the eq.( 4) to (8) are the observation equations of the least-square optimization of the new BA model. Solution of the Least-square Optimization The essence of the BA is just the least-square optimization, which can be in the function below: where () = observation function (formula (4)) [ , , , , , , , , ] , which contains all variables that need to be optimized , which is the observation variable to be optimized −1 = weight of the feature point When given the initial values of all the variables 0 , the solution of the optimization can be finally transformed to the solution of the following formula (Zhao et al., 2015): Where = first derivative of the observation values with respect to all the unknown variables To solve the nonlinear equation (Formula (10)), the Gauss-Newton method and the Levenberg-Marquart method are widely used in photogrammetry (Hartley and Zisserman, 2003).In this paper, these two solutions are also applied to verify the convergence performance of the new BA model.Since the applications and the performance of the BA model are focused and payed more attention here, the details and some other parts of the method are not introduced, and more can be referred from Zhao and Yan's work (Yan et al., 2017;Zhao et al., 2015). EXPERIMENT AND RESULT To discuss the applications (especially in aerial photogrammetry) and analyse the performance of the new BA model, two datasets are processed in the experiment.The first dataset is a group of aerial images --the Toronto dataset --released by the International Society for Photogrammetry and Remote Sensing (ISPRS) and the other one is a UAV dataset obtained at a flying altitude of 138.632 meters, covering an area of 26245.5 square meters. The Result of Toronto Dataset The Toronto dataset contains 13 images captured by the UCD cameras, 139648 three-dimensional feature points and 297097 projected points.To better analyse the performance of the ParallaxBA model based on the parallax angle with GN and LM, the performance of G2O (Grisetti et al., 2011) and sSBA (Konolige and Garage, 2010), which are known as two efficient BA software packages, are compared.Since sSBA only offers the LM module, so only the LM process of it is compared.Note that the LM algorithm is realized in different methods in these three software packages, and the one in ParallaxBA is the same as the sSBA.In addition, the performance of the same software packages differs when running in Windows and Linux, therefore the result in two systems are compared.Since the G2O and sSBA show very low efficiency in Windows, their result in that system are not presented. 
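Both solvers mentioned above amount to repeatedly solving (possibly damped) normal equations built from the Jacobian of the observation function, as in Formula (10). The following is a generic minimal sketch of such a step applied to a toy least-squares problem; the residual and Jacobian functions are placeholders and not the ParallaxBA observation functions.

```python
import numpy as np

# Generic damped Gauss-Newton (Levenberg-Marquardt style) update for a
# least-squares problem sum ||r(x)||^2: solve (J^T J + mu*I) dx = -J^T r.
# r_func / J_func stand in for the observation residuals and their Jacobian;
# they are placeholders, not the ParallaxBA observation functions.
def lm_step(x, r_func, J_func, mu=0.0):
    r = r_func(x)
    J = J_func(x)
    H = J.T @ J + mu * np.eye(x.size)      # mu = 0 gives the plain Gauss-Newton step
    dx = np.linalg.solve(H, -J.T @ r)
    return x + dx

# Toy example: fit y = a*exp(b*t) to three samples.
t = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 2.6, 7.4])
r_func = lambda p: p[0] * np.exp(p[1] * t) - y
J_func = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])

p = np.array([1.0, 0.5])
for _ in range(10):
    p = lm_step(p, r_func, J_func, mu=1e-3)
print(p)   # approaches a ~ 1, b ~ 1
```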
The MSE, the iterations, the number of functions and the run time are shown in Table 1, which are produced in the processing of the data with different software packages given the same initial values.Figure 2 shows the MSE curve of each iteration.The BA in G2O with GN cannot converge because of the singularity of the normal equation, while the MSE reaches 153.13 after 200 iterations in G2O with LM.In sSBA and ParallaxBA, the MSEs all reach 0.048656, but it needs 64 iterations in the former when only 8 and 20 iterations in the other.The ParallaxBA packages are more efficient as well, because their run time are far less than the others.Figure 3 shows the result of ParallaxBA --the reconstructed 3D features (blue points) and the camera centres (triangular cones). DISCUSSION The divergence and the convergence speed problem has been studied for long in aerial triangulation, thus the convergence performance of the ParallaxBA model is particularly discussed in this section.The convergence of BA greatly depends on the initial value (Mikhail et al., 2001).To analyse the convergence of the ParallaxBA model in a more comprehensive way, simulations are used here to conduct BA under different initial values.The normal BA (BA based on the Euclidean coordinate system) is also conducted for comparison. The simulated data is obtained under aerial photography condition, which contains 4 flight strips and 90 images (7680 x 13824 in size, without any geometric distortion) in total.The simulated movement trajectory is shown in Figure 5.The camera angle elements and line elements can be calculated through relative orientation, which are assumed as the optimal values.The Gauss noise under different levels is then added to the optimal values to simulate different initial values. Figure 5.The simulated Movement trajectory Convergence under Different Initial Values of Camera Angle Elements When given the same initial MSE, add the Gauss noise of 0.03, 0.05, 0.08, 0.10, 0.13 and 0.15 degrees to the initial values of camera angle elements and convergence performance is shown in Table 2 Convergence under Different Initial Values of Camera Line Elements Assume that the distance between the cameras is unit one, then add the Gauss noise of 0.1, 0.2, 0.3, 0.4 and 0.15 to the initial values of camera line elements.The convergence performance is shown in Table 3 and Figure 7 when given the same initial MSE. Obviously, the two kinds of BA converge to the same value when the error level (noise) is 0.3.However, as the error level keeps on increasing, the ParallaxBA model converges to a smaller MES than the normal BA when reaching the largest iteration.The simulation results under different initial values have also proved the stable convergence performance of the introduced BA model, in which its MSEs after iterations keep the same small, whereas the normal BA ones become larger as the initial error increases.Therefore, the performance of the new BA model is better than the normal one, and the new BA model can be applied more in the aerial photogrammetry and be further studied. Figure 2 . Figure 2. The MSE curve of each iteration Figure 4 . Figure 4.The ortho-rectification and mosaic result of the UAV dataset All the above results have validated the effectiveness and the fast processing speed of the ParallaxBA model.For the good performance of the new BA model, it shows promising future in applications in aerial photogrammetry. Figure 6 . 
Figure 6. The converging MSE of the ParallaxBA model and the normal BA model (XYZ) given different initial values of camera angle elements

Figure 7. The converging MSE of the ParallaxBA model and the normal BA model (XYZ) given different initial values of camera line elements

5. CONCLUSION
Traditional least-squares bundle adjustment methods for aerial triangulation are established in the Euclidean coordinate system. A large number of experiments and studies have shown that there are strong correlations between the unknowns in the traditional models, which can lead to poor convergence and slow convergence speed. The new BA model introduced in this paper, however, shows promising results in photogrammetry, especially in aerial triangulation. The experimental results have illustrated the good performance of the BA model, whose MSE reaches 0.048656 after just 8 (using GN) and 20 (using LM) iterations. The simulation results under different initial values have also proved the stable convergence performance of the introduced BA model: its MSEs after the iterations remain equally small, whereas those of the normal BA become larger as the initial error increases. Therefore, the performance of the new BA model is better than that of the normal one, and the new BA model can be applied more widely in aerial photogrammetry and studied further.

Table 1. The convergence of the Toronto dataset in G2O, sSBA and ParallaxBA

Table 2. The convergence of the ParallaxBA model and the normal BA model given different initial values of camera angle elements

Table 3. The convergence of the ParallaxBA model and the normal BA model given different initial values of camera line elements
4,034.8
2017-09-13T00:00:00.000
[ "Engineering" ]
Scale law of complex deformation transitions of nanotwins in stainless steel Understanding the deformation behavior of metallic materials containing nanotwins (NTs), which can enhance both strength and ductility, is useful for tailoring microstructures at the micro- and nano-scale to enhance mechanical properties. Here, we construct a clear deformation pattern of NTs in austenitic stainless steel by combining in situ tensile tests with a dislocation-based theoretical model and molecular dynamics simulations. Deformation NTs are observed in situ using a transmission electron microscope in different sample regions containing NTs with twin-lamella spacing (λ) varying from a few nanometers to hundreds of nanometers. Two deformation transitions are found experimentally: from coactivated twinning/detwinning (λ < 5 nm) to secondary twinning (5 nm < λ < 129 nm), and then to dislocation glide (λ > 129 nm). The simulation results are highly consistent with the observed strong λ-effect and reveal the intrinsic transition mechanisms induced by partial dislocation slip. Coactivation of twinning and detwinning occurs in the NTs with λ < 5 nm, and the NTs with λ = 2-3 nm contribute the highest ratio. The standard deviation of the evaluated λ is 0.5 nm.

Figure 4. The statistical diagram of the deformation behaviors. 14 microzones are observed through in situ TEM tests, of which 7 observed zones contain NTs with λ < 5 nm, 5 observed zones have NTs with 5 nm < λ < 129 nm, and 2 observed zones are twins with λ at the submicrometer scale. The standard deviation of λ is 20 nm for the upper limit of 129 nm and 0.5 nm for the lower limit of 5 nm. The statistical results show that 80% of the NTs (5 observed zones with λ < 5 nm) exhibit coactivated twinning and detwinning under a 70° TB orientation angle to the loading direction, and 100% of the NTs (2 observed zones with λ < 5 nm) exhibit detwinning and the subsequent martensite transformation under a 9° TB orientation angle to the loading direction. Among the 5 observed zones containing NTs with λ = 6-129 nm, secondary twinning occurs with a frequency of 80%. In the remaining 2 observed zones, dislocation motion is active in the NTs with λ at the submicrometer scale.

These dislocations are called the 60° system. When α1 = 30° for the leading and α2 = -30° for the trailing partials, the dislocations are defined as the screw system 2 . According to the Thompson tetrahedron illustrating the possible slip planes in an FCC crystal, the mixed dislocation and the screw dislocation both contribute to deformation twinning 3 . The 60° system is associated with the mixed dislocations. Therefore, these two kinds of dislocation systems readily nucleate twinning deformation in nanocrystalline materials. Note that the movement of the leading and trailing partials, as well as the stacking fault, is driven by the applied stress. Thereby, when the leading partial CD moves a given distance, the work done by the applied stress can be expressed as equation (1). From dislocation theory, the increment of the dislocation line energy can then be obtained, where the absolute length of the leading partial CD is approximated as the width of the defined region and the Burgers vector magnitude is that of a lattice dislocation; the remaining quantities are the Poisson ratio, the shear modulus, and the angle between a Burgers vector and the dislocation line, such as α1 and α2 shown in Supplementary Figure 12. The angle is defined to be positive when it rotates anticlockwise from the dislocation line.
Then, the critical twinning stress can be determined from this relation, given as equation (3); here, the angle parameters take different values for the 60° system of dislocations and for the screw system. On the other hand, when the trailing partial C'D' moves a given distance, the work done by the applied stress is given as equation (4), and the reduction of the stacking fault energy can be expressed in terms of the intrinsic stacking fault energy. In addition, the increased dislocation line energy of the lattice dislocation can be derived as equation (6). According to the energy balance between the work done by the applied stress and the energy changes in the dislocation segments and the stacking fault plane, the critical trailing stress can then be obtained; again, the angle parameters differ between the 60° system of dislocations and the screw system. Moreover, the balance between the work done by the applied stress and the increment of the dislocation line energy leads to the critical detwinning stress. Twinning deformation requires the critical twinning stress to be lower than both the critical trailing stress and the local stress, whereas detwinning deformation occurs only when the critical detwinning stress is smaller than the local stress. It should be pointed out that the 60° system and the screw system are two classic types of dislocation system for deformation twinning. Since the dislocations in a grain are most likely of mixed nature 2 , it is difficult to separate and probe the 60° dislocation and the screw dislocation in TEM tests during deformation twinning. Another important issue is how to determine the maximum local stress in NTed metals, which must be greater than the critical twinning/detwinning stress for twinning/detwinning to occur. A dislocation density-based plastic model was developed in our previous work to describe the grain-size- and twin-spacing-dependent mechanical properties of NTed metals 5 . On the basis of this plastic model, the local flow stress in a unit of twin lamellae (Supplementary Figure 13) can be expressed in terms of an empirical constant, the Burgers constant, the dislocation density in the interior crystal, and the local dislocation density in the twin lamellae.
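The competition between these critical stresses can be summarized in a small decision routine. The sketch below only encodes the selection criteria stated above; the critical stresses are passed in as precomputed numbers (in the model they follow from the energy-balance equations and depend on the twin spacing, the shear modulus, the stacking fault energy and the dislocation character), so the function illustrates the logic rather than the model itself.

```python
def deformation_mode(tau_local, tau_twin, tau_trail, tau_detwin):
    """Classify the expected deformation behavior of a twin lamella from the
    local flow stress and the critical twinning, trailing and detwinning
    stresses (all in the same units, e.g. MPa).

    Twinning requires the critical twinning stress to be below both the
    critical trailing stress and the local stress; detwinning requires the
    critical detwinning stress to be below the local stress."""
    can_twin = tau_twin < tau_trail and tau_twin < tau_local
    can_detwin = tau_detwin < tau_local
    if can_twin and can_detwin:
        return "coactivated twinning/detwinning"
    if can_twin:
        return "twinning (partial dislocation slip)"
    if can_detwin:
        return "detwinning"
    return "full dislocation glide"
```

Scanning such a routine over a range of twin-lamella spacings λ, with the critical stresses evaluated from the model, is one way to reproduce the λ-dependent transition map discussed in the text.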
1,348.2
2019-03-29T00:00:00.000
[ "Materials Science" ]
Genetic diversity of Arcobacter isolated from bivalves of Adriatic and their interactions with Mytilus galloprovincialis hemocytes Abstract The human food‐borne pathogens Arcobacter butzleri and A. cryaerophilus have been frequently isolated from the intestinal tracts and fecal samples of different farm animals and, after excretion, these microorganisms can contaminate the environment, including the aquatic one. In this regard, A. butzleri and A. cryaerophilus have been detected in seawater and bivalves of coastal areas which are affected by fecal contamination. The capability of bivalve hemocytes to interact with bacteria has been proposed as the main factor inversely conditioning their persistence in the bivalve. In this study, 12 strains of Arcobacter spp. were isolated between January and May 2013 from bivalves of Central Adriatic Sea of Italy in order to examine their genetic diversity as well as in vitro interactions with bivalve components of the immune response, such as hemocytes. Of these, seven isolates were A. butzleri and five A. cryaerophilus, and were genetically different. All strains showed ability to induce spreading and respiratory burst of Mytilus galloprovincialis hemocytes. Overall, our data demonstrate the high genetic diversity of these microorganisms circulating in the marine study area. Moreover, the Arcobacter–bivalve interaction suggests that they do not have a potential to persist in the tissues of M. galloprovincialis. | INTRODUCTION The genus Arcobacter has become increasingly important in recent years because species such as A. butzleri and A. cryaerophilus are considered emergent food-borne enteropathogens (Collado & Figueras, 2011). Arcobacter butzleri, in particular, has been classified as a serious hazard to human health by the International Commission on Microbiological Specifications for Foods (ICMSF 2002) and a significant zoonotic pathogen (Cardoen et al., 2009). In Europe, A. butzleri was often recovered from samples of patients with diarrhea in Belgium, France, and Italy (Prouzet-Mauleon, Labadi, Bouges, Menard, & Megraud, 2006;Vandamme et al., 1992;Vandenberg et al., 2004). Moreover, the persistence of microorganisms in bivalves can be species or strain dependent (De Abreu Corrìa et al., 2007;Maalouf et al., 2011;Morrison et al., 2011). This is not surprising considering that phagocytosis requires specific surface interactions with bacteria/bivalve (Canesi et al., 2002). Phagocytic efficiency of bivalve hemocytes toward foreign particles can be assessed by studying the hemocyte ability to spread toward them (Malagoli & Ottaviani, 2004;Mosca, Narcisi, Cargini, Calzetta, & Tiscar, 2011;Mosca, Lanni et al., 2013; and degrade these by producing reactive oxygen species (ROS). This latter activity is defined respiratory burst (Gunderson & Seifert, 2015;Ordàs, Novoa, & Figueras, 2000;Versleijen et al., 2008). To our knowledge, only one previous work studied the dynamic of interaction of A. butzleri and bivalves showing that the type strain A. butzleri LMG 10828 T did not persist in Mytilus galloprovincialis (Ottaviani, Chierichetti et al., 2013). In order to acquire preliminary information about potential persistence of Arcobacter spp. in host tissues, it is important to investigate the circulation of human pathogenic species in different marine environments and possible inter-and intraspecies differences in the interaction with the bivalves that are more commonly harvested in those areas. 
Despite this, reports on the genetic characterization of arcobacters isolated from the marine environment and on pathogen-bivalve immunological interactions are scarce (Collado, Jara, Vásquez, & Telsaint, 2014; Levican et al., 2014; Ottaviani, Chierichetti et al., 2013). In this study, Arcobacter strains isolated from bivalves collected from a restricted harvesting area of the Central Adriatic Sea of Italy were molecularly identified and typed by enterobacterial repetitive intergenic consensus PCR (ERIC-PCR) analysis. Moreover, the spreading activity and respiratory burst induced by these isolates on hemocytes of M. galloprovincialis, the most common bivalve harvested in the Central Adriatic Sea, were investigated. Following incubation for 48 hr in aerobic conditions, 0.2 ml of the broth was inoculated by passive filtration with Millipore filters (0.45 μm, 47 mm) onto blood agar plates (trypticase soy agar supplemented with 5% sheep blood) and incubated for 48-72 hr at 30°C under aerobic conditions.

| Identification and molecular characterization of Arcobacter isolates
Presumptive colonies biochemically identified as Arcobacter spp. were tested by PCR for the rpoB gene of Arcobacter spp. with primers from Korczak et al. (2006). In brief, 4-5 colonies from each isolate were suspended in 500 μl of sterile distilled water, denatured at 95°C for 10 min and then centrifuged at 13,000 rpm for 1 min. The supernatant was recovered and used as the template in a PCR mixture completed with Milli-Q water. The PCR consisted of an initial denaturation at 94°C for 5 min followed by 40 cycles of 94°C for 1 min, 25°C for 1 min, and 72°C for 2 min, with a final extension at 72°C for 5 min. Arcobacter butzleri LMG 10828T was used as the reference strain in all molecular reactions.

| Experimental animals and hemolymph collection for phagocytosis assays
Each phagocytosis assay was performed on hemolymph pooled from 10 organisms of M. galloprovincialis (40-50 mm shell length) harvested from the same coastal area in which the bivalve sampling was carried out. Moreover, each phagocytosis assay was repeated five times on five different lots of mussels, sampled monthly from January to May 2013. The organisms were maintained in a closed recirculating system equipped with a filtering apparatus and air supply. A multiparameter probe (mod. YSI 556; Technosea) was used to monitor the temperature (mean 18.1 ± 0.6°C), salinity (mean 33.1 ± 0.2‰), and dissolved oxygen (DO) (mean 6.8 ± 0.3 mg/L). The mussels were acclimated for at least 48 hr before the experiment and no feed was administered. The hemolymph was extracted from the posterior adductor muscle and pooled, and the cell concentration was adjusted to 1 × 10^6/ml in ice-cold artificial sterile seawater (ASS). The pool was then subdivided into two aliquots for the phagocytosis assays. For both assays, the cell wall of Saccharomyces cerevisiae (Zymosan A; Sigma-Aldrich) was used as the gold standard for its very high ability to stimulate phagocytosis (Malagoli & Ottaviani, 2004; Mosca, 2013a,b; Ordàs et al., 2000). Moreover, non-stimulated hemolymph represented the basal level. For the experiments on hemocytes, the Arcobacter strains, grown in tryptone soy broth (Oxoid) at 30°C for 48 hr, were centrifuged at 5000g for 20 min at 4°C. The pellet was resuspended in phosphate-buffered saline (PBS, 10% w/v) and adjusted to a concentration of 5 × 10^9 CFU/ml after optical measurement (5 × 10^8 CFU/ml gave approximately 0.5 OD at 600 nm).
| Spreading activity
The spreading activity of hemocytes was investigated by evaluating changes in hemocyte shape from a rounded (inactive) to an ameboid (active) form. In particular, the shape factor (SF), a pure number quantifying the degree to which a cell deviates from circularity, was measured by using a video imaging system integrated into a computerized analyzer of cells in suspension (Cell Viability Analyzer, Beckman Coulter) (Malagoli & Ottaviani, 2004). An aliquot of each hemolymph sample was incubated for 30 min with the bacterial suspension, corresponding to a bacteria-to-hemocytes ratio of 80:1 (Mosca et al., 2011). Ninety μl of the mixture was then placed on a microscope slide within a chamber delimited by a vaseline ring. A coverslip was placed over the slide in order to partially cover the chamber and, following hemocyte adhesion (time 0), 10 μl of 10^1 μmol/L N-formyl-Met-Leu-Phe (fMLP) was added to the uncovered part of the chamber. The digital image of the hemocytes was acquired after 20 min, and the parameters for SF evaluation, that is, perimeter length and area, were automatically obtained by electronically tracing the edges of the cells. The assay was run in parallel with Zymosan A, using a yeast-to-hemocytes ratio of 80:1, and with untreated hemolymph (Mosca et al., 2011). Global results from five independent experiments for each test and control group were expressed as SF mean values ± standard deviation (SD).

Figure 1. Geographical location of sampling areas in the Central Adriatic coast of Italy (Marches Region): blue and red sites correspond to Mytilus galloprovincialis and Chamelea gallina sampling areas, respectively.

| Hemocyte respiratory burst
The luminol-enhanced chemiluminescence (CL) assay has been widely used to measure the production of ROS in bivalve hemocytes (Gunderson & Seifert, 2015; Mosca et al., 2011; Mosca, Lanni et al., 2013; Ordàs et al., 2000; Versleijen et al., 2008). The measurement of the luminescence intensity (y-axis) versus time (x-axis) generates a positive polynomial curve, so the results can be reported as the integral value of the area under the curve (AUC) (Gunderson & Seifert, 2015; Versleijen et al., 2008). For the CL assay, hemolymph samples were placed in triplicate in 96-well microplates. Following 15 min incubation with luminol (Sigma-Aldrich) at 1 mmol/L, the hemocytes were stimulated to phagocytosis with the Arcobacter suspensions (bacteria-to-hemocytes ratio of 80:1) and Zymosan A (zymosan-to-hemocytes ratio of 80:1). The chemiluminescence intensity was detected with a microplate reader (Synergy H1; Bio-Tek) for 60 min, plotting the curve of the counts per minute and then calculating the AUC by integration (Gunderson & Seifert, 2015; Versleijen et al., 2008). Global results from five independent experiments for each test and control group were expressed as AUC mean values ± standard deviation (SD).

| Statistical analysis
For the analysis of the ERIC-PCR electrophoresis products, dendrograms were constructed using the Dice coefficient (Dice, 1945) and the unweighted pair group method with arithmetic averages (software Treeconw, version 1.3b; © Yves Van de Peer, University of Konstanz). Genetic relatedness between the PCR electrophoresis products of the strains was interpreted according to the method of Tenover et al. (1995). Thus, isolates were designated as indistinguishable, closely related, possibly related, or different with 0, 1-3, 4-6, and >7 band differences, respectively (Tenover et al., 1995).
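As a small illustration of this typing workflow, the sketch below computes the Dice coefficient between two ERIC-PCR band patterns and assigns the Tenover relatedness category from the number of band differences. The band positions are invented for the example, and the resulting pairwise similarities would normally be fed to the UPGMA clustering mentioned above.

```python
def dice_similarity(bands_a, bands_b):
    """Dice coefficient between two band patterns given as sets of band positions."""
    return 2.0 * len(bands_a & bands_b) / (len(bands_a) + len(bands_b))

def tenover_category(bands_a, bands_b):
    """Relatedness category from the number of band differences (Tenover et al., 1995)."""
    differences = len(bands_a ^ bands_b)   # bands present in only one of the two patterns
    if differences == 0:
        return "indistinguishable"
    if differences <= 3:
        return "closely related"
    if differences <= 6:
        return "possibly related"
    return "different"

# Hypothetical band positions (bp) for two isolates
isolate_a = {180, 320, 540, 900, 1200}
isolate_b = {180, 320, 560, 900}
print(dice_similarity(isolate_a, isolate_b))    # 0.666...
print(tenover_category(isolate_a, isolate_b))   # "closely related"
```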
Clusters were defined on the basis of an 80% similarity cutoff. In the phagocytosis assays, the statistical significance of differences between means was determined by one-way ANOVA for paired samples (each strain against each other and against each control). The accepted significance level was p < .01.

Seven strains (58%) were isolated from C. gallina and 5 (42%) from M. galloprovincialis (Table 1). Six strains (50%) were isolated in April, 2 (17%) in each of January and March, and 1 (8%) in each of February and May (Table 1). However, the number of strains is too low to assess whether the differences in prevalence are significant.

| Immune response
All data of the spreading activity are summarized in Table 2. Moreover, no significant intra- and interspecies differences were observed among strains. Nevertheless, stimulation with Zymosan A induced an AUC value significantly higher than those induced by the Arcobacter strains (Table 2).

| DISCUSSION
Arcobacter species have been frequently isolated from seawater of the Mediterranean area (Fera et al., 2010; Gugliandolo et al., 2008; Maugeri et al., 2004), where they may survive for a long time (Collado et al., 2008). These results suggest that the marine environment, and thus the marine organisms used as food, particularly bivalves, may represent a potential reservoir of Arcobacter for infection (Collado & Figueras, 2011). In agreement with previous studies carried out in different geographical areas, this study confirms that potentially pathogenic arcobacters are frequently found in bivalve samples and that A. butzleri is the most prevalent species (Levican et al., 2014; Mottola et al., 2016; Nieva-Echevarria et al., 2013). Previous studies in Spanish marine areas reported A. butzleri and A. cryaerophilus as the most prevalent species isolated from mussels and clams, respectively, while, with respect to seasonality, A. butzleri predominated from June to October and A. cryaerophilus from January to May (Levican et al., 2014). In this study, in the monitored Adriatic coast of Central Italy, A. butzleri was more frequently isolated from clams than from mussels and A. cryaerophilus more from mussels than from clams. Moreover, A. butzleri and A. cryaerophilus were isolated throughout the period of the study, from January to May. This study period covers even the coldest months of the year for this geographical area, that is, January-March, when water temperatures usually range between 10 and 14°C. Our ERIC-PCR results demonstrated that different strains of A. butzleri and A. cryaerophilus circulated in the restricted marine environment object of the study in January-May 2013. This high variability of circulating Arcobacter strains is in agreement with recent studies from Chile and Spain (Levican et al., 2014). Unfortunately, the short period of investigation allowed us to isolate a number of strains too limited to make any statistical evaluation of these data. For this reason, in a future investigation, we intend to extend the period of study in order to assess whether differences of isolation occur in this marine ecosystem (Balbi et al., 2013; Canesi et al., 2001, 2002; Parisi et al., 2008; Pruzzo et al., 2005). Therefore, the elucidation of the interactions between pathogenic bacteria and hemocytes is important to explain their clearance in bivalve tissues and to predict the consequent efficiency of their elimination by depuration strategies (Pruzzo et al., 2005).
The functional approach used in this study to evaluate pathogen-bivalve interactions demonstrates that all A. butzleri and A. cryaerophilus strains, including the A. butzleri type strain, induced phagocytic responses of hemocytes significantly higher than the basal level and a spreading activity not significantly different from that of Zymosan. Zymosan is a highly concentrated immunogenic compound represented by the cell wall of Saccharomyces cerevisiae and, for this reason, it is usually used as the gold standard in phagocytosis assays. On the contrary, our Arcobacter strains were assayed without any treatment that could concentrate their antigenic components. This could explain the higher respiratory burst induced by Zymosan compared with that induced by the strains. These results are consistent with bioaccumulation experiments of A. butzleri LMG 10828 in M. galloprovincialis, which demonstrated how the Arcobacter type strain was quickly removed from the host tissues (Ottaviani, Chierichetti et al., 2013). In the light of these findings, a rapid clearance of these Arcobacter strains from M. galloprovincialis tissues can be assumed. In future investigations, to fully elucidate how bacteria-bivalve interactions develop and persist in natural conditions, it is also our intention to consider the environment in which these interactions occur and to evaluate its influence on both the expression of bacterial cell properties and bivalve health status. However, to our knowledge this study, even with these limitations, represents the first that tested, at the laboratory scale, the Arcobacter-mussel interactions by studying the phagocytic capability of M. galloprovincialis hemocytes toward A. butzleri and A. cryaerophilus strains, thus providing preliminary information to predict the efficiency of their elimination by host tissues and the consequent impact on human health.

| CONCLUSIONS
Different strains of the human pathogens A. butzleri and A. cryaerophilus were found in M. galloprovincialis and C. gallina harvested from a restricted marine environment of the Central Adriatic Sea of Italy between January and May 2013. However, the interaction of these Arcobacter strains with M. galloprovincialis hemocytes suggests that they do not have the potential to persist in mussel tissues and therefore they could be efficiently removed with the conventional purification practices.

ACKNOWLEDGMENTS
This work was funded by a research project (IZSUM 10/2011) from the Italian Ministry of Health.
3,448.2
2016-09-20T00:00:00.000
[ "Biology", "Environmental Science" ]
Strange bedfellows: on Pritchard’s disjunctivist hinge epistemology The paper discusses some themes in Duncan Pritchard’s last book, Epistemic Angst. Radical Skepticism and the Groundlessness of Our Believing. It considers it in relation to other forms of Wittgenstein-inspired hinge-epistemology. It focuses, in particular, on the proposed treatment of Closure in relation to entailments containing hinges, the treatment of Underdetermination-based skeptical paradox and the avail to disjunctivism to respond to the latter. It argues that, although bold and thought-provoking, the mix of hinge epistemology and disjunctivism Pritchard proposes is not motivated. Introduction In this paper I propose a close examination of some prominent themes in Duncan Pritchard's Epistemic Angst (2016). Pritchard's dialectical setup is familiar to connoisseurs of hinge epistemology. 1 For he contends that skepticism should be understood as a paradox, which, starting with prima facie acceptable premises, leads to the unacceptable conclusion that we do not possess knowledge of ordinary empirical propositions about mid-size physical objects in our surroundings. Moreover, Pritchard thinks that the paradox in fact comes in two distinct forms, which he calls the "Closure-based para-dox" and the "Underdetermination-based paradox". 2 He thinks that these paradoxes put pressure on our notion of knowledge, rather than that of warrant or justification, despite what other hinge epistemologists have maintained. 3 Yet, he rightly notices that the kind of knowledge targeted by these paradoxes is one that entails having rationally grounded or supported belief. Hence, that it is a notion of knowledge, which, like any epistemically internalist notion, does not sever the connection between its obtaining and its accessibility to a subject. Like other hinge epistemologists, 4 Pritchard notices that crudely externalist responses are impotent vis-à-vis either form of skeptical paradox. For either they simply do not address the internalist notion of knowledge that gives rise to them; or else, they must be revisionary of the idea, which Pritchard justly sees as ingrained in our common linguistic and epistemic practices, that knowledge and having rational support for what one in fact knows can, and often do obtain together. Finally, like several other hinge epistemologists, Pritchard favors an "undercutting" anti-skeptical strategy over an "overriding" one. That is, he aims to show that the relevant skeptical paradoxes are in fact illusory (cf. p. 16), rather than real but based on notions that, once better understood and therefore revised, will not give rise to them (ibid.). The main elements of novelty in Pritchard's version of hinge epistemology are his defense of the Closure Principle and his claim that there is nothing in Wittgenstein's On Certainty (1969) that could speak to the Underdetermination-based version of the skeptical paradox. This shortcoming, in its turn, motivates Pritchard's third main innovative move: the endorsement of epistemological disjunctivism. For, according to him, while Wittgenstein's views, once suitably developed, can take care of Closurebased skepticism, epistemological disjunctivism is called for to address the other form of the paradox. Finally, claims Pritchard, once these two paradoxes are solved in the ways proposed, it could actually be shown that, far from being in stark contrast, as most epistemologists would have it, hinge epistemology and disjunctivism can in fact lend support to each other. 
For, on the one hand, the Wittgensteinian strand shows the locality of epistemic evaluation and therefore sets the boundaries within which the factive rational support for our beliefs disjunctivists hold we have can in fact obtain. On the other hand, disjunctivism supplements hinge epistemology by providing it with the means to deal with Underdetermination-based skepticism. 2 Pritchard's two paradoxes are roughly equivalent to the Cartesian and the Humean paradoxes highlighted by Wright (1985Wright ( , 2004. Pritchard argues that the two paradoxes significantly differ and that they call for rather distinct solutions. Wright, in contrast, contends that despite their differences they depend on a common lacuna in the skeptical reasoning and hence call for a unified solution. The lacuna consists in not seeing that a lack of evidential justification for the denial of radically skeptical hypotheses, such as the dreaming or the BIV one, or for heavyweight assumptions, such as "There is an external world" or "Our senses are mostly reliable", does not necessarily entail that either that denial or the relevant assumptions are not made rationally. Wright labors to show that they are because we have a rational entitlement-that is, a non-evidential warrant-for them. While I concur with Wright's diagnosis of the common lacuna motivating these forms of skepticism, in my own work I have claimed that both paradoxes proceed by assuming rightly (contra Wright) that there are only evidential justifications, but by concluding-mistakenly-that lack of justification is tantamount to lack of rationality. By contrast, I have claimed that, even if unjustified and unjustifiable, the relevant assumptions are still rational, since epistemic rationality extends to the assumptions, which make the acquisition of epistemic (evidential) justifications possible in the first place. 3 See Wright (1985Wright ( , 2004 In the remainder of this paper, I will focus on these three main aspects of Pritchard's intriguing proposal. Closure It is a well-known consequence of Wittgenstein's epistemology that it gives rise to what seems, at least prima facie, a denial of the Closure principle for knowledge and other epistemic operators such as evidential warrant or justification. For, given the locality of reasons, we can know (or justifiably believe), according to Wittgenstein, ordinary empirical propositions about mid-size objects in our surroundings, but we cannot know the "heavy-weight" implications of those propositions, such as "I am not a BIV" or "There is an external world", on which the very possibility of knowing ordinary empirical propositions depends. Closer to Wittgenstein's own terms, we can know, for instance, the age of a given fossil, but we cannot thereby know that the Earth has been existing for a very long time. Presumably, to gain the latter piece of knowledge, we would run something like the following argument: (I) This fossil is 10 million years old (II) If this fossil is 10 million years old, the Earth has existed for a very long time (III) The Earth has existed for a very long time. Yet, any justification we have for (I)-such as the evidence provided by radiocarbon dating-depends on taking (III) for granted. If (III) were not in place, if we thought, for instance, that the Earth had just come into existence 5 min ago replete with all fossils and everything else currently on it, we would still have the relevant evidence-the radiocarbon dating-but no justification for (I). 
According to Wittgenstein, whenever we face this kind of epistemic dependence, we ought to recognize that (III) is a "hinge" (OC 341-343, cf. 105)-a necessary assumption-that needs to stay put for us to have a justification for ordinary empirical propositions like (I). If so, then, an argument like the previous one, which aimed at giving us a justification for (III), would be ultimately question begging, since it is necessary to take for granted its conclusion in order to have a justification for its premise(s) in the first place. Furthermore, for Wittgenstein, hinges are not like ordinary empirical propositions, and are in fact similar to rules. Hence, they are not suited to be the content of a propositional attitude like belief, which is, according to most, necessary for knowledge. Thus, while we may have justification or even knowledge of (I), and despite the fact that (I) entails (III), we do not have either justification for, or knowledge of (III). Now, several hinge epistemologists are prepared to face the situation, offering considerations to minimize the allegedly devastating effects of such an admission. Some, like Wright (2004, pp. 177-178), for instance, are willing to grant that Closure does not fail for other epistemic notions in the vicinity, like rational entitlement. 5 Others point out that it would fail, for principled reasons, only when the consequent of the conditional is a heavy-weight implication, but not when it is any other ordinary empirical proposition. 6 Hence, Closure holds, but it does not hold unrestrictedly. Yet, its limited failure would be compatible with the retention of that very principle in all those cases in which we do in fact need it in order to explain how we can extend our knowledge (or justified belief) from one ordinary empirical proposition to the consequent of a known entailment, which has the former proposition as its antecedent. 7 Pritchard, in contrast, wants to maintain the unrestricted validity of the Closure principle and yet, with Wittgenstein and several hinge epistemologists, he does not want to say that we can know heavy-weight assumptions, even if these are entailed by ordinary empirical propositions we know. His way out of this impasse consists in claiming that our attitude towards these assumptions is not one of belief and that we would never acquire such a belief because of an application of the Closure principle. Since belief is necessary for knowledge, it follows that the Closure principle is simply not applicable in those cases in which the consequent of the known conditional is a "heavy-weight" proposition, or, as Pritchard prefers to call it, a "hinge commitment". Now, it is important to note, first, that obsessing over the unrestricted validity of Closure is motivated only to the extent that Closure itself is seen as a generative principle. That is, as a principle which allows us to extend our justification (or knowledge) from the premises of a given argument, to its conclusions, via known entailment. This, however, is not a sacrosanct reading of it. In fact, since at least Wright (1985), it has become customary to distinguish between Closure and Transmission of justification (or other epistemic operators, such as warrant, knowledge and later on, entitlement). In my own work, I have explained the difference by saying that all it matters to Closure is that the same kind of epistemic operator would figure in the premises and the conclusion of the argument, irrespective of the provenance of the relevant epistemic good. 
Thus, for instance, if one were justified in believing that Q through testimony, and one were justified in believing that P through perception, and were justified a priori to believe that P entails Q, Closure for justification would be respected and yet it would not thereby give one a first justification to believe Q; nor would it enhance any antecedent justification one would have for Q already. Transmission, in contrast, would be the generative principle that would either give one a first justification to believe Q, or enhance one's antecedent justification for it. 8 While this is important, I think, to differentiate various epistemic principles, which may be more or less significant to us and thus difficult to abandon, the choice of calling "Closure" what others would call "Transmission" is ultimately terminological. So, let us grant Pritchard his own reading of Closure-which makes it a case of transmission by other epistemologists' lights-and let us turn to a discussion of his way out of the impasse. One way Pritchard could go would be to say, with Wittgenstein, that since hinges are rules and rules are not propositions, they are not apt to be contents of beliefs and knowledge. Yet, Pritchard does not want to take this route-and rightly so in my opinion. For it is difficult to motivate the idea that hinges have no descriptive content, even though they may, in context at least, play a normative role. Moreover, they can clearly figure as antecedents in conditional statements, they would admit of meaningful negations and could occur in disquotational schemas. Of course, perceptive supporters of the non-propositional reading of hinges are aware of all that and typically claim that in all these cases we would have the same sentence-a doppelgänger of the hinge-playing no hinge role, though. 9 This defense, however, is problematic on multiple fronts. For instance, it would entail some kind of semantic ignorance on our part, since we would typically be oblivious to such a difference. It would also not fit well with at least some of Wittgenstein's own remarks on On Certainty, in which he contends that the very same proposition can be treated as a rule of testing and as something to be tested, depending on occasion (cf. OC 96-99). As noticed, Pritchard's solution is to retain the propositionality of hinges while making them the content of a peculiar attitude, different from belief, called "commitment". Now, one may legitimately wonder what commitments are. Yet, Pritchard is surprisingly silent on how exactly we should understand them. To be reminded, in a Wittgensteinian spirit, that commitments encode the idea that we bear to hinges a kind of animal, visceral certainty is fine as far as it goes, but it should not obscure the fact that we can and do conceptualize hinges and that they are the content of some kind of propositional attitude after all. Indeed, this seems to be a straightforward consequence of Pritchard's rejection of the non-propositional reading of hinges. Yet, if hinges are propositions and are the content of a specific attitude, then it is the very attitude at play that is doing all the philosophical work. Thus, the question remains as to what commitments really amount to. True, we are told that they are a kind of propositional attitude, and that they are different from beliefs. Now, other hinge epistemologists (see Wright 2004;Coliva 2015), have labored to develop a notion of acceptance, which is not based on evidence in favor of the proposition that constitutes it content. 
Acceptance, in turn, is a propositional attitude in which the subject takes or holds the propositional content to be true. 10 It is Footnote 8 continued with one another, as they apply to different kinds of argument, the philosophically more interesting one is in fact the latter. 9 See Moyal-Sharrock (2005, pp. 140-143). For a critical appraisal, see Coliva (2010, pp. 152-161). not clear in what way, if any, Pritchard's notion of commitment is any different from what other hinge epistemologists call "acceptance". According to Pritchard, this move would block the usual objections to deniers of Closure because Closure holds only for belief and thus it does not apply to commitments. As simple as the proposal looks like, I am not entirely convinced it would work. For, if "There is an external world" is a proposition and, as Pritchard holds, it is knowledgeably entailed by "Here is a hand", how come that we can know that we have a hand and yet not know that there is an external world? In fact, how come that we can know we have a hand and yet not even be allowed to form a belief with respect to "There is an external world", which is a proposition we know to be entailed by "Here is a hand"? Merely insisting that it is a commitment would sound ad hoc. One way to bring out the worry is to consider the following. I would grant Pritchard that we do not typically come to hold that there is an external world by going through an inference such as the previous one. That inference, that is, is not generative of knowledge. But why cannot it at least generate belief, ex post, if it is a valid inference, by Pritchard's lights? The problem is similar to the one usually raised against other hinge epistemologies, and known by the name of "alchemy". 11 Suppose you have a hinge commitment in "There is an external world", acquired one way or another (Pritchard does not say how), which allows you to have a justified belief (or even knowledge) that there is your hand where you see it. Now, consider realizing that "Here is a hand" entails "There is an external world". Why shouldn't this inference allow you to bolster your initial commitment in "There is an external world" and turn it into a belief (or even into a justified, or knowledgeable belief)? The answer is not clear and more would need to be said. Similarly, it would help to be more explicit about how the proposal fares with respect to the charge of licensing "abominable conjunctions" such as "I know I have a hand but I don't know there is an external world". 12 My hunch is that, in the end, Pritchard will have to concur with other hinge epistemologists that the scope of the Closure principle is limited to known entailments flagging ordinary empirical propositions on both sides of the conditional, or, more generally, propositions which are not hinges and can therefore be known and consequently (justifiably) believed. Furthermore, it is my hunch that in response to the charge of licensing abominable conjunctions, Pritchard would have to say, with other hinge epistemologists that the charge is ultimately based on not realizing a crucial, yet very subtle difference (see Harman and Sherman 2011), between belief and commitment, however Pritchard may want to spell out the latter notion. Once that crucial difference is appreciated, then the initial conjunctions will have to be substituted with something like the following "I know I have a hand, and I don't know that there is an external world, yet the latter is a hinge commitment of mine". 
Furthermore, since officially Pritchard's story has it that this in an a-rational commitment, he might have to say something more to minimize the effect of adding that specification in the relevant conjunct. For instance, he might point out how, even though a-rational, that commitment is in fact necessary for the acquisition of justified belief and eventually knowledge of the ordinary empirical proposition that figures in the other conjunct. 13 Thus, I do think Pritchard would have available to him various responses to the objections usually raised against people who deny the unrestricted validity of Closure, which will in turn have to be assessed on merit. Yet it is crucial that the very notion on which they would hinge-pun intended-namely, the notion of hinge commitment, be spelled out in more detail. On Certainty and underdetermination-based skepticism The other issue I would like to consider is whether it is really the case that Wittgenstein's remarks in On Certainty do not provide us with any useful element to counter Underdetermination-based skepticism. Pritchard painstakingly discusses this form of skeptical paradox. Here I will just offer a broad-brush characterization of it. If you hold that it is possible for a subject to have the same kind of evidence irrespective of whether she is actually perceiving a hand in front of her, or whether she is merely hallucinating having one, it follows that, whatever is rationally available to a subject actually falls short of providing her with knowledge of (or justified belief in) "Here is my hand". Now, Wittgenstein's own insistence on the locality of reasons is matched by the idea that knowledge and justification do take place. Yet, they are not direct or immediate, since they take place within a system of assumptions (or hinge commitments, if you will). The latter are not themselves known or justified, yet stand fast for all of us and actually allow us to form justified beliefs and to gain knowledge of (among others) ordinary empirical propositions about mid-size physical objects. Hence, on this reading of Wittgenstein's remarks in On Certainty, he would not be rejecting the idea that perceptual experiences are in principle indistinguishable whatever their causal origin might be. Yet, he would offer us considerations in favor of the idea that these experiences take place within a system of hinges, which are actually unassailable by skeptical attacks, and that allow us to acquire justification and knowledge about physical objects in our surroundings. In other words, once one is willing, as Pritchard is, to buy into the idea of hinges, which are impervious to skeptical assaults, then there seem to be enough elements in Wittgenstein's On Certainty to counter Underdetermination-based skepticism too. Let me clarify how: by means of hinges such as "There is an external world" and "Our senses are broadly reliable", by having a hand-like experience, absent defeaters, one would thereby possess a justification for "Here is a hand". That is, thanks to the hinge, one would be entitled to take one's perceptual experience at face value as favoring "Here is a hand" (as opposed to its skeptical counterpart, e.g. "I am a handless BIV hallucinating having a hand"). Thus, there is a sense in which, thanks to hinges, our perceptual experiences provide us with a better rational support than their subjectively indistinguishable skeptical counterparts, such as BIV-experiences. 
That is, thanks to hinges, they provide us with "decisive" reasons in favor of the corresponding ordinary empirical propositions. 14 Furthermore, on this picture, one can actually offer one's perceptions as reasons in support of one's claim that, say, there is a robin on a tree one is looking at. For, given one's available evidence and the suitable hinges, one can trust one's senses, absent defeaters, and justifiably believe that there is a robin on a tree in virtue of one's current perceptions. Hence, I am not sure why Pritchard thinks we should turn to disjunctivism to solve the Underdetermination-based skeptical paradox. 15 Still, let us assume for the sake of the argument that one needs to supplement Wittgenstein's position considerably in order to confront that paradox. The question now is whether epistemological disjunctivism would really help. Surely, epistemological disjunctivism can easily explain the commonsensical intuition that we justify our beliefs regarding specific physical objects in our surroundings by saying "Because I see that such-and-so". As we saw, however, a Wittgenstein-inspired view would be equally well placed to explain that intuition. Moreover, it would be equally well placed to account for the intuition that our perceptual experiences give us better rational support for believing ordinary empirical propositions rather than their skeptical counterparts, thanks to the relevant hinges. Hence, those considerations are not enough to motivate the endorsement of disjunctivism. Pritchard then claims that when one is actually seeing a robin on a tree, say, one would be able to favor the good-case scenario over a bad one by appealing to further considerations regarding the absence of defeaters. So one would be able to favor the former scenario even if one could not perceptually tell the difference between the two cases. 16 While we should happily grant all that, these further considerations, as Pritchard notices, would be impotent to help one dismiss a radically skeptical scenario, like the Cartesian one, since they would be compatible with its obtaining. Hence, according to Pritchard, we should remember that radically skeptical hypotheses are "by their nature bare-that is, rationally unmotivated-error possibilities" (p. 140), to which we can respond just by appealing to the fact that we are seeing a robin on a tree. Still, we cannot thereby conclude that we know the denial of radically skeptical hypotheses. Here-I take it-is where Wittgenstein's considerations become germane again, for it is only by reverting to them that radically skeptical scenarios can be dismissed and yet we do not reach a position where we can actually know their denial. The worry, however, is that responding this way to the Underdetermination-based skeptical paradox is dialectically suspicious. For either there is no such radically skeptical paradox at all, and the problem is merely one of vindicating the idea that we can use perceptions to justify (or claim knowledge of) our ordinary empirical beliefs. Or else, there is a radically skeptical paradox, which is different from its Closure-based counterpart, and that crucially depends on the indistinguishability thesis.

14 Cunningham (2016) uses the adjective "decisive". 15 For another account sympathetic to Wittgenstein's position in On Certainty, and skeptical of the need of complementing it with disjunctivism, see Ashton (2015). 16 For a critical appraisal of Pritchard's disjunctivism, see Ranalli (2018). For the relationship between Pritchard's epistemological disjunctivism and metaphysical disjunctivism, see Cunningham (2016).

If the former is the case, disjunctivism is right to respond that we may appeal to various considerations concerning the absence of real (as opposed to purely skeptical) defeaters to favor a good-case scenario, even if it is subjectively indistinguishable from a bad-case one. Yet, as we saw, a Wittgenstein-inspired response would be equally well placed. In the latter case, in contrast, I am not sure, first, that Pritchard has really differentiated the Underdetermination-based paradox from the Closure-based one. After all, he seems to be saying that what needs to be shown is how we can dismiss radical skepticism about hinges, once we realize we cannot acquire or vindicate knowledge of those through an inference, such as a Moorean one, which starts out with the knowledgeable premise "Here is a hand" and allegedly derives the knowledgeable conclusion that there is an external world. That paradox looks to me identical to the Closure-based one. Second, whether or not the Underdetermination-based paradox really differs from the Closure-based one, it remains that it cannot be dispelled by appealing to further considerations regarding the absence of real defeaters, and disjunctivism turns out to be unable to respond to it. Rather, all the anti-skeptical work seems to be done by Wittgenstein-inspired considerations regarding the dubiously rational status of skeptical hypotheses. For it is part of that work that only motivated doubts, which have consequences in practice and that can be resolved at least in principle, are rational (cf. OC 154, 231, 255, 256, 339). That entails that hinges have to stay put (OC 310-318, 327, 505, 522-523). Yet, once they are in place, they ground our justification for, or even knowledge of, ordinary empirical propositions. If so, however, embracing disjunctivism would not represent any clear advantage over an account of perceptual justification, and knowledge, more in keeping with Wittgenstein's own views in On Certainty. One might then appeal to a more traditional variant of disjunctivism, which incorporates the claim, about which Pritchard is noncommittal, that perceptions and delusionary experiences are metaphysically distinct mental states. By so doing, one may hold that, in the good case, when one does perceive, one knows that there is a robin on a tree, while in the bad case one doesn't. One might then hold that, in the good case, one would thereby also know that one is not a BIV, even if one is unable to tell how one knows. Yet, it should be noted that this move would sever the desirable connection between knowledge and its rational availability to a subject that Pritchard is concerned to maintain. Alternatively, one might deny the entailment between one's knowing that there is a robin, when one is in the good-case scenario, simply on the basis of one's perception, and knowing that one is not a BIV. This move, however, would be based on a rejection of the unrestricted validity of Closure, which again, as we saw in the previous section, is something Pritchard aims to avoid. Let us take stock.
I have argued that disjunctivism per se -especially in the internalist more friendly version favored by Pritchard-does not have any advantage over Wittgenstein's own position in On Certainty, because the latter too can accommodate the fact that we give (and have) perceptually-based reasons for our claims to knowledgeable or justified belief for ordinary empirical propositions, rather than for their skeptical counterparts. Furthermore, it is in On Certainty that we find the means to dismiss the rational status of radically skeptical scenarios, like the ones exploited in the Closure-based version of the skeptical paradox as well as in its Underdeterminationbased counterpart. In particular, and to reiterate, it is part of On Certainty that only motivated doubts, which have consequences in practice and that are at least in principle resolvable, are rational. For those rational doubts to be possible, however, hinges have to stay put. Connectedly, thanks to the fact that hinges stand fast for us, we can have, and can claim to have perceptually-based (or even testimonial) justifications for our empirical beliefs. Thus, I think it is safe to say that turning to disjunctivism to supplement On Certainty, since the latter would allegedly be defective in its own right, is a moot move. Disjunctivist perception? In closing, I would like to consider another worry that disjunctivism raises independently of its possible interplay with ideas to be found in On Certainty. Namely, the account of perception it provides. This is an issue that Pritchard himself, in keeping with other mainstream epistemologists, does not consider, for it traditionally falls in the remit of the philosophy of mind. Yet, boundaries need to be crossed to provide theories, which, while prime facie plausible in one realm, would actually falter if considered from a different, wider perspective. This is precisely the case with disjunctivism, which, while at least prima facie plausible if considered merely from an epistemological point of view, does not do equally well when apprised from the point of view of the philosophy of perception. Here I will not have the space to develop the point in any detail. Yet, it is generally agreed upon that if we look at cognitive psychology, and take vision as our leading example, it turns out to provide representations of distal stimuli according to complex (subpersonal) algorithms, which cognitive scientists investigate. The information processing is subpersonal (and indeed the algorithms just model it). Still, the resulting representational state is a person-level mental state, either because it often involves consciousness, or else because, even when it does not, it drives purposive behavior. Proximal stimuli, however, under-determine the distal conditions that cause them. Thus, the resulting representations are never a direct taking in of what caused them, not even in the veridical case. Moreover, the same kind of representations can be produced by stimulating the relevant areas in the brain by means of electrodes. Thus, visual representations as such are insensitive to their causes, and reference to the latter is needed to distinguish between veridical and illusionary (or even hallucinatory) ones. In other words, visual perceptions and hallucinations are not two different kinds of mental state, as they do have a "highest common factor", 17 generated through two different causal chains. 
Were one to insist that, nevertheless, they are two different kinds of mental state because their individuation should be wide and take into account the relevant causes, one could then object that the wider individuation seems to be entirely ad hoc, and that are plenty of good reasons to do otherwise. In particular, the narrower individuation seems to account better for the fact that, assuming that at least in some cases perception and hallucination provide a similar (or even identical) representation, that would lead subjects to act in similar (or identical) ways, ceteris paribus. Moreover, in perceptual psychology, which is the most developed psychological science, "factive-type states are less fundamental". 18 That is, "explanations in psychology center on kinds of psychological states that do not entail veridicality". 19 Furthermore, these psychological states, even when they are veridical, are representations of objects and properties out there (and are representations as of those objects and properties). Yet, they should not be conflated with their representata. Contrary to McDowell-style disjunctivism, therefore, contemporary perceptual psychology does not allow for the possibility of taking in facts-understood as properly arranged combinations of objects and properties, or even relations-through perception. 20 Thus, in keeping with our best science of perception, there seem to be very little reason for going disjunctivist. True, as previously remarked, Pritchard is uncommittal about the metaphysical status of perceptions and hallucinations. Yet, as we have all too briefly remarked, there are good reasons not to follow other, more traditional disjunctivists 21 on that score. If so, however, it is unclear what the supposed advantages of disjunctivism over the view of perceptual justification elicited from On Certainty would be. For one could not insist, at this point, that, if all goes well and we are in fact perceiving that P, then we are immediately in touch with the corresponding fact in the world and, provided we are capable of forming the corresponding belief, we thereby have direct knowledge of it. Thus, one important anti-skeptical move traditional disjunctivists would be able to make, would no longer be available. 22 As we have seen, moreover, there are good reasons to hold that disjunctivism per se is impotent against the Underdetermintion-based version of the skeptical paradox and that it is not, as such, particularly better placed, vis-à-vis Wittgenstein's own views in On Certainty, to vindicate the intuition that perception provides us with reasons in favor of our claims to knowledge or justified belief. Thus, I confess, I ultimately see no reason for contaminating hinge epistemology with it. Yet, it remains that the attempt is bold and interesting. Thus, it certainly deserves full consideration and further discussion.
7,567
2018-12-11T00:00:00.000
[ "Philosophy" ]
Use of the OSCAR-Fusion V1.4.a code for a preliminary assessment of the ACP contamination within the main ITER water cooling circuit

One of the main objectives of ITER is to produce 500 MW of power from a deuterium-tritium plasma for several seconds. This goal presents two inherent challenges: first, in-vessel components will require active cooling to remove the heat coming from the fusion reaction (i.e., mainly fast neutrons and alpha particles); second, the materials exposed to the neutron flux will yield activated corrosion products (ACPs) in all primary cooling circuits of ITER. From a safety point of view, ACPs are one of the contributors to the Occupational Radiation Exposure (ORE), they represent a source of radiological waste, and they also contribute to the source term for accidental scenarios involving the loss of primary confinement. Therefore, the ACP assessment is key to estimating the radiological impact on nuclear workers and the public. ITER nuclear safety engineers adopted the OSCAR-Fusion v1.4.a code to assess the ACP inventory in the Integrated Blanket ELMs and Divertor (IBED) cooling loop. This paper describes the selection of input data, the modelling of the circuits and the operational scenarios used in the OSCAR-Fusion calculations. This study also examines the outcomes of such calculations, notably in terms of ACP inventory, emphasizing the impact on the ORE and highlighting its driving parameters. Additionally, the paper offers recommendations for better ACP management in the context of the ITER project and in accordance with the ALARA principle. Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

Introduction

ITER D-T pulsed plasma operations (the ITER project, www.iter.org) imply huge fast-neutron generation and active cooling of the components exposed to high and continual heat fluxes [1]. Hence, the combination of activated materials and transport phenomena occurring within the water coolant leads to the accumulation of Activated Corrosion Products (ACPs) within the primary cooling systems. Among the many isotopes present in the ACP inventory, the gamma emitters constitute not only one of the main hazards for the Occupational Radiation Exposure (ORE) [2,3], but also a radiological source term for accidental scenarios involving the loss of primary confinement [4,5]. Recent work performed in the framework of the Nuclear Integration Engineering (NIE) program [6] highlighted that the major impact on the ORE relates to activities on the primary cooling systems. Consequently, the need to update the ACP source term for future analyses remains a crucial objective [7]. The ITER Radiation, Safety and Environment (RSE) group decided to re-evaluate the inventory of ACPs. This new assessment targeted the activity levels during the machine life cycle, and their distribution within the systems and buildings designed to provide the radiological confinement functions.
A precise ACP estimation must also encompass several aspects [8]: e.g. materials composition, piping and equipment surface finishing, activation rates in the regions under neutron flux, coolant chemistry, thermal-hydraulics, geometry of the cooling loop, coolant cleaning and filter efficiency. The RSE group secured the support of CEA-Cadarache experts on ACPs and used bespoke software developed by CEA, the OSCAR-Fusion v1.4.a code. OSCAR-Fusion [9] is a fusion-adapted version of the OSCAR5 code [10], already in use in the French nuclear industry [11] and validated against fission operational data [12]. The goal was to perform the most accurate estimate of the ACP inventory, identify the key contributors to the loop contamination and propose dose reduction measures to limit the impact on the ORE. Good practices from the fission nuclear industry were also considered [13].

This interdisciplinary activity aims at validating the methodology for ACP source term assessment through the OSCAR-Fusion code while providing a collection of robust and comprehensive results to be used in future ITER safety analyses. The rationale for this investigation was to revise the ACP source term used in past ITER safety analyses [14,15], which had not yet been fully updated or revised. The scope of this activity is the Integrated Blanket Edge-Localized Modes (ELMs) and Divertor (IBED) loop [16], which is the main primary cooling system of ITER in terms of both coolant volume and total wetted surface.

In terms of the organization of this study, section 2 provides a brief overview of the IBED loop, its 'clients', the primary and auxiliary systems, and the operational modes. Section 3 describes the characteristics of the OSCAR-Fusion code. Section 4 reports on the development of the OSCAR-Fusion input in the context of the ITER project, with emphasis on three aspects: neutron reaction rates, cooling circuit parameters and operational scenarios. Section 5 provides the calculation output, focusing on both ACP masses and activities in different regions of the circuit. Finally, section 6 draws the overall conclusions and offers recommendations for future studies.

Description of the IBED loop

ITER in-vessel components require active cooling to effectively transfer the heat generated by the products of the D-T fusion reaction, i.e.
mainly alpha particles and neutrons. The main in-vessel components are the First Wall-Blanket (FW-BLK) [17,18] and the divertor (DIV) [19,20]. Additional in-vessel components are the In-Vessel Coils (IVCs), mounted on the Vacuum Vessel (VV) surface right behind the FW-BLK, and the Diagnostics and Auxiliary Heating Systems installed in the equatorial and upper ports. The Integrated Blanket ELMs and Divertor (IBED) Primary Heat Transport System (PHTS) [16,21] is designed to provide cooling to all the above-mentioned in-vessel components, adequately removing the heat coming from the plasma and hence avoiding unwanted high-temperature transients. This system can operate at higher pressure and temperature than those required in plasma mode [22], to enable water baking and to purge the IVCs of tritium and other impurities. The baking loop foresees a heater and a heat exchanger, and is connected in parallel to the main loop. A Chemical and Volume Control System (CVCS) is connected to the PHTS through two linear headers, to remove ions and particles from the coolant (through resin beds and mechanical filters, respectively) and to enable coolant chemistry control. The IBED loop considered in this work includes the above-mentioned systems providing a barrier to both water and ACP release: figure 1 below shows a simplified flow diagram of the IBED loop with the details of the ratio of the total flow rate in the different branches of the circuit during both dwell/plasma burn and baking operations. Additional auxiliary systems of the IBED PHTS are the draining and drying circuits, not considered in the present study since OSCAR cannot simulate the discharge of one circuit into another. However, the transfer of ACPs to the drain tanks through the draining connections can and shall be addressed for both maintenance and accidental scenarios; the results in terms of ACP concentration in the coolant shown in section 5 can enable such an estimation. As shown in figure 1 below, the IBED loop can be sub-divided into 4 areas:
- In-flux regions, i.e. the regions of the loop within the bioshield activated by the neutrons (inside the red rectangle in the figure);
- Out-of-flux regions, i.e. portions of the IBED loop belonging to the PHTS outside the bioshield, including piping, valves and main Heat eXchangers (HXs);
- The baking circuit connected in parallel to the PHTS (inside the orange rectangle in figure 1);
- The CVCS circuit connected in parallel to the PHTS (inside the green rectangle in figure 1).
It is also worth recalling that IBED is the main system of the Tokamak Cooling Water System (TCWS) in terms of coolant inventory, flow rate and power to be transferred to the heat rejection system.
The IBED loop is a rather large and complex assembly of piping and cooling equipment (e.g. valves, heat exchangers) containing activated coolant and corrosion products, and it represents an unprecedented challenge in terms of inspection and maintenance activities [8]. Typical maintenance operations in ITER will be performed 12 d after the plasma shutdown; in such a scenario, the ACPs represent the main source term for the ORE in the areas in which the primary cooling systems are installed [2]. This is due to the presence of gamma-emitting radioisotopes, such as Co-60, in the deposits of corrosion products on the inner surfaces of pipes, valves, pumps and heat exchangers. For such components, hands-on operations are considered in the ORE assessment. In addition, the ACPs contribute to both the generation of radiological waste and the source term for accidental scenarios (e.g. a loss of coolant event). Therefore, the ACP assessment for IBED is one of the main indicators of the overall safety performance of the ITER project.

The supply and return pipelines for the FW-BLK outside the bioshield, the so-called 'Jungle Gym', constitute a good example of an ORE hazard. There are 18 Jungle Gyms around the tokamak and each one is equipped with several valves. This complexity could potentially yield ACP-contaminated wet surfaces and significant exposure times to perform inspection/maintenance tasks. Therefore, the Jungle Gym return region is selected as one of the representative areas for the ORE impact in section 5 below.

Description of OSCAR-Fusion

The OSCAR-Fusion code is the TCWS PHTS version of the OSCAR code initially developed for the PWR reactor cooling system [9]. Since the coolant used in the PHTSs and in PWRs is the same, and the main differences concern the materials (presence of copper alloy) and the neutron spectrum and flux (see also section 4.1 and [23]), the ACP transfer modelling in the OSCAR-Fusion code is identical to that in the OSCAR code. Therefore, the OSCAR-Fusion code adopts the same control-volume approach for modelling, which can be summarized as follows:
- The systems are divided into discrete control volumes or regions based on their geometric, thermal-hydraulic, neutronic, material and operational characteristics;
- Each control volume can encompass six media: metal, inner oxide, outer oxide/deposit, particles, ions and filter (including ion exchange resins and particle filters);
- The code considers the following elements: Cr, Mn, Fe, Co, Ni, Zn, Zr, Ag, Sb and Cu, along with their respective radioisotopes;
- For each isotope (both stable and radioactive) in each medium of each region, a system of mass balance equations of the form dm_i/dt = J_i^advection + Σ J_i^transfer is solved, where m_i is the mass of isotope i in a given medium (kg), t the time (s), J_i^advection the advection term of isotope i (kg s−1) and J_i^transfer a transfer mass rate of isotope i between two media or two isotopes (kg s−1).
The main transfer mechanisms accounted for in the code include corrosion-release, dissolution, precipitation, erosion, deposition, advection, purification, activation and radioactive decay (see figure 2). The dissolution-precipitation model was enhanced in version 1.4, enabling OSCAR to more accurately calculate the incorporation of minor species (e.g. Co-60) into oxides (see [12] for a detailed description of this new model and the other main ones). In past ITER analyses, the PACTITER [24] and OSCAR version 1.3 codes were used, whereas the IO has been using OSCAR-Fusion v1.4.a since June 2022.
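To make the control-volume bookkeeping above concrete, the following is a minimal sketch, not the OSCAR-Fusion model itself: a single isotope is tracked in three media of one region with hypothetical first-order transfer coefficients, a release source, a purification sink and radioactive decay, and the coupled mass balances dm_i/dt = advection + Σ transfers are integrated with explicit Euler steps. All numerical values are placeholders.

```python
import numpy as np

# Minimal control-volume sketch (illustrative only, not the OSCAR-Fusion model).
# One region, three media: 0 = inner oxide, 1 = deposit, 2 = coolant (ions).
# k[a, b] is a hypothetical first-order transfer coefficient (1/s) from medium a to b.
k = np.array([[0.0, 1e-7, 2e-8],     # inner oxide -> deposit, -> coolant (dissolution)
              [5e-8, 0.0, 1e-7],     # deposit -> inner oxide, -> coolant (erosion)
              [1e-7, 3e-7, 0.0]])    # coolant -> inner oxide, -> deposit (precipitation)
release = 1e-9       # kg/s, metal release source term into the coolant (assumed)
purification = 5e-7  # 1/s, removal rate from the coolant by the CVCS (assumed)
decay = np.log(2) / (5.27 * 3.156e7)  # 1/s, e.g. a Co-60-like half-life of ~5.27 y

def step(m, dt):
    """Advance the masses m (kg) in each medium by one explicit Euler step."""
    dm = np.zeros_like(m)
    for a in range(3):
        for b in range(3):
            dm[a] -= k[a, b] * m[a]   # losses from medium a
            dm[b] += k[a, b] * m[a]   # gains into medium b
    dm[2] += release - purification * m[2]  # source and CVCS sink act on the coolant
    dm -= decay * m                          # radioactive decay in every medium
    return m + dt * dm

m = np.array([0.0, 0.0, 0.0])  # start with a clean region
for _ in range(int(1e5)):
    m = step(m, dt=60.0)       # one-minute steps
print("mass (kg) in [inner oxide, deposit, coolant]:", m)
```

In the actual code, every region, medium, element and isotope carries its own set of such equations, with the transfer rates given by the mechanisms listed above rather than by constant coefficients.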
Differences in the parameters of the datasets

The main difference concerns the corrosion rates of SS and Cu alloy. For the calculations performed in the past at ITER, the MOOREA law was used for both SS and Cu. The MOOREA law is an empirical model that depends on the chemistry, temperature, material, manufacturing process and operational time. It was defined under PWR conditions for SS and Ni-base alloys. The model assumes an initially constant value for the first two months, followed by a gradual decrease over the next ten months, reaching a constant value thereafter. For SS, the MOOREA law depends on pH, saturation, temperature and time. For Cu alloy, it depends on saturation, temperature and time, and assumes the pH_T to be kept constant at 7. To reduce conservatism and to use corrosion laws defined in ITER conditions (e.g. [25]), OSCAR offers the possibility of using a time power law for the corrosion rate, which consistently decreases over time, instead of the constant corrosion rate assumed by the MOOREA law after 1 year of operation (see section 3.3). The power law in OSCAR can also depend on saturation and temperature, similarly to the MOOREA law. The differences in the other parameters of the datasets have a second-order impact.

Validation of OSCAR-Fusion

While no operating experience (OPEX) exists for fusion reactors, the OSCAR-Fusion code benefits from the validation of the OSCAR code, which is based on a unique worldwide OPEX: the EMECC expertise assessments of the PWR circuit contamination [26]. To date, approximately 430 campaigns of gamma surface activity measurements of the PWR primary and auxiliary systems have been conducted in 76 different units in France and abroad since 1971 using the EMECC system. In addition to the gamma surface activity measurements, the OSCAR results have been compared to other on-site measurements [27]: volume activities and chemical element concentrations. The validation of the OSCAR code encompasses a wide range of conditions, including water temperatures ranging from 20 °C to 350 °C, both laminar and turbulent flow regimes, reducing and oxidizing environments, as well as alkaline and acidic conditions [28]. Considering the similarities between the PHTS conditions in fusion reactors and PWRs, the OSCAR-Fusion code can benefit from the validation of the OSCAR code.
Corrosion and release rates for AISI and Cu-alloy

For both materials, the corrosion rates considered are time power laws, whose time dependence is defined by Belous [25]. The ratio between the release rates and the corrosion rates is taken to be between 0.25 and 0.3 (based on the ratio defined for the MOOREA law under PWR conditions). For AISI, to be consistent with the MOOREA corrosion rate used for PWR simulations under reducing conditions, the corrosion rate and the release rate in g·m−2·s−1 considered for the ITER simulations follow a power law proportional to f_T · f_sat · t^(−0.548), where t is the time in s, the exponent −0.548 is defined by Belous [25], f_T = 124100·e^(−51000/RT) is the temperature correction factor for T ⩽ 250 °C (defined for PWRs) and f_sat is the saturation correction factor (f_satCor = 3 and f_satRel = 50, defined for PWRs). Figure 3 presents the comparison of the corrosion rates for AISI steel at 250 °C under unsaturated conditions (f_sat = 1) between the power law used for the ITER simulations and the MOOREA law at a pH_T of 7.0 used for PWR simulations. For Cu-alloy, the corrosion rate and the release rate in g·m−2·s−1 considered for the ITER simulations follow a power law proportional to f_T · f_sat · t^(−0.721), where t is the time in s, the exponent −0.721 is defined considering Belous [25], f_T = 700·e^(−28500/RT) is the temperature correction factor for T ⩽ 250 °C and f_sat is the saturation correction factor (f_satCor = 3 and f_satRel = 50, the same values as those for AISI).

The temperature correction factor f_T is defined to give a reduction factor of 5 between 180 °C and 100 °C, which is consistent with the reduction factor of 3 on the release rate between 150 °C and 100 °C measured by [29]. The release rate is defined to have the same values as those measured by [29] (see figure 4). The corrosion rate calculated by OSCAR-Fusion is compared to Belous' data [25] and Obitz' measurements [30] in figure 5. Compared to Belous' data at 100 °C and 180 °C, the corrosion rate in OSCAR-Fusion is higher by a factor of 3-7. The difference could be justified by the conditions of Belous' experiment, where the water in the autoclave is static and probably saturated. As reported by Belous et al [25], corrosion varies by a factor of 15 with changes in flow velocity from 0 to 4 m s−1. In comparison to Obitz' data at 250 °C, the corrosion rate in OSCAR-Fusion is higher by a factor of 2-10, depending on the fluid velocity. However, the duration of Obitz' experiment is very short (1 week), and its extrapolation to the long term is uncertain. It should be noted that the corrosion and release rates used for AISI and Cu-alloy in this study do not depend on pH (the pH_T considered is 7.0), nor on the potentially cyclic, slightly oxidizing conditions due to radiolysis species not totally recombined by H2, nor on the fluid velocity. As reported by Belous [25], the corrosion rate may vary by a factor of ∼20 depending on the water chemistry and test conditions. The current IBED design considers a pH at 25 °C varying between 7 and 9 for plasma and baking operations, respectively. The pH is controlled through the injection of chemicals into the coolant, thus preventing excessive acidity and mitigating the risk of corrosion, especially during the baking operations.
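As a worked illustration of the time power laws quoted above, the sketch below evaluates an AISI-like corrosion rate of the assumed form C0 · f_T · f_sat · t^(−0.548), using the temperature correction factor from the text. The prefactor C0 is a placeholder, since the coefficient actually used in OSCAR-Fusion is not reproduced here, and the multiplicative way f_sat enters is likewise an assumption.

```python
import math

R = 8.314  # J/(mol K), gas constant

def corrosion_rate_aisi(t_s, T_K, saturated=False, C0=1.0):
    """Hypothetical time power law for the AISI corrosion rate.

    Assumed form: C(t) = C0 * f_T * f_sat * t**(-0.548), with the exponent and
    f_T = 124100 * exp(-51000 / (R*T)) taken from the text (valid for T <= 250 C);
    C0 is a placeholder prefactor, not the value used in OSCAR-Fusion.
    """
    f_T = 124100.0 * math.exp(-51000.0 / (R * T_K))
    f_sat = 3.0 if saturated else 1.0   # f_satCor = 3 quoted for saturated conditions
    return C0 * f_T * f_sat * t_s ** -0.548

# Relative decrease of the corrosion rate between 1 month and 1 year of operation
month, year = 30 * 86400.0, 365 * 86400.0
ratio = corrosion_rate_aisi(year, 523.15) / corrosion_rate_aisi(month, 523.15)
print(f"rate(1 y) / rate(1 month) at 250 C: {ratio:.2f}")  # ~ 12**-0.548 ~ 0.25
```

Because the prefactor and correction factors cancel in such ratios, the relative time decay is fixed by the exponent alone, which is why the power law is less conservative than the constant long-term MOOREA value.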
Building of the OSCAR-Fusion input

As for every model, building the input for OSCAR studies implies data selection and simplification: to ensure the validity of the results, it is therefore paramount to define which data are relevant. It is first necessary to define what 'relevant' means in the three main sub-domains of the OSCAR input preparation, which are the neutron reaction rates, the loop modelling (geometric, thermal-hydraulic and material characteristics) and the operational scenario specification. Among all the elements exposed to the neutron flux, only the ones transported outside of the bioshield can be defined as relevant; similarly, among the many reaction rates occurring in the under-flux materials, only the ones contributing to the activity or to the dose due to ACP exposure can be selected. This led to the selection of seven elements, Co, Cr, Cu, Fe, Mn, Ni and Zr, on the basis of their contribution to the dose rates in the regions outside the bioshield, i.e. the elements that can be transported outside the irradiated area. Table 1 shows the preliminary list of isotopes considered based on this element selection. This list of isotopes can be further reduced, depending on the study, by considering additional selection criteria, such as a relatively short half-life and/or a negligible contribution to the dose. Isotopes with a negligible contribution to the equivalent gamma dose and/or with a relatively short half-life are excluded for ORE studies. Additional criteria for accident and waste management scenarios might apply.

Neutron reaction rates assessment

The NUCLEO section of the input file for the OSCAR-Fusion code contains the data related to the interactions of the neutrons with the materials; it is broken down into three sub-sections:
- Element,
- Decay chain,
- Reaction rates.
The first two sub-sections are 'standard' ones, i.e. they contain isotope-specific information such as type of element, mass number, natural abundance and, for radioisotopes, decay type and decay constant. The reaction rate sub-section shall be adapted to the type of study, providing the specific reaction rates for all the relevant isotopes. In particular, the nuclide inventory variation during the entire simulation shall consider the contributions of both activation and de-activation rates, i.e. the disappearance of some relevant radionuclides by interaction with the neutrons or by decay. To provide an adequate input for NUCLEO, the ITER RSE group performed dedicated calculations to assess the neutron spectra during FPO in the FW-BLK cooling channels and hence calculated the reaction rates in four different regions of a FW-BLK module along the radial coordinate with respect to the plasma. The neutron energy spectra (figure 6), in the TRIPOLI 315 energy group structure, were evaluated in the beryllium layer of the first wall panel and in the cooling channels of the FW-BLK outboard equatorial module. The spectra were considered in the isotope inventory analysis for the ACP production in the cooling channels of the FW-BLK outboard equatorial module. The FISPACT code [31] was used to provide the reaction rates for the different materials in the FW-BLK regions with a good level of detail in terms of both isotope activation and spatial distribution. In addition, the disappearance of four radioactive isotopes (Co-58, Co-60, Cu-64 and Mn-56) was also investigated and the respective disappearance rates were included in the reaction list.
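These reaction rates amount to folding group-wise cross-sections with the group flux; the next paragraph formalizes the approach, and the following sketch shows the arithmetic on a made-up 5-group example (real calculations use the 315-group TRIPOLI structure and the FISPACT libraries).

```python
import numpy as np

# Illustrative 5-group example; all numbers are made up, not ITER spectra or libraries.
phi_g   = np.array([1e13, 5e13, 2e14, 8e13, 1e13])       # group fluxes, n cm^-2 s^-1
sigma_g = np.array([2.0, 0.8, 0.3, 0.05, 0.01]) * 1e-24  # group cross-sections, cm^2

# Per-atom reaction rate: sum over groups of sigma_g * phi_g (s^-1 per target atom)
rate_per_atom = np.sum(sigma_g * phi_g)

# Equivalent "collapsed" cross-section: flux-weighted average, so that
# rate_per_atom = sigma_collapsed * total_flux
sigma_collapsed = rate_per_atom / np.sum(phi_g)

print(f"per-atom reaction rate: {rate_per_atom:.3e} s^-1")
print(f"collapsed cross-section: {sigma_collapsed / 1e-24:.3f} barn")
```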
The rates of the single-step neutron-induced nuclear reactions per atom of a target nuclide are assessed using collapsed cross-sections, i.e. group-averaged cross-sections folded with the group flux. The nuclide balance equations, after folding the cross-sections with the group flux, contain the terms Y_ik, the yield of nuclide i from the fission of nuclide k. Regrouping the terms of the balance equation to separate the contributions of the single-step neutron-induced nuclear reactions with the target nuclides, the last two terms are used to derive:
- the rate of the specific single-step neutron-induced nuclear reaction A_j(n, F)A_i per atom of the target nuclide A_j, i.e. its contribution to dN_i/dt;
- the rate of the specific single-step neutron-induced fission reaction A_k(n, F)A_i per atom of the target nuclide A_k, i.e. its contribution to dN_i/dt.
The rates φ·σ have been assessed with FISPACT-2007/EAF2007/EAF2010. As a result, the reaction rate lists were selected for the OSCAR-Fusion calculation. The simulation of the activation of the materials during the hydrogen/helium plasmas foreseen in the PFPO is not considered in the present work, due to the expected extremely small contribution to the overall ACP source term in comparison to FPO. However, the PFPO campaigns are simulated to evaluate the overall evolution of the mass of the CPs within the loop prior to the start of the FPO.

IBED loop regions and materials

The OSCAR-Fusion input model for the IBED loop used in past calculations (i.e. with OSCAR-Fusion v1.3) has been updated on the basis of new data available from the TCWS designers. The main improvements of this update consist of:
- the addition of the equatorial port clients and their piping distribution at the equatorial level;
- the addition of the IVCs and upper port clients at the upper level;
- an increased detail of the HX volumes and wet surfaces (from one region to six);
- the introduction of the inlet and outlet isolation valve regions between the upper ring manifolds and the jungle gym regions;
- the checking and updating of the wet surfaces and hydraulic diameters for all the out-of-flux regions of the IBED loop, on the basis of input from the TCWS section;
- the optimization of the FW-BLK regions with respect to previous studies (smaller number of regions).
Because of the listed improvements, the updated model provides a generally higher level of detail compared to previous analyses. The model also benefited from the review and support of the CEA code experts.

Operational scenario

The operational scenario for the OSCAR-Fusion model is based on the ITER Research Plan 2018 [34]: such a scenario considers two PFPO campaigns and six FPO campaigns; it also specifies that 30 d of baking operation will occur prior to and after each campaign.

Pre-fusion power operation. Two PFPO campaigns including baking are simulated in the first cycle, when the plasma power and activation level are considerably lower. The purpose is to simulate the initial condition of the circuit and track the evolution of the non-activated CPs prior to the start of FPO. Therefore, the first-cycle simulation relies on a simplified approach: constant temperature (70 °C during PFPO and 240 °C during baking) and no activation of the materials. The very first day of operation is simulated as a 'zero power' burn (i.e. no activation in the in-flux regions), followed by 29 d of baking, to correctly initialize the calculation. Table 3 below summarizes the operational duration for PFPO.

Fusion power operation.
The overall FPO is simulated through six identical FPO campaigns, each one generating 5 × 10^26 neutrons, corresponding overall to 4700 h of D-T plasma pulse duration and 3 × 10^27 generated neutrons [1]. The sum of the neutron generation during FPO 1, 2 and 3 corresponds to 5 × 10^26 neutrons; this value corresponds to the neutron generation in each subsequent FPO campaign (i.e. FPO 4-8) [34]. To reduce the computational time and simplify the model, the first three FPO campaigns are therefore grouped into one. At the end of the last FPO campaign plus baking, an additional 12 d of cold shutdown are simulated to track the evolution of the ACPs due to natural decay after shutdown. Hence the overall duration of the simulation amounts to 5549 d, corresponding to approximately 15 years of continual operation. Table 4 below summarizes the operational duration for FPO.

Period parameters used in OSCAR. The thermal-hydraulic parameters of the IBED PHTS have been modified based on input from the TCWS design team; additionally, lithium injection is simulated during FPOs and baking operation to keep the pH_T at 7 (or slightly above 7 during the FPO dwell time). Table 5 below summarizes the main thermal-hydraulic parameters used in the model.

Preliminary studies

4.4.1. ORE contribution: 'the Zirconium case'. A preliminary calculation has been performed to estimate the contribution of the different elements to the gamma dose rate as a function of time. In particular, the Zr element contribution to the total dose rate was checked, since previous OSCAR-Fusion studies for ITER did not include Zr in the list of elements undergoing neutron activation. For this test, gamma contact isotope dose rate coefficients for a DN250 Schedule 80 pipe (i.e. one manifold of the jungle gyms) were first calculated by CEA through the MERCURE code [35] and entered in the OSCAR-Fusion input dataset. Then, the OSCAR-Fusion code calculated the dose rate generated by each isotope by multiplying the isotope dose rate coefficient by the isotope surface activity of a jungle gym pipe during the operation; the isotope dose rates on the last day of the D-T operation (EoL) are considered. Figure 8 below shows the normalized isotope contributions to the maximum dose rate from one second to 100 years after the EoL, with a cut-off at 10^-6; we can observe the constant dominant contribution of Co-60 to the overall dose rate and the absence of Zr-93 and Zr-95 from the list of main contributors. Thus, for ORE studies:
- among all ACP isotopes, Co-60 is the key contributor to the dose rates;
- zirconium can be excluded from the list of elements to be considered for ORE studies, to optimize the calculation time. However, in the case of significant corrosion-erosion of CuCrZr, this simplification might need to be revisited.

Homogenization of the operation. The simulation of ITER operation in OSCAR-Fusion represents a challenge in terms of computational needs. The complete sequence of operation requires many periods in the OSCAR input, which negatively affects the ergonomics of the input/output (slowing down pre- and post-processing actions) as well as the time required to perform a complete calculation. To overcome this issue, the current approach simplifies the reference operational scenario by grouping different modes, significantly reducing the number of periods and hence optimizing the calculation time.
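The risk of such grouping, quantified by the parametric study that follows, can be anticipated with a toy activity balance: lumping all plasma pulses at the end of a campaign suppresses the decay that a short-lived isotope undergoes during the dwell periods. A minimal sketch with a made-up production value and schedule (the half-life is an approximate Cu-64-like value):

```python
import math

half_life_s = 12.7 * 3600.0           # s, a Cu-64-like half-life (~12.7 h, approximate)
lam = math.log(2) / half_life_s
production_per_pulse = 1.0            # arbitrary activity units produced by one pulse
n_pulses, dwell_s = 32, 24 * 3600.0   # 32 sessions, one day apart (made-up schedule)

# Detailed scenario: each pulse decays over the remaining dwell periods of the campaign.
detailed = sum(production_per_pulse * math.exp(-lam * (n_pulses - 1 - k) * dwell_s)
               for k in range(n_pulses))

# "Brutal" scenario: all pulses lumped at the very end of the campaign, no dwell decay.
brutal = n_pulses * production_per_pulse

print(f"end-of-campaign activity, detailed: {detailed:.2f}")
print(f"end-of-campaign activity, lumped:   {brutal:.2f}  (overestimate x{brutal/detailed:.1f})")
```

Long-lived isotopes such as Co-60 are barely affected by this compression, which is why the homogenized scenarios only diverge during and shortly after the plasma pulses.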
However, such a simplification implies the potential loss of information that might influence the results. Therefore, we performed a preliminary parametric study to evaluate the impact of the scenario homogenization on the results. Three operational scenarios were investigated with OSCAR: B (Brutal), C (Compromise) and D (Detailed). All the scenarios consider:
- a loop 'conditioning' of 1320 d, to simulate the deposit and oxide formation in a brand-new loop operating for both PFPO campaigns and baking operations;
- one typical FPO campaign (32 plasma sessions corresponding to a neutron budget of 5 × 10^26 neutrons), including 30 d of baking at the end of the plasma operation;
- a final cold shutdown (CS) of 100 d with no plasma and the coolant flowing at 70 °C.
The details of the FPO operation for the three scenarios are shown in table 6. As shown in figures 9 and 10, compared to scenario D, both B and C show higher activity values during the plasma pulses due to the 'compression' of the burn phase at the end of the campaign and of the sessions, respectively, and hence to an underestimation of the natural decay of the isotopes. However, scenarios C and D show very good agreement from the start of the baking operation and the consequent final cold shutdown phase onwards, whereas scenario B significantly overestimates the activity in the loop. Hence, scenario C is selected for the simulation of the entire operational campaign, corresponding to the production of 3 × 10^27 neutrons and 4700 h of D-T plasma. For completeness, figure 11 shows that, in terms of mass, the three scenarios are in practically perfect agreement: this is because the mass of CPs dominates the overall mass with respect to the mass of ACPs (see also section 5.2) for both the in-flux and out-of-flux regions.

OSCAR-Fusion results: ACPs

This section shows the results obtained with the OSCAR-Fusion code in terms of mass and activity in the system due to the ACPs. The contribution to the ORE of some relevant isotopes is also shown as an example. In the following, we will refer to EoL as the last day of D-T plasma operation, whereas EoL + Baking represents the last day of the final baking operation.

Corrosion and release rates

To illustrate the corrosion and release rates used in OSCAR-Fusion (see section 3.4), figures 12 and 13 show the corrosion and release rates for Cu-alloy (HV_CuAlloy) and AISI (HV), respectively. In general, the Cu-alloy corrosion rate is higher than the AISI one; furthermore, the AISI corrosion and release rates show relatively larger fluctuations in correspondence with the plasma pulse operations, due to the temperature changes.
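The temperature-driven fluctuations just mentioned follow directly from the Arrhenius-type correction factor f_T introduced in section 3.4; the short sketch below evaluates the AISI f_T at two illustrative coolant temperatures (the temperature values are assumptions, not the actual IBED operating points).

```python
import math

R = 8.314  # J/(mol K)

def f_T_aisi(T_K):
    """Temperature correction factor quoted for the AISI power law (valid for T <= 250 C)."""
    return 124100.0 * math.exp(-51000.0 / (R * T_K))

T_dwell, T_burn = 343.15, 393.15   # K; illustrative dwell/burn coolant temperatures (assumed)
ratio = f_T_aisi(T_burn) / f_T_aisi(T_dwell)
print(f"f_T(burn) / f_T(dwell) = {ratio:.1f}")  # roughly an order-of-magnitude swing
```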
ACP inventory in the IBED PHTS

The comprehensive inventory of CPs can be categorized into two distinct classes. The immobile media comprise the inner oxide and deposit contamination, also referred to as surface activity. In the context of the ORE, these layers predominantly contribute to the radiation dose originating from ACPs, and to the inventory when managing radiological waste; their removal might require challenging engineering measures, such as the installation of a bespoke chemical cleaning loop. Conversely, the mobile media contribute to the coolant contamination, also referred to as volume activity. Moreover, they contribute to the accumulation of radioactivity within the resins and filters present in the CVCS [36]. Co-60 is present in the deposits; however, its mass is about a factor of 10^3 lower than the total mass of the cobalt deposit (see figure 15); for Co-57 and Co-58 the ratio is even larger, respectively 6 and 7 orders of magnitude below the cobalt element. Similar ratios also apply to other elements and radioisotopes (e.g. Fe, Cr). Thus, we can state that the overall mass of the CPs is primarily driven by the non-activated inventory; hence defining the total mass of CPs as the 'ACP mass' is misleading from a scientific point of view and simply wrong from an administrative one (i.e. safety objectives or limits on ACPs shall refer to activity rather than mass). For these reasons, ACP limits shall be generically defined based on overall and/or isotope-specific activity. Nevertheless, the overall CP mass is safety-relevant information, for:
- some maintenance operations (e.g. the frequency of filter replacement and resin regeneration), as they are affected by the overall mass of CPs;
- monitoring the mass transport within the circuit before nuclear operation, which will give relevant information on the expected level of contamination during FPO;
- enabling the testing and optimization of established, bespoke good practices to reduce the contamination during the D-T operation.
Figure 16 below shows the concentration of the elements as ions and particles in the coolant (the CVCS region upstream of the filters and resins is chosen as the representative one): the Cu concentration is the highest during the PFPO and FPO operations, while the Fe element dominates during the baking. The ion concentration increases during the baking due to the generally higher release rates; section 4.3 shows this phenomenon for Co-60.

Activity. Figure 17 shows the total activity (inner oxide + deposit) for the in-flux and out-of-flux regions: the observable fluctuations occurring during plasma operation are due to the natural decay of the short-lived radioisotopes during the dwell time (i.e. mainly Cu-64 and, secondarily, Mn-56); the rather stable behavior of the overall activity in the out-of-flux regions is due to the contributions of medium-lived (Co-58 and Fe-55) and long-lived (Co-60) isotopes. Among all the isotopes considered in this study, the dominant contributor to the ORE is Co-60 (see also section 4.2). Therefore, it is important to understand the contribution of the Co-60 loop contamination (i.e. inner oxide and deposit) to the overall activity in the out-of-flux regions. Figure 18 compares the Co-60 contribution to the overall activity (all the radioisotopes considered in this study) in the out-of-flux regions for both deposit and inner oxide: in general, the deposit contribution to the activity is 2 orders of magnitude higher than that of the inner oxide; furthermore, the Co-60 deposit contribution to the overall activity in the out-of-flux regions at EoL is significant (∼50%).
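The mass-versus-activity distinction discussed above follows from the specific-activity relation A = (ln 2 / T½)·(m/M)·N_A: a negligible Co-60 mass still corresponds to a large activity. A short check (the Co-60 half-life and molar mass are standard values; the deposit mass is a made-up example):

```python
import math

N_A = 6.022e23           # 1/mol, Avogadro constant
T_half = 5.27 * 3.156e7  # s, Co-60 half-life (~5.27 years)
M = 60.0                 # g/mol, Co-60 molar mass

def activity_bq(mass_g):
    """Activity of a pure Co-60 mass: A = lambda * N = (ln2 / T_half) * (m / M) * N_A."""
    return (math.log(2) / T_half) * (mass_g / M) * N_A

m_co60 = 1e-3  # g, a made-up example of Co-60 mass in deposits (1 mg)
print(f"{m_co60 * 1e3:.1f} mg of Co-60 corresponds to {activity_bq(m_co60):.2e} Bq "
      f"({activity_bq(m_co60) / 1e9:.1f} GBq)")
```

This is why a CP inventory whose mass is overwhelmingly stable metal can still carry an activity dominated by a trace of Co-60, and why limits are better expressed in activity than in mass.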
As figure 19 shows, the surface activity in the in-flux regions is obviously significantly higher than in the out-of-flux regions. For the latter, a significantly higher surface activity value is observed for the regions exposed to the baking operation (in light green) in comparison to the bypassed ones (in blue and light blue; the regions in light grey represent the CVCS filter and resins). Table 7 shows the comparison for different out-of-flux regions of the IBED loop: even though the surface activity for the HX regions is more than a factor of 2 lower than that of the corresponding Jungle Gym and in-bioshield piping for both hot and cold legs, the overall activity in the HX regions is significantly higher (by one order of magnitude) due to the huge wet surface. However, the contribution of the HXs to the ORE shall also consider the self-shielding effect due to the tube material (i.e. stainless steel), which significantly reduces the gamma irradiation [37]. Furthermore, the difference in surface activity between the HX and Jungle Gym regions is due to the exposure of the latter to the baking temperature and the consequent higher contamination because of the precipitation of ions onto the deposit. Such a phenomenon contributes significantly to the overall contamination of the piping and equipment; since the cooling trains, and hence also the HXs, are bypassed during the baking, the absence of ion precipitation results in a lower surface contamination.

Coolant activity. The activity in the coolant, or volumetric activity, is due to the concentration of ions and particles transported by the coolant. As figure 20 shows, such a concentration is homogeneous within the loop in which the coolant flows during a specific operational mode. As an example, figure 16 shows the coolant activity in the CVCS piping (i.e. upstream of the CVCS filters and resins); during burn mode the overall activity is significantly higher (by two orders of magnitude) than during baking and shutdown operations; also, the contributors to the overall activity differ between these two modes. This is mainly due to the erosion of the copper deposit during the plasma burn phases (see section 5.4) and the subsequent release of particles to the coolant. Such a phenomenon depends on the relatively high velocity of the coolant flowing over copper surfaces during plasma burn modes. During burn (see figure 21), Cu-64 is the dominant contributor to the volumetric activity, followed by Cu-66 and Mn-56, whereas in baking and shutdown modes Fe-55 and Mn-54 are the two most important contributors, followed by Co-60. The predominance of Fe-55 and Mn-54 during the off-plasma modes is the consequence of the large wet surface of stainless steel simulated in the model: in fact, both Fe-55 and Mn-54 are generated by the activation of stable Fe atoms. Figures 22 and 23 show the dominant contributions of Fe-55 and Co-60 to the total activity in the ion exchange resins and particle filters, respectively.
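The shift of the dominant contributors between burn and off-plasma modes, and the Co-60-driven dose rates discussed in the next section, can both be anticipated from simple decay bookkeeping: the sketch below decays a set of isotope surface activities over 12 d using approximate, commonly quoted half-lives and then weights them with purely hypothetical contact dose-rate coefficients (the activities and coefficients are illustrative placeholders, not OSCAR-Fusion or MERCURE outputs).

```python
import math

# Approximate half-lives in days (commonly quoted values)
half_life_d = {"Cu-64": 0.53, "Mn-56": 0.107, "Fe-59": 44.5, "Co-58": 70.9,
               "Mn-54": 312.0, "Fe-55": 1002.0, "Co-60": 5.27 * 365.25}
# Surface activities at shutdown (MBq/m^2) and contact dose-rate coefficients
# ((mSv/h) per (MBq/m^2)) -- placeholders for illustration only
act_0 = {"Cu-64": 5000.0, "Mn-56": 3000.0, "Fe-59": 20.0, "Co-58": 60.0,
         "Mn-54": 30.0, "Fe-55": 400.0, "Co-60": 500.0}
coeff = {"Cu-64": 1.5e-4, "Mn-56": 2.0e-3, "Fe-59": 9.0e-4, "Co-58": 8.0e-4,
         "Mn-54": 7.0e-4, "Fe-55": 0.0, "Co-60": 2.0e-3}   # Fe-55: low-energy X-rays only

t = 12.0  # days after plasma shutdown (typical start of maintenance)
dose = {}
for iso, a0 in act_0.items():
    a_t = a0 * math.exp(-math.log(2) * t / half_life_d[iso])  # decayed surface activity
    dose[iso] = coeff[iso] * a_t

total = sum(dose.values())
for iso, d in sorted(dose.items(), key=lambda kv: -kv[1]):
    print(f"{iso:6s}: {d:.3e} mSv/h ({d / total:.1%} of total)")
```

With any plausible set of inputs, the short-lived copper and manganese isotopes have vanished after 12 d, leaving Co-60 as the dominant term of the weighted sum, which is the pattern reported below.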
Impact on the ORE

Figure 24 shows the calculated impact of the main contributors to the gamma dose rates for a study case: the Jungle Gym return region, assumed to be a DN250 Schedule 80 pipe. The total dose rate increases as the FPO campaigns proceed, reaching a maximum of 1 mSv h−1 at EoL. Although the ORE calculation requires more detailed studies [6,38,39], encompassing the overall maintenance scenario, the layout of the surrounding systems, the duration and type of maintenance operation and the position of the operators, it is possible to state that such dose rate levels represent a safety concern. In fact, Co-60 represents 99% of the total contribution to the dose rate after the second FPO, since:
- short-lived isotopes, such as Cu-64 and Fe-59, rapidly decay after the end of the plasma operation;
- medium-lived isotopes, such as Co-58, Mn-54 and Fe-59, contribute less than a factor of 1/100 to the total at EoL.
Hence, 12 d after the plasma shutdown, the dose rate for the cooling equipment outside the bioshield due to the ACPs is such as to represent a challenge in terms of ORE. As shown in figure 25, the dose rate coming from ACPs at the EoL in the out-of-flux regions can be estimated by considering only the surface activity due to Co-60. The maximum Co-60 concentration is reached in the divertor and equatorial return manifolds, about 530 MBq m−2, whereas the minimum is reached in the CVCS piping, 24 MBq m−2. The surface activity of the in-flux regions is shown in black since it is generally higher than the maximum of the out-of-flux regions.

Contamination transfer

The departure of ions and particles from the in-flux wet surfaces leads to the spreading of contamination within the loop; three mechanisms contribute to the spreading of contamination in the circuit: particle erosion, ion dissolution (inner oxide and deposit) and metal release of ions directly into the coolant. In addition, the abrasion mechanism for components with moving parts (like valves) can also contribute to the contamination transfer; however, since the valves are not supposed to contain Stellite, abrasion is not considered in the present work. Contaminating ions and particles are transported in the coolant and transferred to the out-of-flux regions through different mechanisms, such as deposition and precipitation. In the following section, we describe the contamination transfer from reference in-flux regions to out-of-flux regions, focusing on Co-60 given its predominant contribution to the ORE.

Origin of Co-60 and impact of cobalt concentration in contaminating the out-of-flux regions. To investigate the impact of the cobalt content in both in-flux and out-of-flux regions, a parametric study was carried out: the reference case has a Co concentration of 500 ppm for the in-flux regions and 2000 ppm for the out-of-flux ones. This was compared to five additional cases. Figure 26 shows the results of this comparison in terms of the Co-60 contamination of the Jungle-Gym return region. For the in-flux regions, the increase in Co to 2000 ppm for both AISI and Cu-alloys (SFCo20-20) results in an increase of about 90% in the Co-60 surface activity, whereas the increase is only 15% when Co is increased only for AISI (SFCo20). The decrease in Co to 200 ppm for both AISI and Cu-alloys (SFCo2-2) results in a decrease of 40%, compared to 25% for a decrease in Co only for AISI (SFCo2).
For the out-of-flux regions, the change in the Co content of AISI has an unexpected impact. An increase in Co to 2000 ppm (HFCo20) results in a slight decrease of 5% in the Co-60 surface contamination, and a decrease to 200 ppm (HFCo2) results in an increase of 40%. This unexpected effect occurs during the baking phases. The Co-60 released during baking precipitates much more when the out-of-flux Co content is lower (lower Co equilibrium concentration). On the other hand, as expected, during plasma operation the Co-60 increase slope is lower when the Co content of AISI is lower. Thus, we can conclude that most of the Co-60 is generated by the neutron activation of the Cu-alloy material in the in-flux regions, according to the two activation reactions Cu-63(n, α)Co-60 and Co-59(n, γ)Co-60.

Co-60 departure mechanisms from in-flux regions. Figure 27 shows the dominant phenomena for the HV region and figure 28 those for the HV-Cu alloy region. For the stainless-steel regions under neutron flux, the metal release of ions and the deposit dissolution drive the contamination. For the copper-alloy regions under flux, erosion dominates the particle transfer to the coolant during plasma pulses, whereas during baking the metal release of ions drives the contamination, similarly to the stainless-steel regions under flux.

Co-60 transfer to the out-of-flux regions. Figure 29 shows the dominant Co-60 transfer phenomena for the Jungle-Gym return region. Co-60 contamination in the Jungle-Gym return region is mainly due to ionic precipitation onto the deposit during baking; in pulsed operation, the driving mechanisms are particle deposition and ion precipitation on the inner oxide. It is worth focusing on the ionic precipitation onto deposits, since it is the driving mechanism of the dose estimated for the ORE. The current work shows that precipitation occurs at the isotopic level, i.e. it is limited to Co-60, whereas the elemental Co ion concentration in the coolant never reaches the Co equilibrium concentration with respect to the deposit. Figure 30 highlights this phenomenon of isotopic precipitation under unsaturated conditions of the chemical element (the isotopic dissolution-precipitation model described in [12]). By comparing the Co-60 ion concentration and the Co-60 equilibrium concentration in the deposit of the Jungle Gym return, it is possible to observe in figure 30 that the former exceeds the latter only during the baking phases: this means that most of the Co-60 contamination occurs during baking.

Conclusions and recommendations

This report provides the updated assessment of the mass and the activity of the IBED loop CPs and ACPs. These new data with regard to CP masses and ACP activities show the importance of specifying which portion of the CPs is activated, and the necessity to review the strategy for administrative limits of the ACP inventory. The present work confirms the predominant contribution of Co-60 to the ORE and highlights the significant impact of the baking operation in spreading Co-60 in the circuit because of precipitation. The focus is also on the origin of Co-60, primarily due to the presence of copper as base material for the in-vessel components, i.e.
in regions under neutron flux. Therefore, for future activities, measures aimed at reducing the dose to the workers should include efforts to limit or prevent such contamination mechanisms. This entails reducing the spreading of Co-60 outside the bioshield, which can be achieved by optimizing the water baking operation in terms of frequency and duration.

The corrosion law used in this study for copper is supported by experimental data in the temperature range 110 °C-250 °C for a limited duration (i.e. <200 h). We strongly recommend establishing bespoke experiments to validate the corrosion and erosion rates for Cu and Cu alloy at ITER-relevant conditions, i.e., baking temperature and duration, fluid velocity, reducing (or even slightly oxidizing) environment and pH_T equal to or higher than 7. Such experiments will yield reliable data on the loop contamination transfer mechanism due to the corrosion of the Cu-alloy during baking and will enable a refinement of the OSCAR-Fusion input parameters and a validation of the results. The authors believe that the success of these endeavours will depend, in part, on the ability to distribute the experimental workload through collaborative international efforts among various organizations, such as EUROfusion. Finally, the simulated injection of Li (instead of ammonia) for pH management purposes represents another recommendation for operating the IBED PHTS, based on the relevant experience from PWR operation.

Figure 1. IBED loop simplified flow diagram; the dotted lines represent the branches of the circuit in which 240 °C water circulates during baking.
Figure 3. Comparison of the corrosion rates for AISI between the power law used for ITER and the MOOREA law used for PWRs at T = 250 °C.
Figure 4. Comparison of the release rates for Cu-alloy between OSCAR and an experiment in the CORELE loop [29].
Figure 5. Comparison of the corrosion rates for Cu-alloy between OSCAR and experiments [25, 30].
Figure 6. Neutron spectra in the TRIPOLI 315 energy group structure in the cooling channels of the blanket module BLK15_S02 at the flattop of the nominal plasma shot with a fusion power of 500 MW; the spectrum in the beryllium layer is added as a reference.
Figure 7. IBED PHTS OSCAR-Fusion input model: the orange rectangle frames the baking loop regions, the green rectangle frames the CVCS regions and the red rectangle frames the in-flux regions.

4.2.1. Regions. The IBED loop can be broken down into 4 areas:
- In-flux regions, i.e. the regions of the loop within the bioshield activated by the neutrons;
- Out-of-flux regions, i.e. portions of the IBED loop belonging to the PHTS outside the bioshield;
- The baking circuit connected in parallel to the PHTS;
- The CVCS circuit connected in parallel to the PHTS.
Each area encompasses several regions, each region being defined as a function of geometric (e.g. wet surface), thermal-hydraulic (e.g. temperature) and material (e.g. AISI316 or copper, roughness) parameters. The current version of the input has 60 regions: 28 for the out-of-flux areas, 26 for the in-flux areas, 3 for the CVCS and 3 for the baking circuit. Figure 7 below gives an overall view of the IBED loop model elaborated through the OSCAR GUI.

Figure 10. Out-of-flux activity (TBq): comparison of scenarios B, C and D.
Figure 11. Corrosion products mass (kg): comparison of scenarios B, C and D.
Figure 14 shows the results in terms of the total mass of inner oxide and deposit of CPs (metallic elements) for the in-flux and out-of-flux regions, as well as the total mass of CPs trapped by the CVCS resins and filters. At the very beginning of the operation, there is a sharp increase in the mass in both the in-flux and out-of-flux regions: this is due to the 'passivation' of the circuit, meaning the formation of the inner oxide and deposit. During baking operation, the thickness of the deposits increases significantly in the in-flux regions.

Figure 14. Corrosion products mass (kg) in different regions of the IBED loop.
Figure 15. Mass comparison (g) between the Co element (total) and its stable and radioactive isotopes in the out-of-flux regions.
Figure 17. Total activity (TBq) for the in-flux and out-of-flux regions.
Figure 18. Activity comparison in the out-of-flux regions: all isotopes vs Co-60.
Figure 19. Surface activity in the IBED loop at EoL.
Figure 20. Coolant volumetric activity 12 d after the EoL plus 30 d of baking.
Figure 21. Coolant activity: detail of the main contributors to the total activity (MBq/t) during the last FPO campaign.
Figure 23. Activity (TBq) in the particles trapped in the CVCS filters.
Figure 25. Co-60 surface activity at EoL + Baking: the regions in black and grey represent loop regions with values above and below the selected color scale, respectively.
Figure 26. Impact of the cobalt concentration in the material, for both in-flux and out-of-flux regions, on the Co-60 contamination in the Jungle Gym return.
Table 2. Material composition and properties used in the IBED model.
4.2.2. IBED materials. The coolant circulating in the IBED loop interacts with different materials [32]: AISI316, AISI304, oxygen-free copper and copper alloy. To better simulate the variety of materials used in the different regions of the loop, it was chosen to simulate 4 types of AISI316, one type of copper alloy and one type of pure copper. AISI304 (typically used for the out-of-flux piping) has not been simulated, since its material composition is very similar to that of AISI316; therefore, all the regions with stainless steel are simulated as AISI316. The materials composition and properties are summarized in table 2.

Table 5. Thermal-hydraulic parameters during the different operational phases.
Figure 8. ACPs normalized contribution to the contact dose rate of a jungle gym primary pipe.
Table 6. Detail of the operational scenarios used to study the impact of the period homogenization.
Table 7. Surface and overall activity at EoL + Baking: comparison for hot and cold leg regions.
11,174.8
2024-05-17T00:00:00.000
[ "Engineering", "Physics", "Environmental Science" ]
Coulomb excitations of monolayer germanene

The feature-rich electronic excitations of monolayer germanene arise from the significant spin-orbit coupling and the buckled structure. The collective and single-particle excitations are diversified by the magnitude and direction of the transferred momentum, the Fermi energy and the gate voltage. There are four kinds of plasmon modes, according to the unique frequency- and momentum-dependent phase diagrams. They behave as two-dimensional acoustic modes at long wavelength. However, for larger momenta, they might change into another kind of undamped plasmon, become seriously suppressed modes within the heavy intraband e–h excitations, remain undamped, or decline and then vanish in the strong interband e–h excitations. Germanene, silicene and graphene are quite different from one another in the main features of the diverse plasmon modes.

The group-IV single-layer systems exhibit unusual electronic properties. The low-lying electronic structures near the K and K′ points (the first Brillouin zone in the inset of Fig. 1(b)) are dominated by the outermost p_z orbitals 16,17, although the buckled germanene and silicene are built from a weak mixing of sp2 and sp3 chemical bondings. They are characterized by gapless or separated Dirac-cone structures, depending on the existence of SOC. Graphene, without SOC, has two linear valence and conduction bands intersecting at the Dirac point 1,18. The SOCs in germanene and silicene can separate the merged Dirac points; that is, the intrinsic systems are narrow-gap semiconductors (E_g ~ 93 meV for Ge and ~7.9 meV for Si) 37. The application of a gate voltage (V_z) further induces the splitting of the spin-related energy bands 38,39. Moreover, the band structures, with a strong wave-vector dependence, exhibit anisotropic dispersions at sufficiently high energy (~0.2 eV for Ge; ~0.4 eV for Si) 37. This is closely related to the hopping integral (t) of the nearest-neighbor atoms. However, the effects of (SOC, V_z, t) on the energy bands are more pronounced in germanene than in silicene. Such critical differences lie in the fact that the larger 4p_z orbitals lead to a bigger SOC and buckling, but a smaller t. The main characteristics of the energy bands in germanene are expected to create diverse Coulomb excitations.

The tight-binding model, with the SOC, is used to calculate the energy bands of monolayer germanene, and the random-phase approximation (RPA) to study the π-electronic Coulomb excitations. The dependence on the magnitude (q) and direction (θ) of the transferred momentum, the Fermi energy (E_F) and the gate voltage is explored in detail. This work shows that there exist diverse frequency-momentum phase diagrams, directly reflecting the significant SOC and buckled structure. The single-particle excitations (SPEs) are greatly enriched by (q, θ, E_F, V_z), so that germanene is predicted to have four kinds of collective excitation modes. The main features of these plasmons are mainly determined by the strength of the intraband and interband Landau dampings and the existence of extra modes. Germanene differs markedly from silicene and graphene in its excitation spectra. The theoretical predictions could be verified by experimental measurements using EELS 56-62 and inelastic light scattering spectroscopy 63-66.
The tight-binding model

Monolayer germanene has a low-buckled structure with a weak mixing of sp2 and sp3 bondings; furthermore, the low-lying electronic structure is dominated by the 4p_z orbitals. There are two equivalent sublattices of A and B atoms (Fig. 1(a)), separated by a distance of l = 0.66 Å. Each sublattice forms a hexagonal lattice with a Ge-Ge bond length of b = 2.32 Å. Moreover, the SOC plays an important role in the electronic properties. The Hamiltonian is built from the sub-space spanned by the four spin-dependent tight-binding functions.

The π-electronic excitations are described by the magnitude and direction of the transferred momentum and by the excitation frequency, where θ is the angle between q and k_x (along ΓM in the inset of Fig. 1(b)). θ in the range [0°, 30°] is sufficient to characterize the direction-dependent excitation spectra because of the hexagonal symmetry. ε₀ = 2.40 is the background dielectric constant, and V_q = 2πe²/q is the bare Coulomb potential energy. The Fermi-Dirac distribution is f(E) = 1/[1 + exp((E − μ)/k_B T)], where k_B is the Boltzmann constant and μ is the chemical potential, corresponding to the highest occupied state energy in the metallic systems (or the middle energy of the band gap in the semiconducting systems). Γ is the energy width due to various de-excitation mechanisms, and is treated as a free parameter in the calculations.

Results and Discussion

Germanene exhibits a feature-rich band structure which strongly depends on the spin-orbit coupling, the wave vector, and the gate voltage. The unoccupied conduction band is symmetric to the occupied valence one about zero energy. Electronic states in the presence of SOC are doubly degenerate in the spin degree of freedom (Fig. 1(b)). They have up- and down-dominated spin configurations simultaneously. These two distinct spin configurations make the same contribution to the Coulomb excitations. Moreover, the SOC can generate the separation of the Dirac points and thus induce an energy spacing of E_D = 93 meV near the K point (the blue curves). The energy bands have parabolic dispersions near the band-edge states and then gradually become linear with increasing wave vector. However, at higher energy (|E^{c,v}| > 1 eV), they recover parabolic dispersions and have a saddle point at the M point with a very high density of states. It is noticed that only the low-lying states with |E^{c,v}| < 0.2 eV exhibit isotropic energy spectra, as indicated by the constant-energy contours in Fig. 1(c).

The main features of the low-lying energy bands are easily tuned by a gate voltage (or a perpendicular electric field). The Coulomb potential energy difference between the A and B sublattices further results in the destruction of the mirror symmetry about the z = 0 plane. The spin-dependent electronic states are split; that is, one pair of energy bands changes into two pairs. These conduction and valence bands are, respectively, denoted by 1^{c,v} and 2^{c,v} in Fig. 1(d), e.g., E^{c,v} at V_z = 0.5 λ_so (the red curves), where the effective SOC is λ_so = E_D = 92 meV. As for the electronic states near the K point, the pair of energy bands for the spin-down-dominated configuration is relatively close to zero energy. However, the opposite is true for those near the K′ point. As V_z grows, the first pair gradually approaches zero energy and E_D gets smaller.
E D vanishes at V z = λ so (the green curves), where intersecting linear bands are revealed, or the electronic structure has a pair of zero-spacing linear bands. With the further increase of V z (the dashed orange curves), the energy spacing of the parabolic valence and conduction bands is opened and enlarged. On the other hand, the second pair of energy bands, for the spin-up-dominated configuration, is away from zero energy. Apparently, the two split spin-dependent configurations are expected to greatly diversify the single-particle and collective excitations. The SPEs are characterized by the imaginary part of the dielectric function (ε 2 ). They strongly depend on the direction and magnitude of transferred momentum, Fermi energy, and gate voltage. The non-vanishing ε 2 corresponds to the interband and intraband SPEs consistent with the Fermi-Dirac distribution and the conservation of energy and momentum. ε 2 exhibits an asymmetric square-root divergent peak at the threshold energy for E F = 0 and θ = 0° (the black dashed curve in Fig. 2(a)), where v F = 3tb/2 is the Fermi velocity 37,46 . This structure comes from the interband excitations which are associated with the valence or the conduction band-edge states (the separated Dirac points). From the Kramers-Kronig relations between the real and the imaginary parts of the dielectric function, ε 1 exhibits a similar divergent peak on the left-hand side (Fig. 2(b)). It is always positive in the low-frequency range, implying that collective excitations hardly survive. The competition between the interband and intraband excitations is created by changes of the free carrier density. The former are gradually suppressed by the latter with increasing Fermi energy. The former and the latter could coexist at E F = 0.2 eV; they, respectively, produce a shoulder structure and a prominent square-root divergent peak in ε 2 (red curve). These two structures, which are closely related to the electronic excitations of the Fermi-momentum states, are separated by an excitation gap due to the energy spacing of the Dirac points. The strong asymmetric peak appears at ω ≈ v F q, directly reflecting the linear energy dispersion. This specific excitation energy is the upper boundary of the intraband SPEs (Fig. 4(a), discussed in detail later). Moreover, ε 1 exhibits logarithmic-divergent and square-root peaks, as shown in Fig. 2(b) 44 . The two zero points of ε 1 , if located where ε 2 is sufficiently small, are associated with collective excitations. The intraband excitations become the dominant channel in the low-frequency SPE spectrum at E F = 0.3 eV (blue curve). ε 1 and ε 2 have peak structures, in which the second zero point of the former is located where the latter is rather small. The intensity of the intraband excitation peaks becomes stronger with increasing E F . However, the zero point of ε 1 is then revealed at a higher frequency, accompanied by a reduced ε 2 . That is, a further increment of E F will reduce the e-h Landau dampings and enhance the frequency of the collective excitations. The change in the direction of transferred momentum leads to anisotropic SPEs. The constant-energy contours are vertically flipped between the K (Fig. 1(c)) and K′ points. The energy variation measured from them along the direction of θ = 30° is, respectively, smaller and bigger, compared with the θ = 0° case.
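Since ε 1 is obtained here from ε 2 through the Kramers-Kronig relations, a minimal numerical version of that transform may help; the trapezoidal principal-value treatment below is a crude sketch, not the procedure used in the paper.

```python
import numpy as np

def kramers_kronig_eps1(omega_grid, eps2, eps_inf=1.0):
    """eps1(w) = eps_inf + (2/pi) * P int_0^inf w' eps2(w') / (w'^2 - w^2) dw'.

    Crude principal-value evaluation: the singular grid point is simply skipped.
    """
    eps1 = np.empty_like(eps2)
    dw = np.gradient(omega_grid)
    for i, w in enumerate(omega_grid):
        integrand = omega_grid * eps2 / (omega_grid**2 - w**2)
        integrand[i] = 0.0                       # skip the singularity
        eps1[i] = eps_inf + (2.0 / np.pi) * np.sum(integrand * dw)
    return eps1

# Example: a single Lorentzian absorption peak in eps2 produces the expected
# dispersive structure in eps1 on its low-frequency side.
w = np.linspace(0.0, 2.0, 4001)[1:]
eps2 = 0.05 / ((w - 0.5)**2 + 0.05**2)
eps1 = kramers_kronig_eps1(w, eps2, eps_inf=2.4)
```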
In the vicinity of the K (K′) point, the highest intraband and the lowest interband excitation energies become lower (higher). As a result, each above-mentioned structure is changed into double structures at E F = 0.2 eV and q = 0.075 Å −1 (red curve versus green curve in Fig. 2(a)). Furthermore, the higher-frequency intraband peak might overlap with the first interband shoulder, instead of the excitation gap associated with the K point. The distinct excitations near the two valleys are also reflected in ε 1 , e.g., in the change of the special structures. With increasing θ, the widened frequency range of the e-h damping is expected to significantly affect the plasmon modes. The gate voltage can split the energy bands and thus diversify the channels of SPEs. There are eight kinds of excitation channels, of which two and six kinds, respectively, belong to the intraband and the interband excitations. Specifically, the four interband excitation channels 2 v → 1 c , 2 c → 1 c , 1 v → 2 c and 1 c → 2 c in the low-frequency excitation range are negligible, mainly owing to the almost vanishing Coulomb matrix elements (the square term in Eq. (2)). The V z -dependent dominating e-h excitations come from the intraband excitations (2 c → 2 c , 1 c → 1 c ) and the interband excitations (1 v → 1 c , 2 v → 2 c ). They, respectively, cause ε 2 to exhibit two square-root divergent peaks and two shoulder structures, as shown at V z = 2λ so , E F = 0.2 eV, θ = 0° and q = 0.075 Å −1 (yellow curve). The second peak, due to the 1 c → 1 c channel, has an excitation frequency almost independent of V z . Moreover, the threshold excitation energies of the 1 v → 1 c and 2 v → 2 c interband excitations, which mainly come from the valence Dirac point and the Fermi-momentum state, are determined by the effective SOC λ so , the gate voltage V z and the Fermi energy E F 37 . They will determine the enlarged boundaries of the interband SPEs (Fig. 5). The frequency of the former shoulder structure becomes lower as V z is reduced from 2λ so to λ so and then higher with the further decrease from λ so to 0. Differently, that of the latter declines monotonically with the decrease of V z . However, the zero-point frequency in ε 1 , corresponding to the 1 c → 1 c e-h peak, hardly depends on V z . This will induce the suppression of the interband Landau dampings on the plasmon modes. The loss function, defined as Im[−1/ε(q, ω)], exhibits a prominent peak at q = 0.045 Å −1 , θ = 0° and E F = 0.2 eV, as indicated by the yellow dashed curve in Fig. 3(a). It corresponds to the collective excitations of free carriers in the conduction bands. With the increase of momentum, the plasmon frequency (ω p ) grows because of the higher-energy zero point in ε 1 , while its intensity is reduced by the stronger interband e-h damping. Moreover, the prominent peak changes into a composite structure of a narrow sharp peak and a shoulder structure. The loss spectrum is enriched by the distinct directions of momentum transfer. An extra peak, marked by the triangle in Fig. 3(b) (q = 0.075 Å −1 and E F = 0.2 eV), comes to exist with the increase of θ. It arises from the free carriers in the K valley and is damped by the SPEs due to the K′ valley. The enlarged frequency range of the e-h dampings will cover the excitation gap at larger θ's. As a result, the sharp peak is seriously suppressed and only the shoulder structure due to the K′ valley can survive. The variation in free carrier density could modify the SPE channels and thus drastically change the plasmon peaks in the loss spectrum.
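Assuming the eps_rpa function from the earlier sketch is available, the loss function Im[−1/ε(q, ω)] and the plasmon dispersion (taken as the loss-function maximum at each q) can be traced as follows; this is an illustrative scan, not the calculation behind Fig. 3.

```python
import numpy as np
# Assumes eps_rpa(q, omega) from the earlier RPA sketch is in scope.

def loss(q, omega):
    """Energy-loss function Im[-1/eps(q, omega)]."""
    return np.imag(-1.0 / eps_rpa(q, omega))

def plasmon_dispersion(q_list, omega_grid):
    """Locate the plasmon peak as the maximum of the loss function at each q."""
    dispersion = []
    for q in q_list:
        spectrum = np.array([loss(q, w) for w in omega_grid])
        dispersion.append((q, omega_grid[np.argmax(spectrum)], spectrum.max()))
    return dispersion   # (q, omega_p, peak intensity) triples

# Example scan over small momenta
qs = np.linspace(0.005, 0.075, 8)
ws = np.linspace(0.01, 0.6, 300)
for q, wp, amp in plasmon_dispersion(qs, ws):
    print(f"q = {q:.3f} 1/A   omega_p = {wp:.3f} eV   peak = {amp:.2f}")
```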
Without free carriers, only a shoulder structure corresponding to the interband SPEs is revealed in the excitation spectrum at q = 0.075 Å −1 and θ = 0° in Fig. 3(c) (green curve); that is, the plasmon peak is absent. When the free carrier density gradually grows, the loss spectrum exhibits a narrow sharp plasmon peak and a shoulder structure, which is attributed to the strong interband e-h dampings. With the further increase of E F , those two structures are enhanced, and then only a prominent plasmon peak remains. This is due to the fact that the interband excitations are progressively replaced by the intraband excitations. The gate voltage can diversify the spin-dependent SPE channels and thus enrich the loss spectra (Fig. 3(d-f)). The sharp plasmon peak might change into a lower and broader one under the V z -enlarged SPE range (Fig. 3(d)). This indicates the transformation between two different kinds of plasmon modes (Fig. 6(a)). There exist more low peak structures induced by the extra excitation channels, especially for the large-θ case (Fig. 3(e)). With the increase of V z , the intensity of the prominent plasmon peak is greatly enhanced for large E F (Fig. 3(f)). However, a monotonic V z -dependence is absent in the other cases. The variation of the plasmon intensity is determined by whether the interband excitation energy is away from or close to the plasmon frequency. Free carriers can induce unique SPEs and plasmon modes, as clearly indicated in the (ω, q)-dependent phase diagrams. The boundaries of the intraband and interband SPEs are mainly determined by the Fermi-momentum states and the Dirac points, as shown in Fig. 4(a-c). There exist three kinds of collective excitations with relatively strong intensities, classified according to the Landau damping. First, the free-carrier-induced plasmon is undamped at small q for E F = 0.2 eV and θ = 0°. The excitation gap, which is formed between the intraband and the interband SPEs, is responsible for this behavior (the white dashed and solid curves). In addition, the (q, ω)-range of the excitation gap grows with the increment of the Dirac-point spacing. With increasing q, this plasmon will enter the interband excitation region and experience a gradually enhanced damping. Specifically, the partially undamped plasmon exhibits a sharp peak beyond a sufficiently large q; that is, this mode can survive in the frequency gap between the interband and the intraband excitation boundaries. Secondly, at large θ (θ ~ 30°) (Fig. 4(b)), the larger-q undamped plasmon is replaced by a seriously damped plasmon in the intraband excitation region (the red curve). The weak plasmon intensity is due to the replacement of the excitation gap by the extended and overlapped excitation regions. Moreover, for a sufficiently low Fermi energy (E F = 0.1 eV in Fig. 4(c)), the plasmon remains an undamped mode even at large q. This plasmon appears in the enlarged excitation gap determined by the comparable E F and E D . The above-mentioned plasmon frequencies have a √q-dependence in the long-wavelength limit, a feature of the two-dimensional acoustic-like plasmon modes. A similar mode has been observed in a 2D electron gas 67 . The gate voltage leads to more complicated SPE boundaries because of the split energy bands, as revealed in Fig. 5(a-f). I, II, III and IV, respectively, represent the 2 v → 2 c , 1 v → 1 c , 1 c → 1 c and 2 c → 2 c SPEs. Specifically, V z can create the fourth kind of plasmon mode during the dramatic variation of the energy gap.
For V z = λ so , E F = 0.2 eV and θ = 0°, an undamped plasmon occurs at small q (Fig. 5(a)). This plasmon is dominated by the first pair of the V z -induced energy bands without energy spacing. At larger q, it will compete with the interband SPEs and gradually disappear. The damping behavior is closely related to the vanishing excitation gap and the enlarged SPE regions. Moreover, the main characteristics of the (ω, q)-phase diagrams are insensitive to changes in E F and θ, as indicated in Fig. 5(b,c). A similar plasmon mode is also observed in extrinsic graphene 47,48,55,56 , while germanene has the stronger intensity because of the weakened e-h damping. With the increase of V z (Fig. 5(d-f)), the main plasmon modes are similar to those at zero voltage (Fig. 4(a-c)). However, the SPE regions become more complicated, so that extra plasmon modes can survive in the phase diagrams, e.g., two weak plasmon modes are heavily damped by the intraband SPEs in Fig. 5(e). The V z -dependent excitation spectrum can provide more information on the collective and single-particle excitations. The plasmon modes and the e-h boundaries are dramatically altered during the variation of V z . The undamped plasmon at larger q vanishes within a certain range of V z , e.g., 0.5 λ so ≤ V z ≤ 1.5 λ so (Fig. 6(a)), being attributed to the serious suppression of the 1 v → 1 c SPEs. It is replaced by the fourth kind of plasmon, which occurs in the absence of 1 c → 1 c SPEs. With the further increase of V z , the undamped plasmon is recovered. The dependence of the plasmon modes on V z is very sensitive to the change in the direction of transferred momentum (Fig. 6(b)). The undamped plasmon is absent at low and high voltages. However, the fourth kind of plasmon can survive in the middle V z -range. There are certain important differences among germanene, silicene and graphene, mainly owing to the strength of the spin-orbit interaction and the buckled structure. The Hamiltonians of silicene and graphene are, respectively, given in refs 37 and 48. For the extrinsic systems, all of them can exhibit similar 2D plasmon modes in the long-wavelength limit (Figs 5 and 7), in which the effects due to the SOI-induced energy spacing are fully suppressed. But at larger transferred momenta, the excitation gap can create an undamped plasmon mode in germanene. The first kind of plasmon is absent in silicene and graphene because of their rather small SOC-induced energy spacing (Fig. 7(b,d)). The third kind of plasmon is revealed in germanene and silicene, with the requirement of an extremely low Fermi energy (~0.01 eV in Fig. 7(e)) for the latter. Also, silicene can exhibit the fourth kind of plasmon at a lower gate voltage, e.g., V z equal to the effective SOC (λ so = 17.5 meV/Å in Fig. 7(f)). In short, germanene possesses four kinds of plasmon modes under only a small variation in the Fermi energy. High-resolution EELS can serve as the most powerful experimental technique to explore the Coulomb excitations in 2D materials. The experimental measurements on single- and few-layer graphenes are utilized to comprehend the collective excitations due to the free carriers, the π electrons, and the π + σ electrons. They have identified the low-frequency acoustic plasmon (ω p < 1 eV), accompanied by the interband Landau damping at larger momentum 56 . The interband π and π + σ plasmons are, respectively, observed at ω p ≳ 4.8 eV and ω p ≳ 14.5 eV; furthermore, their frequencies are enhanced by the increase of layer number 57 .
The feature-rich electronic excitations of monolayer germanene, the four kinds of plasmon modes, the E F -and E D -created excitation gaps, the interband and intraband SPEs, and the V z -enriched excitation spectra, are worthy of further examinations by using EELS. The detailed measurements on the loss spectra can provide the full information on the diverse (q, ω)-phase diagrams, especially for the strong (q, θ, E F , V z )-dependence. Also, they are useful in distinguishing the critical differences of excitation spectra among germanene, silicene and graphene. In addition, Silicene has been successfully synthesized on distinct substrates 33,68 . Its band structure will be affected by the atomic interactions between silicene and substrate. The theoretical calculations 69 predict that the destruction of inversion symmetry leads to the significant changes in energy bands, such as an enlarged energy gap and energy dispersions. This will be directly reflected in band-dominated Coulomb excitations. The single-and many-particle excitations are expected to exhibit the dramatic transition, especially for the interband excitations. For example, the frequency-and momentum-dependent excitation spectra will present very different features, e.g., an obvious energy spacing between the intraband and the interband e-h excitations, and the stronger doping-induced intraband plasmons in the absence of interband e-h dampings. Conclusion In this work, we have studied the low-frequency elementary excitations of monolayer germanene within the RPA. The calculated results show that the SPE regions are mainly determined by the Fermi-Dirac distribution, and the conservation of energy and momentum, and they are greatly enlarged by V z . Four kinds of plasmon modes are predicted to reveal in the (q, θ, E F , V z )-dependent loss spectra; furthermore, their main features are characterized by the undamped and damping behaviors. The differences and similarities among germanene, silicene and graphene lie in the existence of plasmon modes and excitation gaps. Similar studies could be further extended to the middle-frequency π-electronic excitations, the few-layer germanene, and the other IV-group systems (e.g., the single-layer Sn and Pb). The excitation properties directly reflect the characteristics of the low-lying bands, the strong wave-vector dependence, the anisotropic behavior, the SOC-created separation of Dirac points, and the V z -induced destruction of spin-configuration degeneracy. There exists a forbidden excitation region between the intraband and interband SPE boundaries, being attributed to the Fermi-momentum and band-edge states. The undamped plasmons could survive within this region with a prominent peak intensity. All the plasmons due to the free conduction electrons belong to 2D acoustic modes at small q's, as observed in an electron gas. With the increasing q, they might experience the interband damping and become another kind of undamped plasmons, change into the seriously suppressed modes in the heavy intraband damping, remain the same undamped plasmons, or gradually vanish during the enhanced interband damping. Specifically, the first kind of plasmon modes only appears in germanene with the stronger SOC. The fourth kind of plasmon modes in monolayer germanene are purely generated by V z , while they are frequently revealed in few-layer extrinsic graphenes without external fields 47,48,55,56 . 
The detailed measurements using EELS could examine the diverse (q, ω)-phase diagrams and verify the differences or similarities among the 2D group-IV systems.
5,748.4
2016-04-25T00:00:00.000
[ "Materials Science", "Physics" ]
Impacts of the Recent Expansion of the Sugarcane Sector on Municipal per Capita Income in São Paulo State The aim of this study is to evaluate the impacts of this expansion on the income of people in the state’s districts and towns. Beginning with a breakdown of the main determinants of per capita income, a spatial dynamic panel model is proposed. The proportion of adults in the municipal population, the labour force utilization rate, and the average labour income were used as control variables. Furthermore, to isolate the impacts of the expansion of the sugarcane sector on per capita gross domestic product (GDP), the share of farming in municipal areas, the share of agriculture within farming in general, the share of sugarcane farming within agriculture, and a dummy for districts and towns with an operational plant were included in the model. The series cover the 645 districts and towns of São Paulo State from 2000 to 2008. The results of the system generalized method of moments (system-GMM) showed a positive relationship of spatial and temporal dependence in the real per capita GDP, and the estimated direct and indirect effects indicate that the expansion of the sugarcane sector had a positive impact on per capita GDP, both in towns where the expansion took place and in their neighbouring towns. Introduction Over the past ten years, there has been a significant growth in sugarcane production in São Paulo State, increasing from almost 2.5 million hectares in 2000 to approximately 4.5 million hectares in 2008 [1]. From 2005 to 2008 alone, the area of harvested sugarcane in the state increased by over 1.8 million hectares, with 53.0% substituting pasture land and 46.7% substituting other crops [2]. This expansion was accompanied by increased capacity of preexisting plants and/or distilleries and also more new units throughout the state. According to a primary data survey for this study, the number of towns with an operational industrial plant and/or distillery increased from 109 in 2000 to 143 in 2008. But this rapid expansion of the sugarcane sector has raised a number of questions concerning its economic, social, and environmental impacts. From a socioeconomic viewpoint, one of the most important aspects to be dealt with is the effect of this growth on income (using consumer theory, Deaton and Muellbauer [3] show that the level and distribution of per capita income enable an evaluation of social welfare). In 2008, the GDP of São Paulo State surpassed one trillion reais (or the equivalent of 546 billion dollars), accumulating a real growth of 13.7% in comparison with 2000. At the same time, the number of people living in the state rose by almost 9.3%, accounting for a population of 40.4 million inhabitants in 2008. Consequently, the real per capita GDP of the state rose by 3.7% between 2000 and 2008, reaching $ 13 364 in the final year. However, it is important to point out that the average real municipal per capita GDP in the state saw a growth of 7.9% during this time, rising from $ 7847 in 2000 to $ 8469 in 2008. A pioneering study for evaluating and measuring the socioeconomic impacts of the sugarcane sector on the towns and districts of São Paulo State was conducted by Silva [4]. The model proposed by this author used the municipal human development index (MHDI) as a proxy for socioeconomic conditions, with dummy explanatory variables to denote the presence of the sugarcane sector in the town and control variables.
However, as the author herself affirms, the results of the study were not conclusive. Later, other studies sought to evaluate the relationship between the expansion of the sugarcane sector and the economic growth of towns. Walter et al. [5] showed that, in 2000, towns with plants or significant production of sugarcane (When defining "significant", the authors considered the production of the towns and regions that, in decreasing order, totaled 90% of the state's production.) had statistically higher per capita income than that of other towns. Spavorek et al. [6], quantifying the effects of a change in how land is used, found that towns involved in the sugarcane sector had higher growth in GDP than other towns. On the other hand, Deuss [7] found no evidence of a causal relationship between the growth of the sugarcane sector and the economy of towns in São Paulo State between 2002 and 2006. Oliveira [8] compared different socio-economic indicators in the towns of the main sugarcane-producing Brazilian states. Aspects related to education, income distribution, health/longevity, and development in the 1970s, 1980s, 1990s, and 2000s were analyzed. According to the author, there was no evidence that sugarcane resulted in disadvantages from a socio-economic viewpoint. On the contrary, in São Paulo State, all the indicators of the towns with significant sugarcane production were better than those of the control group, with a significance level of 5%. It is worth emphasizing that none of the aforementioned studies took into account the existence of a possible spatial dependence regarding the phenomenon in question. The pioneer work in this case was that of Chagas et al. [9], which analyzed the impact of the expanding sugarcane sector on the tax revenues of the state's towns using a dynamic panel model with spatial controls. Since that time, other works seeking to estimate the effect of sugarcane production on regional development have incorporated this spatial component. In Chagas et al. [10], a spatial propensity score matching model was used to estimate the effect of sugarcane production on the MHDI, and the results showed that the presence of the sector is not relevant when it comes to determining social conditions in regions where sugarcane is produced. The same methodology was applied by Chagas et al. [11] to evaluate the effect of sugarcane production on the growth in per capita GDP. In this case, the results showed that the regions which had seen an expansion in sugarcane plantations had also seen a higher growth in per capita GDP than other comparable regions (i.e., in regions where the expansion of sugarcane could have taken place but did not). Some recent developments in the literature on regional differences and income distribution have pointed out their main determinants. Barros et al. [12] relate the level of per capita income to seven determinants, including demographic dependence, that is, the proportion of adults in the population, the labour utilization rate in economic activities, the average income of work per occupied adult (which in turn is determined by the average bargaining power of the worker, average qualification of the workforce, and quality of jobs), and the income derived from other sources. 
Bearing in mind these recent theoretical and methodological developments and seeking to answer some questions in the existing literature, the aim of this study is to evaluate the impacts of the expanding sugarcane sector (taking into account the increased sugarcane plantations and the beginning of operations at new industrial units) on the average income of towns in São Paulo State, which accounted for more than 50% of the Brazilian sugarcane production and more than one-third of the country's GDP from 2000 to 2008. Special attention is given to the description of the counterfactual scenarios when interpreting the effects of the sugarcane expansion. This is often not done in other studies, and so this paper may also provide a useful empirical framework for authors seeking to study similar questions in other contexts. Modelling per Capita Income to Gauge Variations between Towns in São Paulo State The starting point is an adaptation of the model developed by Barros et al. [12] to break down the variations in per capita income into its main determinants. Taken together, these determinants include all the variations in per capita income among towns in São Paulo State over time. For this reason, their inclusion in the model as control variables for dealing with regional differences in the state's economy is strategic when it comes to isolating the effects of the expansion of the sugarcane sector. Barros et al. [12] propose that per capita income in family h, y_h, has only two immediate determinants: the average income per adult, r_h, and the proportion of adult individuals in the family, a_h, which may be expressed as y_h = a_h r_h (1). Furthermore, the authors regard income per adult in family h, r_h, as stemming from three different sources: labour income (w_h), income transfers (t_h), and income from other assets (o_h). This may be expressed as r_h = w_h + t_h + o_h (2). However, as the authors point out, it is commonly seen that the major source of family income is labour income. Substituting (1) in (2) results in y_h = a_h (w_h + t_h + o_h) (3). Meanwhile, labour income per adult in family h, w_h, can be broken down into two direct determinants: the proportion of working adults, u_h, and the average labour income of working adults, ω_h. This results in w_h = u_h ω_h (4). Combining (3) and (4) results in y_h = a_h (u_h ω_h + n_h) (5), where n_h = t_h + o_h is the per-adult income from nonlabour sources. However, in the simplified version of the model, the authors accept that nonlabour income is negligible in comparison with labour income, that is, n_h = 0. Thus, a function can be estimated so that y_h = a_h u_h ω_h (6). The authors also claim that the relationships in (6) can be extrapolated to evaluate the per capita income of a group of families in a given region or country providing that the variables used are duly weighted. Thus, the function can be redefined at the municipal level as y_i = f(ω_i, u_i, a_i) (7), and, in this case, the per capita income of town i, y_i, is expressed as a function of the average labour income per worker (ω_i), the labour workforce utilization rate (u_i), and the proportion of adults in the municipal population (a_i). In addition to the variables shown above, the average labour income per worker in agriculture (ω_i^agr) and the labour workforce utilization rate in agriculture (u_i^agr) will also be used as explanatory variables in the model. Together, ω_i, u_i, a_i, ω_i^agr and u_i^agr will act as controls for diverse socioeconomic and demographic aspects that may also influence the municipal per capita nonlabour income share (n_i).
For instance, the relative difference between the total labour utilization rate (u_i) and its equivalent in agriculture (u_i^agr) can be viewed as a proxy for the difference in the degree of urbanization of a town. Meanwhile, the differences in the economic composition of towns are handled simultaneously by the relative differences between ω_i and ω_i^agr and between u_i and u_i^agr. Higher levels of schooling and government transfers are reflected in relatively higher levels of average income which, in turn, also imply better access to healthcare and education. In order to isolate the impact of the sugarcane sector on the average per capita income of towns in São Paulo State, another four variables have been added to the model: (i) f_i: the share of farmland in the total area of the town; (ii) c_i: the share of the area given over to agriculture in total farmland; (iii) s_i: the share of the area given over to sugarcane in the agricultural area; and (iv) D_i: a dummy variable with an assumed value of 1 when the town has an operational plant and/or distillery during the year in question. Data The per capita GDP (used as a proxy for per capita income), average labour income (total and in agriculture), proportion of adults in the population, and the number of jobs (total and in agriculture, used to estimate labour utilization) of all towns in São Paulo State from 2000 to 2008 were obtained from the State System Foundation for Data Analysis (in Portuguese, Fundação Sistema Estadual de Análise de Dados (SEADE) [13]). Both the per capita GDP and labour income were deflated using a centred general price index, the IGP-DI, obtained from IPEADATA [14]. The total cultivated area of sugarcane and the total agricultural area of each town were obtained from the Brazilian Institute of Geography and Statistics (in Portuguese, Instituto Brasileiro de Geografia e Estatística (IBGE) [1]). The area of planted forests (eucalyptus and pine) and the area of pasture were obtained from the Farming Economy Institute (in Portuguese, Instituto de Economia Agrícola (IEA) [15]). The primary data regarding which plants were in operation between 2000 and 2008 were obtained from various sources, including the Daily Federal Bulletins (in Portuguese, Diário Oficial da União (DOU)), websites of plants and distilleries, the Ministry of Agriculture, Livestock and Supplies (in Portuguese, Ministério da Agricultura, Pecuária e Abastecimento (MAPA)) and the Union of Bioenergy Producers (in Portuguese, União dos Produtores de Bioenergia (UDOP)). Methodology Kukenova and Monteiro [16] explain that, in a spatial context, the main argument in favour of using an extended version of the system-GMM is the fact that this estimator, in addition to controlling for the specific effects that do not vary in time and handling the potentially endogenous explanatory variables, also corrects the endogeneity of spatially lagged dependent variables. Definition of the Spatial Weights Matrix and Specification of the Spatial Dynamic Panel Model. The criterion for contiguity adopted for this study is that of the fifteen closest towns. In this case, with d_ij as the Euclidean distance between each pair of observations i and j, the (i, j) element of the spatial weight matrix has a value of 1 when d_ij ≤ d_i^max, where d_i^max is the distance between the i-th observation and its 15th closest neighbour; in the other cases, w_ij has a value of 0. Moreover, to avoid an observation being used to explain itself, w_ij = 0 when i = j [17]. Elhorst [18] compared the different specifications of models that had already been used in the literature to analyze the dynamics of the variables in time and space.
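As a concrete illustration of the contiguity criterion just described, the sketch below builds the binary 15-nearest-neighbour weight matrix from town coordinates and normalizes it by column, as stated later in the text; the coordinates in the example are random placeholders, not the actual municipal centroids.

```python
import numpy as np

def knn_weight_matrix(coords, k=15, column_normalize=True):
    """Binary k-nearest-neighbour spatial weight matrix.

    coords : (N, 2) array of town coordinates (placeholder data).
    w_ij = 1 if j is among the k nearest neighbours of i; w_ii = 0.
    """
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    # Pairwise Euclidean distances
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a town never neighbours itself
    W = np.zeros((n, n))
    for i in range(n):
        nearest = np.argsort(d[i])[:k]   # indices of the k closest towns
        W[i, nearest] = 1.0
    if column_normalize:
        col_sums = W.sum(axis=0)
        col_sums[col_sums == 0] = 1.0
        W = W / col_sums                 # column normalization, as in the text
    return W

# Example with random placeholder coordinates for 645 "towns"
rng = np.random.default_rng(0)
W = knn_weight_matrix(rng.uniform(0, 600, size=(645, 2)), k=15)
```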
One of the less restrictive specifications, which at the same time satisfies the aims of this study, may be expressed as y_t = τ y_{t−1} + δ W y_t + η W y_{t−1} + X_t^{exo} β_1 + W X_t^{exo} β_2 + X_t^{endo} γ_1 + W X_t^{endo} γ_2 + μ + ε_t (10), where y_t is an N × 1 vector composed of observations of the dependent variable in each spatial unit (i = 1, 2, . . . , N) in year t (t = 1, 2, . . . , T), X_t^{exo} is an N × k matrix of the exogenous variables, X_t^{endo} is an N × m matrix of the endogenous variables, μ is the N × 1 vector containing the specific spatial effects that do not vary over time, and ε_t is the error term. The stationarity condition for a spatial dynamic panel model in the form of (10) requires the noncomplex characteristic roots of the matrix (I_N − δW)^{−1}(τ I_N + η W), whose smallest and largest values are λ_min and λ_max, to lie inside the unit circle [18][19][20]. Interpretation of Results. According to Debarsy et al. [21], spatial dynamic panel models produce a situation where a variation of the i-th observation of the k-th explanatory variable in time period t will result in contemporaneous and future responses of the dependent variable in all regions. This occurs due to the presence of a time lag term (y_{i,t−1}, which captures the temporal dependence), a spatial lag term (y*_{i,t}, which considers spatial dependence), and a cross-product term (y*_{i,t−1}, which reflects the diffusion of the phenomenon under study in time and space). In this type of analysis, the focus when interpreting results should be on the effects of the partial derivatives associated with the alteration of explanatory variables. For example, ∂y_{i,t}/∂x_{i,t,k} is the direct contemporaneous, that is, short-term effect of an alteration in the k-th explanatory variable of region i on the dependent variable of that same region. But there is also a cross-partial derivative, ∂y_{j,t}/∂x_{i,t,k}, which measures the effect of the spatial spillover (the designation used for the contemporaneous indirect effect) in region j, j ≠ i. In addition to the contemporaneous effects, the spatial dynamic panel model makes it possible to calculate the partial derivatives that quantify the responses of each region to alterations in the explanatory variables of period t considering different time horizons t + T. The T-periods-ahead response of the dependent variable in region i to variations of the k-th explanatory variable of the same region is measured by ∂y_{i,t+T}/∂x_{i,t,k}. And the diffusion effect, which reflects the impact of an alteration of the explanatory variable in region i on the dependent variable of other regions over space and time, is measured by ∂y_{j,t+T}/∂x_{i,t,k}. Debarsy et al. [21] derived the general expression for the T-period-ahead cumulative impact of a permanent alteration of the k-th variable in period t. Taking into account the difference between the specification of the model estimated by the authors and the model used in this study (Debarsy et al. [21] define the general expression of the cross-partial derivatives using the extended Durbin model. To make it compatible with the specification of the spatial dynamic panel model used in this study, it is necessary to impose on the general expression developed by the authors the restriction that the parameter associated with x_{t−1} equals zero.), the expression that represents the cumulative impact of a permanent variation can be written as the sum over s = 0, . . . , T of the N × N matrices A^s B, with A = (I_N − δW)^{−1}(τ I_N + η W) and B = (I_N − δW)^{−1}(β_{1k} I_N + β_{2k} W) (12), where β_{1k} and β_{2k} are the parameters defined for variable k in (10). The main diagonal elements of the sum of the N × N matrices in (12) up to time-horizon T represent the observed impacts from changes in the explanatory variable of a region itself that spread and continue to affect the dependent variable in the future due to the existence of spatial and temporal dependence.
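Under the reconstruction of the cumulative-impact expression given above (an interpretation of the garbled source equation rather than a verbatim quotation), the direct and indirect effects can be computed with a short routine such as the following; all parameter values in the example are placeholders.

```python
import numpy as np

def cumulative_effects(W, tau, delta, eta, beta1_k, beta2_k, horizon=10):
    """Direct and indirect cumulative impacts of a permanent unit change in
    variable k, following C_T = sum_{s=0..T} A^s B with
    A = (I - delta*W)^{-1} (tau*I + eta*W) and
    B = (I - delta*W)^{-1} (beta1_k*I + beta2_k*W).
    """
    n = W.shape[0]
    I = np.eye(n)
    inv = np.linalg.inv(I - delta * W)
    A = inv @ (tau * I + eta * W)
    B = inv @ (beta1_k * I + beta2_k * W)
    total = np.zeros((n, n))
    As = I.copy()
    for _ in range(horizon + 1):
        total += As @ B
        As = As @ A
    direct = np.mean(np.diag(total))                        # own-town effect
    indirect = np.mean(total.sum(axis=0) - np.diag(total))  # spillover/diffusion
    return direct, indirect

# Example with the temporal (0.568) and spatial (0.306) dependence reported in
# the text and placeholder values for the remaining parameters:
# direct, indirect = cumulative_effects(W, tau=0.568, delta=0.306, eta=-0.1,
#                                       beta1_k=7.5, beta2_k=0.2, horizon=10)
```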
The sum of the elements of these matrices outside of the main diagonal measures the spillovers (contemporaneous cross-partial derivatives) and diffusions (cross-partial derivatives in different time periods). Results There now follow the series used in the model estimate. Table 1 shows some descriptive statistics of the continuous variables, that is, the municipal and spatially lagged real per capita GDP (Typically, W is the row-normalized matrix of spatial weights, and, in that case, a spatially lagged variable x* = Wx is the mean of x in neighbouring towns. Consequently, the estimated coefficient for x* represents the impact of the variation of x in the neighbours on the town under study. To facilitate the analysis and discussion of results, the spatial weights matrix used in this study was normalized by column. Thus, x* cannot be understood as the mean of the variable in neighbouring towns; rather, the estimated coefficient for this variable represents the impact of the variation of x in the town under study on its neighbours.) (resp., y and y*), the average real income of all jobs (ω), the average real income of jobs in agriculture (ω^agr), the proportion of working adults (u), the proportion of adults working in agriculture (u^agr), the municipal and spatially lagged proportion of adults in the population (resp., a and a*), the municipal and spatially lagged share of farming land in the total area of the town (resp., f and f*), the municipal and spatially lagged share of the area given to agriculture in total farmland (resp., c and c*), and the municipal and spatially lagged share of land for sugarcane in agricultural land (resp., s and s*). These series constitute a balanced panel containing the 645 towns in São Paulo State (according to IBGE's [1] municipal shapefile database of 2005) ranging from 2000 to 2008, with a total of 5805 observations. As this is a panel data analysis, the variables can be broken down into between and within dimensions. The between dimension is the variation of the time mean across towns (x̄_i), whereas the within dimension characterizes deviations in relation to each municipal mean (x_it − x̄_i + x̄, where x̄ is the overall mean). The real per capita GDP, for instance, varied from $ 1864.77 to $ 125 706.23 during the period. It is interesting to note that some towns had a total labour utilization rate (u) and a rate in agriculture (u^agr) of over 100%. This means that some towns have more jobs than the total adult population. A possible explanation for this phenomenon is that some workers in these towns have more than one job. Another plausible explanation is that many workers reside in one town and work in a neighbouring town. In this case, the labour utilization rate is reduced in the town where the worker resides but increases in the town where he works. Another peculiar point concerning the series under study is that the share of farmland (crops, pasture, and forestry) is also higher than 100% of the territory of some towns. But this can be accounted for by the fact that some agricultural and forestry systems combine forestry with crops and/or pasture in the same area. Besides that, some temporary crops can be combined so that the same area produces two or even three harvests in the same year. Table 2 shows some descriptive statistics related to the only discrete variable in the model, the dummy D, which has the value of 1 when the town has an operational plant and/or distillery in the year in question and the value of 0 otherwise.
The total frequency indicates that 18.54% of towns in the state had at least one operational industrial unit in the sugarcane sector during the time under study. The between frequency indicates that 143 towns had a value of D = 1 in some of the years of the sample, while 536 had a value of D = 0. As the sample is made up of 645 towns, this means that D changed in 34 towns between 2000 and 2008. The within dimension shows that, on average, the towns that had a value of D = 1 at some time remained at this level during 83.61% of the period in question. To capture the indirect effect, that is, the impact on the fifteen closest towns, of making a plant and/or distillery operational, the spatial lag of the binary variable (D*) was also included in the model. Although D* is not a binary variable, some of its descriptive statistics are also shown in Table 2, merely to facilitate comparison with the original variable. For instance, whereas 143 towns at some time had sugarcane sector units in operation, 515 felt their influence (in accordance with the criteria adopted to define D*). The data in Table 3 show that 99.2% of the towns with D = 0 at some time did not have an industrial unit of the sugarcane sector in operation the following year. Analogously, 98.8% of the towns with D* = 0 at some time remained uninfluenced indirectly by a plant and/or distillery in operation the following year. Furthermore, it is important to point out that there were no cases of towns which at some time had an operational unit and later did not, or were under its influence and then ceased to be. From 2000 to 2008, 34 towns were added to the group of 109 towns which were directly influenced by the sector, and fourteen were added to the group of 501 that were already indirectly influenced early on. The parameters of the system-GMM estimated by the robust two-step estimator (In the two-step estimate, the covariance matrix is robust to the correlation and the specific heteroscedasticity of the panel, but the standard errors are biased (downwards). The robust two-step procedure corrects the covariance matrix for the case of the finite sample. However, it is worth pointing out that, in the latter case, it is not possible to conduct the Sargan test to verify the adequacy of the instruments used for estimation in the model.) are shown in Table 4. The coefficients associated with the variables that filter time (y_{t−1}) and space (y* = W y_t), both different from zero at a significance level of 1%, corroborate the existence of a moderate temporal dependence (0.568) and a small spatial dependence (0.306), both positive, in per capita GDP. The Wald χ2 test shows that the set of explanatory variables is adequate to predict the behaviour of the dependent variable. The second-order autocorrelation test (m2 statistic, Arellano-Bond [22]) shows no serial autocorrelation in the level residuals, and the first-order correlation test (m1 statistic, Arellano-Bond [22]) confirms that the system-GMM is the most adequate model. Nevertheless, it is worth highlighting that the coefficients associated with the explanatory variables and their respective spatial lags cannot be directly interpreted. Unlike static models, they do not represent partial derivatives that measure the response of the dependent variable to alterations of the explanatory variables.
In a spatial dynamic panel model, the partial derivatives are nonlinear functions of the estimated coefficients and of the autoregressive terms in time and space, and they take the form of an N × N matrix for each time horizon. As the coefficients associated with the indirect effects of some variables were not significant (namely, three of the spatially lagged explanatory variables), these were not taken into consideration in this study. As the significance level of these coefficients was moderate, Satolo [23] discussed the same results taking all the estimated coefficients into account. The direct and indirect effects on real per capita municipal GDP of a permanent increase of ten percentage points (p.p.) in the share of farmland (f) are shown in Table 5. The direct contemporaneous effect indicates that, on average, an increase of ten percentage points in farmland raises the real per capita GDP by $ 75.54 in a town where this expansion occurred. Furthermore, the indirect contemporaneous effect shows that there is a small positive spillover, equivalent to an increase of $ 2.17 in the real per capita GDP of each of the 15 closest towns. (Using the simplifying assumption that the indirect effects would be restricted only to the fifteen nearest towns, these were divided by 15 to make the results comparable to the direct effects at the municipal level. As the estimated spatial dependence coefficient for the dependent variable is small, the impacts dissipate rapidly in space, with most of the indirect effect concentrated in the 15 nearest towns. But it is worth emphasizing that the indirect effects were calculated as the mean of the sum of the impacts of the change of an independent variable in a certain town over the dependent variables of all the 644 other towns in São Paulo State.) As the model is dynamic, this shock in the farming area spreads spatially over time, and, after ten years, the direct cumulative effect is equivalent to an increase of $ 182.33 in the real per capita GDP of a town where the expansion of farming occurred, associated with an increase equivalent to $ 23.19 in the real per capita GDP of each of the 15 nearest towns. Table 6 shows the direct and indirect effects on the real per capita GDP of a permanent increase of ten percentage points in the share of agriculture in farming (c). Contemporaneously, this relative growth in agriculture has a direct positive effect of $ 113.72 on the real per capita GDP of the town where this expansion takes place and a small positive indirect effect of, on average, $ 3.27 on the real per capita GDP of the 15 nearest towns. After ten years, the cumulative effect is equivalent to an increase of $ 274.50 in a town where the expansion occurred and an increase of $ 34.91 in each of the 15 closest neighbouring towns. Meanwhile, the direct and indirect effects of a permanent increase of ten percentage points in the area given over to sugarcane within the area given over to agriculture (s) can be seen in Table 7. Contemporaneously, real per capita GDP increases by $ 61.91 in the town where this expansion occurs and drops by an average of $ 29.10 in the 15 nearest towns. After ten years, the total cumulative effect is equivalent to an increase of $ 108.01 in real per capita GDP in the town where this expansion took place and a reduction of $ 129.62 in each of the 15 closest neighbouring towns.
Therefore, based on the results shown in Tables 5 and 6, it can be said that the expansion of farming and the increasing importance of agriculture within farming in towns in São Paulo State have positive impacts on real per capita GDP. But the increasing importance of sugarcane within agriculture in a town has a negative impact on real per capita GDP (Table 7) when one considers the total effect on São Paulo State (Impact = Total effect in São Paulo State = Direct Effect + 15 * (Average Indirect Effect)). This does not mean that the expansion of sugarcane in the state's towns has a negative impact on real per capita GDP. As the direct and indirect effects were calculated from a comparative static analysis, the findings for the variables f, c and s need to be combined to better evaluate the impact of the expansion (in area) of sugarcane in the state's towns on real per capita GDP. On the one extreme is the case in which sugarcane expansion occurs only in places that had already been used for agriculture, substituting other crops (Δs > 0 with f and c constant). In this case, the direct and indirect effects shown in Table 7 also represent the impact of sugarcane expansion on the town. On the other hand, if sugarcane expansion is accompanied by an expansion of other crops and its share of the state's agriculture remains unchanged (Δc > 0 with f and s constant), the direct and indirect effects shown in Table 6 also represent the impact of sugarcane expansion on the town. The difference is that, in the first case, the sugarcane expansion is assumed to have taken place exclusively by substituting other crops, whereas in the second case, it is assumed to have taken place exclusively by replacing pasture land and planted forests. On the other extreme, there is a third case, in which sugarcane expansion takes place in areas that had not previously been used as farmland and there is no activity replacement (Δf > 0 with c and s constant), as shown in the results of Table 5. In any other intermediate scenario imaginable for sugarcane expansion (in area) in the districts and towns of São Paulo State, the result will be a combination of the results shown for these extreme cases and depends on the relative variation of f, c and s. Thus, it can be said that the impact of sugarcane expansion (in area) on the real per capita GDP of a town can be either positive or negative, depending on to what extent this expansion takes place as a replacement of other farming activities. The direct and indirect effects of making an industrial plant and/or distillery operational on real per capita GDP are shown in Table 8. In the year when the industrial unit becomes operational, the real per capita GDP increases by $ 1901.44 in the town where the unit is located and by $ 54.73 in the 15 nearest towns. After ten years of operations of the plant and/or distillery, the total cumulative effect is equivalent to a rise of $ 4589.86 in real per capita GDP in the town where the unit is located and of, on average, $ 583.78 in the 15 closest towns. Using the average real per capita GDP in the towns of São Paulo State in 2008 as a reference point ($ 8469.09), making a plant and/or distillery operational represents an increase ranging from 22.45% (in the short term) to 54.20% (after 10 years) in the per capita income of the town where the unit is located and an average increase of 0.65% (in the short term) and 6.89% (after 10 years) in the 15 nearest towns.
Knowing the direct and indirect effects of sugarcane expansion and of making an industrial plant and/or distillery operational, it is possible to calculate the impact of the expansion of the sugarcane sector on the real per capita GDP of towns in São Paulo State. The results of the spatial dynamic panel model enable an evaluation of this impact when an industrial unit becomes operational in three different scenarios for the expansion of sugarcane: when it substitutes other crops (Tables 9 and 10), when it replaces pasture land and planted forests (Tables 11 and 12), and when it takes place in areas that had not previously been used for farming (Tables 13 and 14). These effects were obtained by a linear combination of the total effects presented for the variables f, c, s and D. It is important to emphasize that, as shown in Tables 9, 11, and 13, the direct effect of the expanding sugarcane sector (industrial plant + sugarcane) on real per capita GDP in the towns was positive in every scenario considered for sugarcane expansion. Regarding the indirect effect, some relatively small negative impacts on the real per capita GDP of the 15 closest towns can be seen if the expansion of the sugarcane sector occurs by replacing other crops in an area larger than 40% of the territory of towns with an operational industrial plant. Otherwise, the indirect effects of the expansion of the sector are also positive. Concerning the total effects (Tables 10, 12, and 14), the expansion of the sugarcane sector has a positive impact on the real per capita GDP of towns when the expansion of the land used for sugarcane, in conjunction with the initial operations of a new plant and/or distillery, does not take place as a replacement for other crops. If the sugarcane plantation expands to the detriment of other crops, the impact can be either positive or negative, depending on how intensely this replacement is implemented. For instance, assuming that Δf = 0 and Δc = 0, the expansion of the sugarcane sector has a positive impact on real per capita GDP if the expansion of the sugarcane plantation is lower than 72.68 percentage points of the municipal territory. Above this level, the indirect negative effects that probably originate from less diversification in local agriculture outweigh the direct positive effects of the expanding sector and the impact turns negative. (It is very common for the expansion not to be limited to the town where the industrial unit of the sugarcane sector is located, with a spillover to the nearest towns. As the analysis is indifferent about where this expansion occurs, the impacts of the sugarcane sector are equivalent whether the expansion of the planted area of sugarcane amounts to x percentage points of the territory of only one town or to x/n percentage points of the territory of each of n neighbouring towns.) Considering the real per capita GDP as a proxy of the real per capita income, the total estimated effects of the proposed model lead to the conclusion that the sugarcane sector has a positive impact on the average per capita income level in São Paulo State when the expansion of sugarcane occurs in areas that had not been previously occupied by farming or substitutes areas of pasture and planted forests. If the expansion leads to a substitution of other crops in an area equal to a share that is equivalent to up to 72.68% of the territory of a town with a plant or distillery, the impact of the expansion of the sugarcane sector on the average per capita income is also positive.
Conclusion Considering the impact of the expansion of the sugarcane sector as the sum of the total effect of the beginning of plants and/or distilleries operations and the total effect of the sugarcane expansion, one can say that sugarcane sector expansion had a positive impact on the real municipal per capita GDP when the expansion of sugarcane plantations did not occur exclusively to substitute other crops in São Paulo State. With the installation of a new industrial unit in the sector, if the expansion of sugarcane substitutes other crops (without there being an expansion of the total farmland and without variations in the total farmland), the impact is still positive if this expansion takes place in an area that occupies up to 72.68% of municipal territory. Between 2000 and 2008, the number of towns with operational plants and/or distilleries increased by 34 in São Paulo State. The area given over to sugarcane grew by the equivalent to 8.3% of state territory. As the state's agriculture grew by an area equivalent to 7.4%, sugarcane can be said to have substituted other crops in less than 1% of São Paulo State's area. Moreover, as the area occupied by farming remained practically the same throughout the period, most sugarcane expansion was found to have taken place on land previously used for pasture and planted forests. As operations began in industrial units of the sugarcane sector in 34 new towns, the total impact on São Paulo State can be said to be positive if the expansion of sugarcane took place exclusively as a substitution for other crops in an area equivalent to up to 3.83% of state territory. (The results of the estimated model show that making a sugarcane sector unit operational in one town has a positive impact if the expansion of the sugarcane associated with it does not exclusively substitute other crops up to a proportion of 72.68% of the municipal territory. As, during the period under study, this occurred in 34 of the 645 towns in the state, the threshold for this expansion to have a positive impact in São Paulo State is 34 × 72.68%/645 = 3.83%.) Furthermore, as the total positive effect is significantly higher when the expansion of sugarcane takes place on pasture land and in planted forests, the expansion of the sugarcane sector between 2000 and 2008 can be said to have had a positive impact on the level of real per capita GDP in the towns in São Paulo State. This paper is a first effort towards a better comprehension of the socioeconomic impacts of the sugarcane sector. In future works, the analysis will be extended to other Brazilian states where the expansion of sugarcane sector was also significant. Then, results will be checked for robustness with the utilization of alternative spatial weighting matrixes, and counterfactuals will estimate the hypothetical distribution of municipal per capita GDP under the assumption that sugarcane sector expansion has not happened.
8,804.6
2013-07-28T00:00:00.000
[ "Economics" ]
High third-order optical nonlinear performance in CMOS devices integrated with 2D graphene oxide films We report enhanced nonlinear optics in complementary metal-oxide-semiconductor (CMOS) compatible photonic platforms through the use of layered two-dimensional (2D) graphene oxide (GO) films. We integrate GO films with silicon-on-insulator nanowires (SOI), high index doped silica glass (Hydex) and silicon nitride (SiN) waveguides and ring resonators, to demonstrate an enhanced optical nonlinearity including Kerr nonlinearity and four-wave mixing (FWM). The GO films are integrated using a large-area, transfer-free, layer-by-layer method while the film placement and size are controlled by photolithography. In SOI nanowires we observe a dramatic enhancement in both the Kerr nonlinearity and nonlinear figure of merit (FOM) due to the highly nonlinear GO films. Self-phase modulation (SPM) measurements show significant spectral broadening enhancement for SOI nanowires coated with patterned films of GO. The dependence of GO’s Kerr nonlinearity on layer number and pulse energy shows trends of the layered GO films from 2D to quasi bulk-like behavior. The nonlinear parameter of GO-coated SOI nanowires is increased 16-fold, with the nonlinear FOM increasing over 20 times to FOM > 5. We also observe an improved FWM efficiency in SiN waveguides integrated with 2D layered GO films. FWM measurements for samples with different numbers of GO layers and at different pump powers are performed, achieving up to ≈ 7.3 dB conversion efficiency (CE) enhancement for a uniformly coated device with 1 layer of GO and ≈ 9.1 dB for a patterned device with 5 layers of GO. These results reveal the strong potential of GO films to improve the nonlinear optics of silicon, Hydex and SiN photonic devices. Introduction All-optical integrated photonic devices are of significant interest for high-speed signal generation and processing in optical communication systems, due to the fact that they do not need complex and inefficient optical-electrical-optical conversion [1,2]. Triggered by a significant number of applications in telecommunications [3], metrology [4], astronomy [5], ultrafast optics [6], quantum photonics [7], and many other areas [8-10], high-performance platforms for integrated nonlinear optics have attracted much attention, and no doubt silicon-on-insulator (SOI) has led this field for several years [11][12][13][14][15]. While SOI has shown itself to be a leading platform for integrated photonic devices, it suffers from strong two-photon absorption (TPA) at near-infrared wavelengths, which greatly limits its nonlinear performance [2,16], and this has motivated the use of highly nonlinear materials on chips. Other complementary metal-oxide-semiconductor (CMOS) compatible platforms including high index doped silica glass (Hydex) [17,18] and silicon nitride (SiN) [19,20] have a much lower TPA, but their nonlinear performance is hampered by a comparatively low Kerr nonlinearity. To overcome these limitations, two-dimensional (2D) layered graphene oxide (GO) has received much attention among the various 2D materials due to its ease of preparation as well as the tunability of its material properties [21][22][23][24][25][26][27][28][29]. Previously, we reported GO films with a giant Kerr nonlinear response about 4-5 orders of magnitude higher than that of silicon and SiN [25] and demonstrated enhanced four-wave mixing (FWM) in doped silica waveguides and microring resonators (MRRs) integrated with GO films [30,31].
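The nonlinear parameter and nonlinear FOM quoted in the abstract have standard definitions, γ = 2πn2/(λAeff) and FOM = n2/(βTPA·λ); the helper below evaluates them, with every numerical input in the example being an illustrative placeholder rather than a measured value of this work.

```python
import math

def nonlinear_parameter(n2_m2_per_W, wavelength_m, A_eff_m2):
    """Waveguide nonlinear parameter gamma = 2*pi*n2 / (lambda * A_eff)  [1/(W*m)]."""
    return 2.0 * math.pi * n2_m2_per_W / (wavelength_m * A_eff_m2)

def nonlinear_fom(n2_m2_per_W, beta_tpa_m_per_W, wavelength_m):
    """Nonlinear figure of merit FOM = n2 / (beta_TPA * lambda)  [dimensionless]."""
    return n2_m2_per_W / (beta_tpa_m_per_W * wavelength_m)

# Placeholder example at 1550 nm (silicon-like orders of magnitude, illustrative only)
lam = 1550e-9
gamma = nonlinear_parameter(n2_m2_per_W=5e-18, wavelength_m=lam, A_eff_m2=0.1e-12)
fom = nonlinear_fom(n2_m2_per_W=5e-18, beta_tpa_m_per_W=5e-12, wavelength_m=lam)
print(f"gamma = {gamma:.1f} / (W*m), FOM = {fom:.2f}")
```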
Here, we demonstrate enhanced nonlinear optics in SOI nanowires [32] and SiN waveguides [33] integrated with 2D layered GO films. Owing to the strong light-matter interaction between the integrated waveguides and the highly nonlinear GO films, self-phase modulation (SPM) measurements show significant spectral broadening enhancement for SOI nanowires coated with patterned films of GO. The dependence of GO's Kerr nonlinearity on layer number and pulse energy gives interesting physical insights into how the layered GO films evolve from 2D monolayers to quasi-bulk-like behavior. We obtain significantly enhanced nonlinear performance for the GO hybrid devices as compared with the bare waveguides, with the nonlinear parameter of GO-coated SOI nanowires increasing by 16 times and the nonlinear figure of merit (FOM) increasing over 20 times to FOM > 5. We obtain a significant improvement in the FWM conversion efficiency (CE) of ≈ 7.3 dB for a uniformly coated SiN waveguide with 1 layer of GO and ≈ 9.1 dB for a patterned device with 5 layers of GO. These results confirm the strong potential of introducing 2D layered GO films into CMOS compatible photonic platforms to realize high-performance nonlinear photonic devices. Figure 1a shows a schematic of an SOI nanowire waveguide integrated with a GO film. The SOI nanowire was fabricated on an SOI wafer via CMOS compatible fabrication processes, with windows opened in the silica upper cladding to enable GO film coating onto the SOI nanowire. The coating of 2D layered GO films was achieved by a solution-based method that yielded layer-by-layer GO film deposition. Our GO coating method can achieve precise control of the film thickness with an ultrahigh resolution of 2 nm, which is challenging for spin-coating methods. Further, our GO coating approach, unlike the sophisticated transfer processes (e.g., using scotch tape) employed for coating other 2D materials such as graphene and TMDCs [34,35], enables transfer-free GO film coating on integrated photonic devices, with highly scalable fabrication processes as well as high fabrication stability and repeatability. Figure 1b shows a microscope image of a fabricated SOI chip with a 0.4-mm-long opened window. Apart from allowing precise control of the placement and coating length of the GO films that are in contact with the SOI nanowires, the opened windows also enabled us to test the performance of devices having a shorter length of GO film but with higher film thicknesses (up to 20 layers). This provided more flexibility to optimize the device performance with respect to SPM spectral broadening. Figure 1c shows a scanning electron microscopy (SEM) image of an SOI nanowire conformally coated with 1 layer of GO. Enhanced Kerr Nonlinearity in GO-Coated SOI Nanowires Note that the conformal coating (with the GO film coated on both the top surface and sidewalls of the nanowire) is slightly different from earlier work, where we reported doped silica devices with GO films coated only on the top surface of the waveguides [30,31]. As compared with doped silica waveguides, the SOI nanowires allow much stronger light-material interaction between the evanescent field leaking from the waveguide and the GO film, which is critical to enhance nonlinear optical processes such as SPM and FWM. Figure 1d shows the successful integration of GO films, which is confirmed by the representative D (1345 cm-1) and G (1590 cm-1) peaks of GO observed in the Raman spectrum of an SOI chip coated with 5 layers of GO.
Microscope images of the same SOI chip before and after GO coating are shown in the insets, which illustrate the good morphology of the films. Figure 2a-i shows the normalized spectra of the optical pulses before and after transmission through the SOI nanowires with 2.2-mm-long GO films of 1−3 layers, together with the output optical spectrum for the bare SOI nanowire, all taken with the same pulse energy of ~51.5 pJ (i.e., ~13.2 W peak power, excluding coupling loss) coupled into the SOI nanowires. As compared with the input pulse spectrum, the output spectrum after transmission through the bare SOI nanowire exhibited measurable spectral broadening. This is expected and can be attributed to the high Kerr nonlinearity of silicon. The GO-coated SOI nanowires, on the other hand, show much more significantly broadened spectra as compared with the bare SOI nanowire, clearly reflecting the improved Kerr nonlinearity of the hybrid waveguides. Figure 2a-ii shows the corresponding results for the SOI nanowires with 0.4-mm-long GO films of 5−20 layers, taken with the same coupled pulse energy as in Figure 2a-i. The SOI nanowires with a shorter GO coating length but higher film thicknesses also clearly show more significant spectral broadening as compared with the bare SOI nanowire. We also note that in both Figure 2a-i and 2a-ii, the maximum spectral broadening is achieved for a device with an intermediate number of GO layers (i.e., 2 and 10 layers of GO in a-i and a-ii, respectively). This could reflect the trade-off between the Kerr nonlinearity enhancement (which dominates for the devices with a relatively short GO coating length) and the loss increase (which dominates for the devices with a relatively long GO coating length) for the SOI nanowires with different numbers of GO layers. Figures 2b-i and 2b-ii show the power-dependent output spectra after transmission through the SOI nanowires with (i) 2 layers and (ii) 10 layers of GO films. We measured the output spectra at 10 different coupled pulse energies ranging from ~8.2 pJ to ~51.5 pJ (i.e., coupled peak power from ~2.1 W to ~13.2 W). As the coupled pulse energy was increased, the output spectra showed increasing spectral broadening, as expected. We also note that the broadened spectra exhibited a slight asymmetry. This was a combined result of the asymmetry of the input pulse spectrum and the free-carrier effects of silicon, including both free carrier absorption (FCA) and free carrier dispersion (FCD) [36]. Since the time response for the generation of free carriers is slow compared to the pulse width, there is a delayed impact of FCA on the pulse shape, which leads to spectral asymmetry of the optical pulses. The FCD further increases the asymmetry induced by FCA, resulting in more obvious spectral asymmetry at the output. To quantitatively analyze the spectral broadening of the output spectra, we introduce a broadening factor (BF), defined as the square of the pulse's rms spectral width at the waveguide output facet divided by the corresponding value at the input [37]. Figure 2c shows the BFs of the measured output spectra after transmission through the bare SOI nanowire and the GO-coated SOI nanowires at pulse energies of 8.2 pJ and 51.5 pJ.
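As a concrete reading of the broadening-factor definition above, the following minimal sketch (our illustration, not the authors' processing code; the function names and the use of NumPy are assumptions) computes the rms spectral width of a measured spectrum and the BF of an output spectrum relative to the input:

```python
import numpy as np

def rms_spectral_width(wavelength_nm: np.ndarray, psd: np.ndarray) -> float:
    """RMS width of an optical spectrum given as power versus wavelength."""
    p = psd / np.trapz(psd, wavelength_nm)                  # normalize to unit area
    centroid = np.trapz(wavelength_nm * p, wavelength_nm)   # mean wavelength
    variance = np.trapz((wavelength_nm - centroid) ** 2 * p, wavelength_nm)
    return float(np.sqrt(variance))

def broadening_factor(wavelength_nm, psd_in, psd_out) -> float:
    """BF = (output rms spectral width)^2 / (input rms spectral width)^2."""
    return (rms_spectral_width(wavelength_nm, psd_out) /
            rms_spectral_width(wavelength_nm, psd_in)) ** 2
```

Applied to the input spectrum and each measured output spectrum, this would yield BF values comparable to those plotted in Figure 2c, up to details of how the raw spectrum-analyzer traces are preprocessed.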
The GO-coated SOI nanowires show higher BFs than the bare SOI nanowire (i.e., GO layer number = 0), and the BFs at a coupled pulse energy of 51.5 pJ are higher than those at 8.2 pJ, agreeing with the results in Figures 2a and 2b. The BFs of the output spectra versus coupled pulse energy are shown in Figures 2d-i and 2d-ii for the SOI nanowires with 1−3 layers and 5−20 layers of GO, respectively. The BFs increase with coupled pulse energy, reflecting more significant spectral broadening, in agreement with the results in Figure 2b. Figure 3a shows the Kerr coefficient (n2) of the GO films versus layer number for fixed coupled pulse energies of 8.2 pJ and 51.5 pJ, which is extracted from the effective nonlinear parameter (γeff) of the hybrid waveguides using the equation from [30], in which λc is the pulse central wavelength, D denotes the integral of the optical fields over the material regions, Sz is the time-averaged Poynting vector calculated using Lumerical FDTD commercial mode-solving software, and n0(x, y) and n2(x, y) are the linear refractive index and n2 profiles over the waveguide cross section, respectively. The picosecond optical pulses used in our experiment had a relatively small spectral width (< 10 nm); we therefore neglected any variation in n2 arising from its dispersion and used n2 instead of the more general third-order nonlinearity χ(3) in our subsequent analysis and discussion. The values of n2 are over three orders of magnitude higher than that of silicon and agree reasonably well with our previous waveguide FWM [30] and Z-scan measurements [28]. Note that layer-by-layer characterization of n2 for GO is challenging with Z-scan measurements due to the weak response of extremely thin 2D films [25,28]. The high n2 of the GO films highlights their strong Kerr nonlinearity not only for SPM but also for other third-order (χ(3)) nonlinear processes such as FWM, and possibly even for enhancing χ(3) processes such as third harmonic generation (THG) and stimulated Raman scattering [38-40]. In Figure 3a, n2 (at both 51.5 pJ and 8.2 pJ) decreases with GO layer number, showing a similar trend to WS2 measured by a spatial-light system [41]. This is probably due to increased inhomogeneous defects within the GO layers as well as imperfect contact between the different GO layers. Although the n2 of GO decreases with layer number, the increase in mode overlap with GO more than compensates for this, resulting in a net increase in γeff with layer number. At 51.5 pJ, n2 is slightly higher than at 8.2 pJ, indicating a more significant change in the GO optical properties with increasing power. We also note that the decrease in n2 with GO layer number becomes more gradual for thicker GO films, possibly reflecting the transition of the GO film properties towards bulk material properties with a thickness-independent n2. To quantitatively analyze the improvement in the nonlinear performance of the GO-coated SOI nanowires, we further calculated the effective nonlinear FOM (FOMeff) for the GO-coated SOI nanowires. The resulting FOMeff (normalized to the FOM of silicon) is shown in Figure 3b, where we see that a very high FOMeff of 20 times that of silicon is achieved for the hybrid SOI nanowires with 20 layers of GO. This is remarkable since it indicates that by coating SOI nanowires with GO films, not only can the nonlinearity be significantly enhanced but the relative effect of nonlinear absorption can be greatly reduced as well.
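As a rough companion to the n2 and FOM discussion above, the sketch below uses a simplified scalar approximation rather than the full-vectorial overlap-integral calculation of [30]; the helper names and the numerical values (typical literature figures for bare silicon near 1550 nm, not the measured device values) are our own assumptions:

```python
import math

def gamma_scalar(n2_m2_per_W: float, wavelength_m: float, a_eff_m2: float) -> float:
    """Scalar approximation gamma = 2*pi*n2 / (lambda * A_eff), in 1/(W*m)."""
    return 2 * math.pi * n2_m2_per_W / (wavelength_m * a_eff_m2)

def nonlinear_fom(n2_m2_per_W: float, beta_tpa_m_per_W: float, wavelength_m: float) -> float:
    """Commonly used definition FOM = n2 / (lambda * beta_TPA), dimensionless."""
    return n2_m2_per_W / (wavelength_m * beta_tpa_m_per_W)

lam = 1550e-9                          # pump wavelength, m
n2_si, beta_tpa_si = 4.5e-18, 8e-12    # illustrative silicon values: m^2/W and m/W
print(f"FOM of bare silicon ~ {nonlinear_fom(n2_si, beta_tpa_si, lam):.2f}")   # ~0.36
```

In this picture, the GO coating raises the effective n2 (and hence γeff) while, as noted below, reducing the effective TPA coefficient of the hybrid waveguide, which is why FOMeff can grow well beyond the bare-silicon value.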
This FOM improvement is interesting given that the GO films themselves cannot be described by a nonlinear FOM, since their nonlinear absorption displays saturable absorption (SA) rather than TPA; nonetheless, the GO films are still able to reduce the effective TPA coefficient (βTPA,eff) of the hybrid waveguides, thus improving the overall nonlinear performance. Figure 4a shows the SiN waveguide integrated with a GO film, along with a schematic showing the atomic structure of GO with different oxygen functional groups (OFGs) such as hydroxyl, epoxide and carboxylic groups. SiN waveguides with a cross section of 1.6 µm × 0.66 µm were fabricated via annealing-free and crack-free processes that are compatible with CMOS fabrication [37,42]. Layered GO films were coated on the top surface of the chip by a solution-based method that yielded layer-by-layer film deposition, as mentioned in Sect. 2 [29,30,43]. Figure 4b shows a microscope image of a SiN waveguide patterned with 10 layers of GO, which illustrates the high transmittance and good morphology of the GO films. The coupled continuous-wave (CW) pump and signal power (18 dBm each) was the same as that in Fig. 5a-i. The SiN waveguides with patterned GO films also had an additional insertion loss as compared with the bare waveguide, while the results for both 5 and 10 GO layers show enhanced idler output powers. In particular, there is a maximum CE enhancement of ≈ 9.1 dB for the SiN waveguide patterned with 5 layers of GO, which is even higher than that for the uniformly coated waveguide with 1 layer of GO. Enhanced FWM in GO-Coated SiN Waveguides This reflects the trade-off between FWM enhancement (which dominates for the patterned devices with a short GO coating length) and loss (which dominates for the uniformly coated waveguides with a much longer GO coating length) in the GO-coated SiN waveguides. Figure 5b shows the measured CE versus pump power for the uniformly coated and patterned devices, respectively. The plots show the average of three measurements on the same samples, and the error bars reflect the variations, showing that the measured CE is repeatable. As the pump power was increased, the measured CE increased linearly with no obvious saturation for the bare SiN waveguide and all the hybrid waveguides, indicating the low TPA of both the SiN waveguides and the GO films. For the bare waveguide, the dependence of CE on pump power shows a nearly linear relationship with a slope of about 2, as expected from classical FWM theory [44-54]. For the GO-coated waveguides, the measured CE curves show slight deviations from the linear relationship with a slope of 2, particularly at high powers. Figure 5c compares the CE of the hybrid waveguides with four different numbers of GO layers (i.e., 1, 2, 5, 10), where we see that the hybrid waveguide with an intermediate number of GO layers has the maximum CE. This reflects the trade-off between γ and loss in the hybrid waveguides, which both increase with GO layer number. Table 1 compares the performance of SOI nanowires, SiN, and Hydex waveguides incorporating GO films. As can be seen in the table, the dimensions of the three CMOS compatible photonic platforms are quite different. The SOI nanowire had the smallest waveguide dimensions and the tightest mode confinement, resulting in significantly increased mode overlap with the GO film. This resulted in a significantly increased nonlinear parameter γ, but also the largest excess propagation loss induced by the GO film.
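The slope-of-2 behaviour referred to above follows from the standard undepleted-pump FWM estimate; the sketch below is our own illustration of that scaling with placeholder parameter values, not a model fitted to the GO-coated devices:

```python
import math

def fwm_ce_db(gamma_per_W_m: float, pump_W: float, length_m: float, alpha_per_m: float) -> float:
    """Undepleted-pump, phase-matched estimate: CE = (gamma*P_pump*L_eff)^2 * exp(-alpha*L)."""
    l_eff = (1 - math.exp(-alpha_per_m * length_m)) / alpha_per_m   # effective length
    ce = (gamma_per_W_m * pump_W * l_eff) ** 2 * math.exp(-alpha_per_m * length_m)
    return 10 * math.log10(ce)

# Doubling the pump power (+3 dB) raises CE by ~6 dB, i.e. a slope of ~2 on a dB scale:
high = fwm_ce_db(gamma_per_W_m=1.0, pump_W=0.063, length_m=0.02, alpha_per_m=50.0)
low = fwm_ce_db(gamma_per_W_m=1.0, pump_W=0.0315, length_m=0.02, alpha_per_m=50.0)
print(f"slope check: {high - low:.2f} dB per 3 dB of pump power")   # ~6.02 dB
```

Deviations from this quadratic scaling at high power, as observed for the GO-coated waveguides, would point to power-dependent behaviour in the hybrid waveguides that this simple formula does not capture.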
As the comparison in Table 1 suggests, mode overlap is a key factor for optimizing the trade-off between Kerr nonlinearity and loss when introducing 2D layered GO films onto different integrated platforms to enhance the nonlinear optical performance. Conclusion We demonstrate enhanced nonlinear optics, including Kerr nonlinearity and FWM, in SOI nanowires, Hydex and SiN waveguides and ring resonators incorporating layered GO films. We achieve precise control of the placement, thickness, and length of the GO films using layer-by-layer coating of GO films followed by photolithography and lift-off. Owing to the strong mode overlap between the platforms and the highly nonlinear GO films, we achieve an increase in the nonlinear parameter of GO-coated SOI nanowires of up to 16 times and an improvement in the nonlinear FOM of up to a factor of 20. We obtain a significant improvement in the FWM CE of ≈ 7.3 dB for a uniformly coated SiN waveguide with 1 layer of GO and ≈ 9.1 dB for a patterned device with 5 layers of GO. These results verify the enhanced nonlinear optical performance of silicon, Hydex and SiN photonic devices achievable by incorporating 2D layered GO films.
4,223.8
2021-04-22T00:00:00.000
[ "Physics", "Engineering", "Materials Science" ]
Comparison Of The Consumption Of Resources Between HTTP And SIP Currently, research around VoIP is experiencing tremendous growth. In the open-source community, Asterisk represents a reliable alternative for a lower-cost solution, and the SIP protocol is the natural complement to the Asterisk PBX. However, whether Asterisk can co-exist with other servers while retaining the benefits claimed by proponents of free software has not yet been demonstrated. In this context, this paper presents a simplified comparison of the hardware resource usage of an Apache server using the HTTP protocol and an Asterisk server using SIP. Introduction The protocol that dominates the current Internet infrastructure is HTTP. However, technological convergence over IP networks [1][2][3] is leading to a large-scale deployment of voice over IP, thanks in part to the many proprietary PABXs. VoIP is generally provided by a server; in the free software world the most widely used server is Asterisk, and the protocol most widely used with Asterisk is SIP. To obtain better performance for both protocols, it is necessary to see what happens on the server, i.e., their resource requirements and, more specifically, their memory consumption. To do so, we first briefly describe the experimental environment, then present the experiment itself, i.e., how it was carried out, and finally we draw a hypothesis from our results. Material. The server: as a server we used a Dell OptiPlex GX110 PC with a Pentium III processor and 256 MB of memory. The client: we used a Dell OptiPlex GX110 PC with a Pentium III processor and 128 MB of memory. Software: we used the Debian GNU/Linux operating system [4], a computer operating system composed of software packages released as free and open source software, especially under the GNU General Public License and other free software licenses. The primary form, Debian GNU/Linux, which uses the Linux kernel and GNU OS tools [4], is a popular and influential Linux distribution. It is distributed with access to repositories containing thousands of software packages ready for installation and use. For the web server we used Apache. Apache is web server software notable for playing a key role in the initial growth of the World Wide Web. In 2009 it became the first web server software to surpass the 100 million website milestone. Apache was the first viable alternative to the Netscape Communications Corporation web server. For the VoIP server we used Asterisk. Asterisk is a software implementation of a telephone private branch exchange (PBX); it was created in 1999 by Mark Spencer of Digium. Like any PBX, it allows attached telephones to make calls to one another, and to connect to other telephone services including the public switched telephone network (PSTN) and Voice over Internet Protocol (VoIP) services. Its name comes from the asterisk symbol, "*". Asterisk is released under a dual license model, using the GNU General Public License (GPL) as a free software license and a proprietary software license to permit licensees to distribute proprietary, unpublished system components. Originally designed for Linux, Asterisk now also runs on a variety of different operating systems including NetBSD, OpenBSD, FreeBSD, Mac OS X, and Solaris. A port to Microsoft Windows is known as AsteriskWin32. For the evaluation of memory consumption we used top. The top command is a system monitor tool that produces a frequently updated list of processes.
By default, the processes are ordered by percentage of CPU usage, with only the "top" CPU consumers shown. top shows how much processing power and memory are being used, as well as other information about the running processes. Some versions of top allow extensive customization of the display, such as the choice of columns or the sorting method. top is useful for system administrators, as it shows which users and processes are consuming the most system resources at any given time. Conducting the experiment: First, we ran the top command and evaluated the memory used by the operating system. Several parts of the memory are already used for internal system reasons and for different services, such as the launch of all the daemons. The following figure (Fig. 1) shows the state of our system during this initial phase of the experiment. Next, a client accesses the Apache server on our system and we sample the memory with the top command (Fig. 2); after sampling, the client disconnects. Finally, we launch a call with a softphone: first the called party does not pick up and we run the top command (Fig. 3), then the called party picks up and we observe what happens (Fig. 4). Results and commentary Results: According to the measurements, in the initial phase the free memory is 22,220 kB; when the HTTP client connects, the free memory drops to 7,348 kB, i.e., one HTTP client consumes 14,872 kB of memory. If the number of clients is multiplied, the server may therefore stop responding. When a call is initiated from a softphone, the free memory is 21,980 kB (while the other end has not yet answered); when the called party picks up, the free memory becomes 21,740 kB. This means that the initiation of a SIP call already consumes resources on the server even before the called party answers, and that the memory consumption increases once the called party picks up. Remark: the free memory went from 22,220 kB in the initial phase to 21,980 kB during the launch of the call, i.e., the memory occupied by an unanswered call is 240 kB. Then, when the called party picks up, the free memory decreases from 21,980 kB to 21,740 kB, i.e., the call leg established with the second client occupies the same 240 kB as the caller's leg, so an established call uses 480 kB of memory. From this we can deduce that the memory used (Mu) during n SIP calls is Mu = 2nK, with K = 240 kB. Conclusion We have presented experimental results obtained during the simulation of a VoIP call, together with a comparison with the HTTP protocol. The different figures and results showed us that the HTTP protocol consumes a lot of memory compared to SIP, even though a SIP call is equivalent to a double connection to the server. It should be noted that an increase in unanswered SIP calls can also lead to memory saturation. The proposed mathematical model would then determine the number of calls that can be made in relation to the size of memory used by the server.
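A small sketch of the memory model deduced above (the constants come from the measurements reported in this section; the helper names are ours) shows how it can be used to estimate server capacity:

```python
K_SIP_KB = 240            # memory per SIP call leg, measured above (kB)
HTTP_CLIENT_KB = 14_872   # memory per HTTP client, measured above (kB)

def sip_memory_kb(n_calls: int) -> int:
    """Mu = 2*n*K: two legs (caller and called party) per established call."""
    return 2 * n_calls * K_SIP_KB

def max_established_calls(free_kb: int) -> int:
    """Upper bound on simultaneous established SIP calls under this simple model."""
    return free_kb // (2 * K_SIP_KB)

print(sip_memory_kb(10))               # 4800 kB for 10 established calls
print(max_established_calls(22_220))   # about 46 calls with the initial free memory above
print(22_220 // HTTP_CLIENT_KB)        # whereas barely one HTTP client of this kind fits
```

This is only an extrapolation from two measurements and does not account for unanswered calls accumulating on the server, which the conclusion notes can also saturate memory.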
1,512
2011-08-14T00:00:00.000
[ "Computer Science", "Engineering" ]
The Clot Thickens: Recent Clues on Hematopoietic Stem Cell Contribution to Age-Related Platelet Biology Open New Questions Platelets provide life-saving functions by halting external and internal bleeding. There is also a dark side to platelet biology, however. Recent reports provide evidence for increased platelet reactivity during aging of mice and humans, making platelets prime suspects in the most prevalent aging-related human pathologies, including cardiovascular diseases, stroke, and cancer. What drives this platelet hyperreactivity during aging? Here, we discuss how hematopoietic stem cell differentiation pathways into the platelet lineage offer avenues to understand the fundamental differences between young and old platelets. Recent advances begin to unravel how the cellular and molecular regulation of the parent hematopoietic stem and progenitor cells likely confers aging characteristics on the resulting platelet (Plt) progeny. The resulting mechanistic insights into intrinsic platelet reactivity will provide strategies for selectively targeting age-related pathways. This brief viewpoint focuses on current concepts in aging hematopoiesis and the implications for platelet hyperactivity during aging. Plts are anucleated cell fragments derived from megakaryocytes and are best known for their essential role in hemostasis, the process that stops bleeding while maintaining normal blood flow in the event of vascular injury. Upon damage, circulating Plts adhere to the site of injury, become activated, and promote aggregation to form a Plt plug (Figure 2). These properties make Plts key players in thrombotic disorders. Hemostasis is frequently compromised with age, and both thrombocytosis (too many Plts) and thrombocytopenia (too few Plts) lead to increased mortality in the elderly [1-6]. Consequently, millions of people take prophylactic anti-Plt therapies for long periods of time to reduce Plt count (with Plt-reducing agents such as hydroxyurea or anagrelide) or to inhibit Plt function (with antithrombotic agents such as aspirin) in order to reduce the risk of morbidity and mortality associated with occlusive thrombosis [7-9]. Despite the success of anti-Plt therapies, thrombotic diseases remain a leading cause of death, in part due to complications of anti-Plt therapies [2]. Due to the essential role of Plts in hemostasis, patients treated with anti-Plt drugs also face a risk of increased bleeding; therefore, alternative or more refined anti-Plt therapies are needed to balance the thrombotic risk against the subsequent risk of serious bleeding [7]. Targeted therapeutics are also needed in disease states that require an increase in Plt count or accelerated Plt activity. Current treatments may include prophylactic Plt transfusion, antifibrinolytic agents (such as aminocaproic acid or tranexamic acid), or factor replacement therapy. Together, the numerous Plt-related disorders, along with the increasing life expectancy of human populations, underscore the profound financial and clinical burden imposed by age-related Plt diseases. Thus, the advancement of therapies that control Plt production and function through an improved understanding of the mechanisms of age-related Plt biology is a critical patient and public health goal. DURING AGING Aging is accompanied by changes to Plt biology.
Epidemiological studies of Plt numbers remain inconclusive: some studies observed no differences between young and elderly populations, while others suggest that Plt numbers are decreased in the elderly population compared to their younger counterparts [10-12]. This inconsistency is perhaps in part due to intra-individual variation during human aging [13,14]. In murine studies, conversely, there is a consistently observed increase in Plt count in old mice [6,15-17]. Despite these discrepancies in Plt abundance between aged humans and mice, both mouse and human aging are associated with Plt hyperreactivity. Therefore, translation of murine studies to human application will likely emerge from an understanding of the consequences of aging on Plt function. There are clear age-related changes to both Plt generation and function (Figure 2). Several in vitro studies of human Plts have reported an increase in aggregability, which correlates with the decreased bleeding times observed in the elderly [1,18,19]. These observations suggest faster clot formation upon aging, and that this enhanced Plt activity is a biomarker of thrombosis in humans. There is also evidence for potential molecular regulators of age-associated changes to Plt function. For example, sequencing analysis revealed differences in mRNA and microRNA expression patterns between young and old human Plts [20]. As in other tissues, oxidative stress in Plts has also been shown to increase during aging [6,21]. Another important line of evidence demonstrated that the physiological agonists presented by an inflammatory state enhance age-related Plt reactivity and thrombosis [15]. These prothrombotic mechanisms appear to be intrinsically propagated by aged Plts even in the absence of the agonists [17]. These findings highlight the need to understand how distinct cellular and molecular hallmarks of aging drive changes to Plt function. HEMATOPOIETIC STEM CELL CONTRIBUTION TO PLATELET AGING Aging of the hematopoietic system has been the focus of intense investigation for decades, yet there remains a significant need to link mechanisms of hematopoiesis to the biology of their Plt progeny. Advanced age is associated with dysregulation of both the number and function of blood and immune cells. Due to the short half-life of circulating hematopoietic cells, including Plts, they are continuously produced from Hematopoietic Stem Cells (HSCs) via progenitor cells that primarily reside within the bone marrow (BM) (Figure 1). The short lifespan of mature cells means that they likely do not undergo true aging themselves, but "inherit" age-related properties from their parent stem and progenitor cells. Given that HSCs are at the top of the hematopoietic hierarchy, much focus has been directed towards understanding how aging affects HSCs [15,22-24]. Although HSC numbers increase with age, they exhibit functional decline when tested in transplantation experiments: old HSCs show less robust hematopoietic reconstitution efficiency in recipient mice relative to young HSCs. Despite comprising ~99% of all mature hematopoietic cells, Plts and red blood cells (RBCs) have been ignored in the vast majority of HSC transplantation studies due to technical limitations in distinguishing donor- from host-derived Plts and RBCs [25]. The Plt and RBC potential of HSCs and progenitor cells has therefore more often been investigated by in vitro differentiation assays rather than by in vivo studies.
Development of fluorescent transgenic mice such as Ubc-GFP and KuO mice that harbor fluorescent Plts and RBCs has rectified the technical limitations of transplantation experiments because these models allow direct tracking of donor-derived Plts and RBCs [25-29]. Additionally, recent barcoding studies include endpoint analyses of the immediate progenitors of Plts and RBCs [30]. Implementation of these tools has led to exciting findings, including the reported existence of self-renewing Plt-restricted or Plt-biased cells [26,31-33]. However, the differentiation paths, abundance, regulation, and functional significance of these putative, lineage-restricted but self-renewing, cells are currently unclear. At present, our understanding of HSC differentiation into Plts and RBCs lags significantly behind that of other lineages, particularly in the context of aging. IMPLICATIONS FOR AGE-RELATED PLATELET HYPERREACTIVITY Our recent discoveries on the mechanisms of aging megakaryopoiesis advanced our understanding of the hematopoietic BM origins of Plt biology during aging [15]. In this study, we investigated the regulation of Plt production modulated by HSCs and megakaryocyte-committed progenitor cells (MkPs). In old mice, the expansion of HSCs was accompanied by an expansion of MkPs and their descendant Plts. To determine how intrinsic and extrinsic factors influence Plt production from aged HSCs, we transplanted young HSCs into young and old recipient mice, and vice versa. Interestingly, these experiments revealed that old HSCs did not generate the selective increase in MkP and Plt numbers seen in unmanipulated aged mice (Figures 2 and 3) [15,17,34]. These observations raised our curiosity about aging MkPs. Given the reconstitution deficit displayed by old HSCs, we reasoned that MkPs would also display functional deficiency. Surprisingly, old MkPs displayed a remarkable capacity to engraft into recipient mice, generating greater Plt numbers compared to young MkPs (Figure 3). Our in vitro analysis also demonstrated a greater expansion capacity of old MkPs. Importantly, RNA sequencing of young and old MkPs revealed age-related changes in gene expression profiles, including changes in genes involved in Plt production and function and in bleeding disorders. Interestingly, a few of the genes upregulated in old MkPs are also implicated in acute myeloid leukemia, including Pbx3, Lair1, and Mllt3 [35-37]. These changes in the transcriptome of old MkPs support a model in which MkPs propagate age-related dysregulation of Plt biology. Therefore, our study revealed novel cellular and molecular mechanisms of age-related alterations to megakaryopoiesis. In addition to the chronic stress presented by aging, acute stress conditions have been shown to influence MkPs. Upon infection, acute inflammation drives MkP maturation and increases Plt counts in young mice [32]. Together, these findings may help to explain the intrinsic and extrinsic regulation of MkPs during youthful and aging megakaryopoiesis. Furthermore, given that both the increase in Plt numbers and the hyperactivity appear to play a significant role in the increase of thrombotic risk during aging, an important next step is to determine how age-related alterations to MkPs may poise their descendent Plts for activation and aggregation.
Collectively, the new insights from our group and others on aging megakaryopoiesis shed light on the mechanisms by which Plts "inherit" their aging properties from their parent stem and progenitor cells (Figure 1). While the Plt potential of HSCs declines during aging, MkPs gain a remarkable capacity to contribute to Plt production. These functional alterations may dictate the etiology of Plt-related disorders during aging and provide therapeutic avenues for manipulating hematopoietic stem and progenitor cells to control hemostasis throughout life. Unraveling the specific contributions of hematopoietic stem and progenitor cells to consequent Plt production and function will be critical to the success of ameliorating and treating life-threatening Plt disorders that accompany aging. This review focuses on recent reports that have shown that HSC and MkP aging influence the number and function of platelets. Platelets are derived via Megakaryocyte Progenitors (MkPs). Upon vascular injury, platelets adhere to the exposed subendothelial matrix, become activated, and aggregate with nearby platelets to form a clot. Mice, and possibly humans, have increased Plt numbers upon aging; both species display Plt hyperreactivity. During aging, MkPs likely give rise to deleterious platelets with functional defects, the culprits of potentially occlusive clots observed in the thrombotic disorders that plague the elderly. Transplantation experiments revealed that old Hematopoietic Stem Cells (HSCs) exhibited a decline in the generation of all hematopoietic lineages, including Megakaryocyte Progenitors (MkPs) and platelets. However, old MkPs exhibited a remarkably greater expansion capacity upon aging, giving rise to significantly greater numbers of platelets compared to young MkPs.
2,429
2021-10-28T00:00:00.000
[ "Biology", "Medicine" ]