Systematic Study on QCD Interactions of Heavy Mesons with $\rho$ Meson The strong interactions of the negative-parity heavy mesons with the $\rho$ meson may be described consistently in the context of an effective lagrangian, which is invariant under isospin SU(2) transformations. Four coupling constants $g_{HH\rho}$, $f_{H^*H\rho}$, $g_{H^*H^*\rho}$ and $f_{H^*H^*\rho}$ enter the effective lagrangian, where $H$ $(H^*)$ denotes a pseudoscalar bottom or charm meson (the corresponding vector meson). Using the QCD light cone sum rule (LCSR) method and, as inputs, recently updated hadronic parameters, we give an estimate of $g_{H^*H^*\rho}$ and $f_{H^*H^*\rho}$, about which little was known before, and present improved results for $g_{HH\rho}$ and $f_{H^*H\rho}$. We also examine the heavy quark asymptotic behavior of these nonperturbative quantities and assess the two low energy parameters $\beta$ and $\lambda$ of the corresponding effective chiral lagrangian. Introduction At present, there are two well-established theoretical frameworks for describing a large class of two-body hadronic decays of B mesons, namely the QCD factorization (QCDF) approach [1] and soft collinear effective theory (SCET) [2]. Long-distance parameters inevitably enter as important inputs in their phenomenological applications, so one is still confronted with the difficult task of coping with nonperturbative problems. Numerous theoretical works are devoted to this subject. Among the existing nonperturbative approaches, QCD light cone sum rules (LCSR's) [3,4] have proved particularly powerful, for two reasons: (1) contrary to conventional sum rule calculations [5] of the form factors for heavy-to-light decays, LCSR results turn out to be consistent in the heavy quark limit m_Q → ∞ [6]; (2) the LCSR approach allows us to consistently explore the dependence of the form factors on the momentum transfer q^2 in the whole kinematically accessible range [7][8][9][10], by combining its results, which are valid at small and intermediate q^2, with the pole model description of the form factors at large q^2. The LCSR method has been extensively applied to study the semileptonic [4,[6][7][8][9][10][11]12] and hadronic [13] decays of heavy mesons. A recent LCSR reanalysis of heavy-to-light transitions can be found in [12]. Along with the great experimental progress in B physics, we are confronting an even more formidable challenge in dealing with nonperturbative dynamics. The data accumulated at the B factories and CLEO hint that there may be large contributions from final state interactions (FSI's), which are typically nonperturbative, in some two-body hadronic B decays. In the absence of a rigorous approach to FSI's, one may either resort to Regge theory [14] to estimate their effects [15], or mimic them by the soft rescattering of two intermediate particles, so that they can be viewed as a one particle exchange process calculable at the hadron level. In comparison, the latter offers a more intuitive physical picture and is thus more readily adopted. Employing the one particle exchange model, calculations of FSI's in both B and D decays have been undertaken many times in the literature [16][17][18]; for recent applications of this approach, see [17,18]. A precondition for such a calculation, however, is that the related couplings, which parameterize the strong interactions among the underlying meson fields, are known.
Of most interest is the situation in which heavy mesons interact with a light meson, where the corresponding couplings are also crucial for determining the normalization of heavy-to-light form factors at large momentum transfer in pole dominance models [7][8][9][10]. Apart from the few that can be extracted directly from experimental data, these couplings have to be assessed with some phenomenological method. In the case where the light meson concerned is a pseudoscalar meson, the related couplings have undergone a systematic investigation, in the frameworks of both LCSR's [7,10,19] and conventional sum rules [20]. In contrast, the existing discussion of the interactions of heavy mesons with a light vector meson is incomplete, despite some efforts [17,[21][22][23]]. The strong interactions between the negative-parity heavy mesons and the ρ meson can be described by constructing an effective lagrangian which respects SU(2) symmetry in isospin space. Let us define an isospin doublet B composed of the pseudoscalar bottom meson fields B+ and B0 and the corresponding vector doublet B*_µ, together with their hermitian conjugates, and represent the isospin triplet of the ρ meson field accordingly; we can then build the effective lagrangian of the required symmetry, Eq. (1), and an analogous one for the charm mesons. In the resulting effective lagrangian, four coupling constants g_{BBρ}, f_{B*Bρ}, g_{B*B*ρ} and f_{B*B*ρ} are introduced, as allowed by SU(2) isospin symmetry, to describe the strength of the strong interactions among the related meson fields. Whereas g_{BBρ} and f_{B*Bρ} describe the B-B-ρ and B*-B-ρ interactions respectively, the other two characterize the B*-B*-ρ interactions. As explained later, these couplings have a definite physical meaning, and in the limit m_Q → ∞ each coincides, up to a prefactor, with one of the two low energy parameters β and λ which parameterize the effective chiral lagrangian for heavy mesons and light vector resonances [23], so that the effective description formulated in (1) is consistent with the effective chiral lagrangian approach. In terms of these couplings the relevant hadronic matrix elements are parameterized as in (2)-(4), where the momentum and polarization vector assignments are specified in brackets. These hadronic matrix elements, like those parameterizing the strong interactions of heavy mesons with a pion, play a prominent role in the phenomenological study of heavy flavor physics. As mentioned above, however, we do not yet have complete knowledge of them. The previous LCSR calculation is confined to g_{BBρ} and f_{B*Bρ} [21], and the effective parameters β and λ have been investigated only on the basis of the vector dominance assumption [17,23]. Moreover, the existing LCSR results call for a recalculation with updated hadronic parameters. In this letter, we give an LCSR estimate of g_{B*B*ρ} (g_{D*D*ρ}) and f_{B*B*ρ} (f_{D*D*ρ}), along with improved numerical predictions for g_{BBρ} (g_{DDρ}) and f_{B*Bρ} (f_{D*Dρ}); we then investigate the m_Q scaling behavior of the resulting sum rules and present our LCSR results for the effective parameters β and λ. LCSR Calculation of the Strong Couplings We focus on the bottom case and begin with a discussion of the B*B*ρ couplings.
To implement a QCD LCSR calculation of g_{B*B*ρ} and f_{B*B*ρ}, it is advisable to use the correlation function (5), where the ellipses indicate the remaining Lorentz structures. The hadronic form of the correlation function (5) is easily obtained by saturating it with a complete set of intermediate states with the same quantum numbers as the interpolating current operators. However, care is needed, since the vector current operators couple not only to the vector bottom mesons but also to the positive-parity scalar bottom mesons. Taking into account all possible hadronic contributions to H_µν(p, q, e), we find that the invariant functions H and H̃ receive contributions only from the vector bottom mesons. Isolating the pole contribution of the lowest B* meson and parameterizing the contributions from higher states as a double dispersion integral starting at the threshold s_0, we obtain the desired hadronic forms of H(p^2, (p+q)^2) and H̃(p^2, (p+q)^2), with f_{B*}, defined as usual, being the decay constant of the B* meson and ρ^h (ρ̃^h) the hadronic spectral function. The QCD calculation of the correlator (5) can be carried out for large negative values of p^2 − m_Q^2 and (p+q)^2 − m_Q^2, which render the operator product expansion (OPE) valid near the light-cone x^2 = 0. Since the underlying heavy quark is sufficiently far off shell in these kinematical regions, in the light-cone expansion soft gluon emission from the heavy quark contributes only a higher-twist effect, associated with the quark-antiquark-gluon (qqg) components of the ρ meson distribution amplitudes. As verified by numerous LCSR calculations, omitting the gluon emission contributions is a reasonable approximation. For the present calculation we use the free b quark propagator (8). Substituting (8) in (5) and using the standard γ-matrix algebra, we are led to the light cone wavefunctions of the ρ meson defined in [24,25,26], where f_ρ stands for the usual decay constant of the ρ meson and f_ρ^T is defined by ⟨0|ū σ_{µν} d|ρ⟩ = i f_ρ^T (e^{(λ)}_µ q_ν − e^{(λ)}_ν q_µ); φ_∥(u, µ) and φ_⊥(u, µ) denote the leading twist-2 distribution amplitudes, g_⊥(u, µ) and h_s(u, µ) refer to the twist-3 ones, and the others are all of twist-4. With these expressions, a straightforward calculation yields the QCD forms (15) and (16) for H(p^2, (p+q)^2) and H̃(p^2, (p+q)^2), in whose derivation we have introduced two auxiliary functions. Both QCD expressions need to be converted into the form of a double dispersion integral in order to match them onto the corresponding hadronic forms; we note, however, that it is sufficient to do this only for the twist-2 and twist-3 parts. The relevant QCD spectral densities are easily obtained with the standard method [29]; a useful formula involves the Borel operators B_{M_1^2} and B_{M_2^2}, where the Borel parameters M_1^2 and M_2^2 are associated with p^2 and (p+q)^2, respectively. Because of the symmetry of the correlator, we can set M_1^2 = M_2^2, so that the distribution amplitudes entering the QCD spectral densities take only their values at the symmetry point u_0 = 1/2. We omit the final expressions for the QCD spectral functions to save space. To proceed, we perform the double Borel transformation −p^2 → M_1^2, −(p+q)^2 → M_2^2 on both the hadronic and QCD representations.
Using quark-hadron duality then yields the final sum rules for the products f_{B*}^2 g_{B*B*ρ} and f_{B*}^2 f_{B*B*ρ}. We proceed to the numerical computation of the sum rules. To specify the input consistently, we take [7] m_b = 4.7 ± 0.1 GeV, m_{B*} = 5.325 GeV, f_{B*} = 160 MeV and s_0 = 35 ± 1 GeV^2 for the bottom channels. Some of the parameters related to the ρ meson are chosen as m_ρ = 770 MeV, f_ρ = 216 ± 3 MeV and f_ρ^T(µ = 1 GeV) = 165 ± 9 MeV [27]. The most important sources of uncertainty are the light cone wavefunctions of the ρ meson. These wavefunctions can be expanded in terms of matrix elements of conformal operators. Based on this expansion, the first attempt to understand the twist-2 distribution amplitudes of light vector mesons was made in [28]. Since a modified result was put forward in [24], the model wavefunctions of light vector mesons, up to twist-4, have undergone successive examination and improvement [25,26,27]. Very recently, a more systematic inspection of the existing model parameters was carried out and the updated results were reported in [27]. Here we use the findings of [27] for the relevant distribution amplitudes of the ρ meson. In the present application the appropriate normalization scale should be set at the typical virtuality of the b quark. At this scale, the numerical values of the nonperturbative quantities involved, including the model parameters and f_ρ^T, are obtained using the renormalization group equations. With these inputs fixed, the range of the Borel variable M^2 can be determined by demanding that the twist-4 parts contribute less than 10%, while the higher resonance and continuum contributions do not exceed 30%. In both cases, the Borel interval satisfying these criteria is 6 ≤ M^2 ≤ 12 GeV^2. From these sum rule "windows" it follows that f_{B*}^2 g_{B*B*ρ} = 0.048 ± 0.013 GeV^2 and f_{B*}^2 f_{B*B*ρ} = 0.021 ± 0.007 GeV. The quoted uncertainties reflect the variations of the b quark mass m_b, the threshold s_0 and the Borel parameter M^2. Dividing these two sum rules by f_{B*}^2 yields g_{B*B*ρ} = 1.88 and f_{B*B*ρ} = 0.82 GeV^{-1}, where we give only the central values. With definitions differing from the present ones by constant factors, the remaining two couplings g_{BBρ} and f_{B*Bρ} have been computed in the same approach [21]. However, those numerical results cannot be used directly in a consistent discussion, because they were derived with inputs, including the model wavefunctions, that differ from the improved ones used here. An updated estimate is therefore required. In passing, it is worth mentioning that we found an error in the previous LCSR calculation of the B*-B-ρ coupling (the factor of 3/4 in the term proportional to A_T(1/2) should be −1/4), although it has a small numerical impact. With this change, the LCSR expressions of present concern follow trivially from (17) and (23) of [21] for the corresponding products, where the two additional parameters for the bottom meson channels are taken as m_B = 5.279 GeV and f_B = 140 MeV.
We find that these two sum rules share a Borel range, about the same as that for the B*B*ρ case, and provide the numerical predictions f_B^2 g_{BBρ} = 0.037 ± 0.008 GeV^2 and f_{B*} f_B f_{B*Bρ} = 0.019 ± 0.005 GeV, from whose central values we obtain g_{BBρ} = 1.89 and f_{B*Bρ} = 0.85 GeV^{-1}. A physical interpretation of the LCSR predictions presented above is in order. As shown explicitly, the approximate relations g_{BBρ} ≈ g_{B*B*ρ} and f_{B*Bρ} ≈ f_{B*B*ρ} hold for the coupling constants appearing in the effective lagrangian (1). This can be understood intuitively from the construction of the effective lagrangian: the terms proportional to g_{BBρ} and g_{B*B*ρ} can be identified as describing the charge interactions between the B (B*) and ρ meson fields, while the other two parts may be interpreted as describing the magnetic interactions of the underlying bottom mesons with the ρ meson field. It is therefore not surprising that the relations g_{BBρ} = g_{B*B*ρ} and f_{B*Bρ} = f_{B*B*ρ} should hold exactly in the limit m_Q → ∞, because of heavy quark spin symmetry. In the following section we return to this point and test whether the LCSR calculations reproduce precisely the asymptotic relations deduced from heavy quark spin symmetry. The charm meson case can be discussed in parallel, using the LCSR formulae (18)-(21) with the corresponding inputs. It is generally believed that in this case the gluon emission corrections from the charm quark may be relatively important, due to the smaller heavy quark mass; still, we omit them for consistency. The parameters for the charm channels are set as in [7]: m_c = 1. We would like to compare the present LCSR predictions with those obtained using the ρ meson inputs proposed earlier in [24,25], to see to what extent the LCSR calculations have been improved by the updated parameters. In the bottom case, using the updated parameters increases the LCSR evaluations by about 20% for the charge interactions and by a few percent for the magnetic interactions; the corresponding changes in the charm case are of order 10% and a few percent, respectively. Further improvement of the LCSR results presented here can be expected, since the qqg components of the ρ meson and the QCD radiative corrections have not been taken into account, and further updates of the nonperturbative inputs, in particular the light-cone wavefunctions of the ρ meson, are possible. At the present accuracy, the numerical results indicate that heavy quark spin symmetry is well preserved for both the charge and the magnetic interactions of the negative-parity heavy mesons with the ρ meson, whereas heavy flavor symmetry is violated to a different degree in the two cases. To quantify the size of the heavy flavor symmetry breaking effects, we consider ratios of the corresponding sum rule results in the bottom and charm cases. Whereas the estimated ratios for the charge interactions deviate from 1 by about 20%, the breaking effect is at the level of a few percent for the magnetic interactions.
Heavy Quark Limit and Determination of β and λ In this section, we take a closer look at the behavior of the strong couplings in the heavy quark limit, check the consistency of the LCSR results with the predictions of heavy quark spin symmetry, and assess the low energy effective parameters β and λ. The desired asymptotic forms can be obtained from the corresponding finite-quark-mass sum rules by working out explicitly the m_Q scaling behavior of the parameters that depend on the heavy degree of freedom. To be specific, we need to make the appropriate substitutions in the sum rule results (18)-(21), with Λ̄ being the binding energy of the light degrees of freedom in the field of the static b quark and F a low energy parameter. The intrinsic sum rule parameters M^2 and s_0 must also be rescaled, with T and ω_0 being the m_Q independent Borel variable and threshold, respectively. With these expansions, it turns out that in the limit m_Q → ∞ the sum rules (18)-(21) comply precisely, as desired, with the asymptotic relations g_{BBρ} = g_{B*B*ρ} and f_{B*Bρ} = f_{B*B*ρ}, and thus reduce to two independent expressions. Consequently, the m_Q scaling behavior of the strong couplings is reproduced correctly, and the LCSR approach is consistent with the effective chiral lagrangian approach. Denoting the asymptotic forms of g_{BBρ} (g_{B*B*ρ}) and f_{B*Bρ} (f_{B*B*ρ}) by G_1 and G_2, respectively, the resulting sum rules take the forms (24) and (25). Their numerical analysis proceeds with the same procedure as in the finite heavy quark mass case. First, the binding energy Λ̄, as an important input, has to be fixed in a consistent way to reduce the numerical uncertainty as much as possible. It is easily obtained by taking the logarithmic derivative of (24) or (25) with respect to the inverse of the Borel parameter T. The result from (24), for instance, is Λ̄ = 0.43 ± 0.15 GeV with the Borel interval 0.5 ≤ T ≤ 1.3 GeV and threshold ω_0 = 1.3 ± 0.1 GeV. With these inputs we get F^2 G_1 = 0.210 ± 0.031 GeV^3 and F^2 G_2 = 0.098 ± 0.013 GeV^2. The variations of the LCSR results with the Borel parameter T are depicted in Fig. 1. To assess the asymptotic couplings G_1 and G_2, we could use the determination without QCD radiative corrections [30], F = 0.30 ± 0.05 GeV^{3/2}. Instead, we prefer to substitute the sum rule expression for F directly into (24) and (25), which frees the numerical results from a large uncertainty and yields G_1 = 2.36 ± 0.32 and G_2 = 1.09 ± 0.15 GeV^{-1}, compatible with the sum rules for finite heavy quark mass. Now we are in a position to determine the effective parameters β and λ. In the context of the effective chiral lagrangian [23], the related hadronic matrix elements obey, at leading order in 1/m_{B^(*)}, the parameterizations (26)-(28), where g_V = m_ρ/f_π ≈ 5.8 [23] and v denotes the velocity of the heavy mesons. Confronting the hadronic matrix elements in (2)-(4) with those in (26)-(28), we obtain the corresponding asymptotic relations. Using these relations and the sum rule results for G_1 and G_2, we get β = 0.81 ± 0.11 and λ = 0.38 ± 0.05 GeV^{-1}.
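As a consistency check, if one assumes that the asymptotic relations take the simple form G_1 = β g_V/2 and G_2 = λ g_V/2 (an assumption inferred here purely from the quoted central values, not a relation reproduced from the text above), the quoted numbers are recovered:

\beta \simeq \frac{2 G_1}{g_V} = \frac{2 \times 2.36}{5.8} \approx 0.81, \qquad \lambda \simeq \frac{2 G_2}{g_V} = \frac{2 \times 1.09}{5.8} \approx 0.38\ \mathrm{GeV}^{-1},

in agreement with the central values given above.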
The authors of [17] give an estimate of the effective couplings β and λ. They consider the electromagnetic transition of a heavy pseudoscalar meson and assume that the hadronic matrix element of the light quark current is dominated by the ρ, ω and φ vector mesons; current conservation then leads automatically to their result for β. To evaluate the parameter λ, they combine several different approaches. The prescription is to compute one of the B → K* form factors at the squared momentum transfer q^2 = q^2_max using the effective chiral lagrangian and the B*_s dominance model, respectively, and then to equate the two results to extract λ, which enters the effective theory result. The pole model representation of the form factor is fixed by matching it, at q^2 = 17 GeV^2, to the corresponding theoretical prediction from LCSR's and lattice QCD. In this way one gets λ = 0.57 GeV^{-1}. It is also possible to extract λ from data on the D → K* form factor at largest recoil, by extrapolating the form factor obtained at zero recoil in the effective chiral lagrangian approach by means of vector dominance [23]. The λ extracted this way has a somewhat smaller central value, λ = 0.41 GeV^{-1}. These existing determinations of β and λ are generally expected to carry large uncertainties, especially in the λ case, where combining several different approaches to the form factor inevitably introduces some inconsistency into the calculation. Focusing on the central values, we find that QCD LCSR's predict, for the parameter β, a numerical result nearly the same as that of the pole model. For λ, good numerical agreement is also observed with the value extracted from the data, whereas there is a deviation of about −30% from the value of [17]. On the whole, our LCSR results for β and λ are compatible with those of other approaches within errors. Summary The strong interactions of the negative-parity heavy mesons with the ρ meson can be described uniformly by an effective lagrangian respecting SU(2) invariance in isospin space. The effective lagrangian contains four independent coupling parameters, which characterize the strong interaction dynamics among the underlying meson fields. Using the QCD LCSR method and recently updated model parameters for the light-cone distribution amplitudes of the ρ meson, we have presented a complete discussion of these couplings. Apart from updated LCSR results for g_{BBρ} and f_{B*Bρ}, we give a detailed LCSR estimate of g_{B*B*ρ} and f_{B*B*ρ}, about which little was known before. The charm meson case is also examined in the same framework, which is especially important for understanding FSI effects in B decays. A systematic numerical discussion is given, including a physical interpretation of the sum rule results and a numerical comparison with LCSR computations that use earlier model wavefunctions as inputs. We also examine the asymptotic forms of the LCSR results in the heavy quark limit. As shown explicitly, the LCSR approach correctly reproduces the m_Q scaling behavior of the physical quantities in question and is thus consistent with the results of heavy quark symmetry. Needless to say, this considerably enhances our confidence in applying the LCSR method to the calculation of nonperturbative quantities.
Finally, we assess the low energy parameters β and λ appearing in the corresponding effective chiral lagrangian and compare the present and previous determinations numerically. Effective lagrangian approaches, using the present findings as inputs and in conjunction with other nonperturbative methods, could help us gain more knowledge of the long distance dynamics in heavy meson weak decays, which would no doubt further our understanding of the standard model of particle physics.
Structural equation modelling with lavaan: a tutorial and an intuitive introduction The goal of this paper is to present a tutorial on structural equation modelling ("SEM"). SEM is a combination of multivariate linear regression and path analysis models. We will discuss path analysis, measurement models, measurement invariance and when or how to use them, twin studies, and longitudinal data analysis. In this tutorial, we shall use the free and open source package "lavaan" in R. Structural equation modelling in health sciences and epidemiology Structural equation modelling ("SEM") is a combination of multivariate linear regression and path analysis models. In this brief hands-on tutorial, we will discuss path analysis, measurement models, measurement invariance and when or how to use them, twin studies, and longitudinal data analysis using SEM. We shall use the free and open source package "lavaan" in R, the free and open source statistical programming language. Steps of SEM As structural equation modelling is graphical, we recommend that you follow this sequence:
1. Draw your model as a system of paths
2. Input data in the form of a covariance or correlation matrix
3. Identify the model
4. Specify the model
5. Assess parameter estimates
6. Assess fit measures (chi-square, df, residual matrix, GFI, RMSEA)
7. Check the modification indices
8. Rerun the model until you get the best fit of the data to the model and theory
Absolute first step before you proceed: load the packages. Assuming that you use R for your analyses, you need to load the following packages after installing R:
library(lavaan)
library(dagitty)
library(ggdag)
library(tidyverse)
library(DiagrammeR)
library(DiagrammeRsvg)
library(rsvg)
Step 1: Notes on the system of paths and directed acyclic graphs (DAGs) Everything graphical in SEM starts with path analysis. Sewall Wright (1921), a hundred years ago, described a system for finding the correlation between two variables, X and Y, using a system of paths (Denis 2021). In this approach, if a system of paths exists between two variables X and Y, the products of the path coefficients along the sequences of paths that traverse between the two variables are added to the path coefficient of any direct path between X and Y to derive their correlation (Figure 1). Figure 1. A basic path diagram. The figure presents a system of paths that can be connected to derive covariances between pairs of variables. These paths can be traced from one variable to another according to set rules. Sewall Wright described this system of path tracing rules as follows: 1. A path can start from one variable to be connected to another variable, and can start either forward or in reverse relative to the direction of the arrowhead 2. Once started in one direction, the path must continue in the same direction unless it meets another path in the reverse direction, at which point it can proceed no further 3. A path can contain only ONE curved double-headed arrow. A curved double-headed arrow signifies either a covariance between two variables or the variance of a single variable 4.
A path cannot go through the same variable twice; that is, a path can only go through one variable at a time. Then, once all the valid paths are identified, the path coefficients along each path are multiplied together and added to the direct path coefficient, if one exists between the two variables, to derive the correlation between the two variables. With this information, we can trace the following valid paths in Figure 1: x-a-b-c-y, x-c-y, x-d-e-y. These are the only three valid paths. No direct path exists between x and y, and all other paths are either invalid or blocked one way or another. In order to derive the correlation between x and y, we add the products of the individual path coefficients as follows: cor(x,y) = (x-a)*(a-b)*(b-c)*(c-y) + (x-c)*(c-y) + (x-d)*(d-e)*(e-y)
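To make the tracing-rule arithmetic concrete, here is a small R sketch that evaluates cor(x,y) for Figure 1; the coefficient values below are made-up placeholders, not taken from the figure:

# Hypothetical standardised path coefficients for the paths in Figure 1
p_xa <- 0.50; p_ab <- 0.40; p_bc <- 0.30   # legs of the path x-a-b-c-y
p_xc <- 0.20; p_cy <- 0.60                 # legs of the path x-c-y
p_xd <- 0.30; p_de <- 0.50; p_ey <- 0.40   # legs of the path x-d-e-y

# Wright's rule: multiply the coefficients along each valid path, then add the paths
cor_xy <- (p_xa * p_ab * p_bc * p_cy) +    # x-a-b-c-y
          (p_xc * p_cy) +                  # x-c-y
          (p_xd * p_de * p_ey)             # x-d-e-y
cor_xy   # implied correlation between x and y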
Figure 2. Another system of paths, as used in a measurement model or confirmatory factor analysis model. The path model in Figure 2 is a simple measurement model or confirmatory factor analysis model with one latent variable ("lv1") and three manifest variables (mv1 … mv3). You can see that a set of six paths connects the manifest variables. In confirmatory factor analysis, we estimate or constrain such path coefficients, and the path coefficients are used to derive the variances and covariances of these variables. The path coefficients also tell us the effect of one variable on another. For example, the effect of lv on mv1 in Figure 2 is determined by the path coefficient of the path connecting lv with mv1. Note another feature: all the arrows in these diagrams point in one direction, and the variables are all connected by arrows that move in one direction. Such graphs are referred to as directed acyclic graphs (DAGs), as no variable has arrows that eventually return to itself, closing a loop. DAGs are visual tools to directly observe the causal relationships between exposures and outcomes, including mediators, confounders, and effect modifiers; this is perhaps as accurate a definition of the role of DAGs as you can get (Rohrer 2018). Path diagrams for structural equation models ("SEM") use symbols that help readers understand what these models are doing. However, as our aim is eventually to discuss structural causal models in the light of structural equation models, we will only briefly mention these symbols here and, for uniformity, continue with the models as presented. Table 2 provides a brief description of the symbols used in structural equation models and the conventions we adopt for our purposes in this tutorial. We will assume that all exogenous latent and manifest variables have variances, so we do not show them separately, and that all endogenous latent and manifest variables have exogenous error terms. Hence we suggest the above scheme; in addition, using the dagitty and ggdag packages in R helps us draw these graphs in a uniform way. Alternatively, publication-quality causal and structural equation graphs can be drawn using the graphviz and DiagrammeR packages, but these are more time consuming. For rapid visualisation of models, we recommend dagitty. Path analysis in a measurement model: partition of variances Refer to Figure 2, where we see a measurement model with one latent variable and three manifest variables. Here we would like to use the path tracing rules to derive the variance of manifest variable mv1. Here is the procedure: we trace ALL paths that BEGIN at mv1 and end on mv1, and we multiply and add the path coefficients of all such paths. What paths exist? A path starts from mv1, goes to lv and returns to mv1 (we call this mv1-lv-mv1); another path is mv1-e1-mv1; no other path starts from mv1 and ends at mv1. Hence, variance of mv1 = (mv1-lv) * var(lv) * (lv-mv1) + (mv1-e1) * var(e1) * (e1-mv1). Now, mv1-lv and lv-mv1 are the same path, so that path coefficient gets squared; mv1-e1 and e1-mv1 are also the same path, and we set the coefficients of these paths to 1.0 by convention; if we standardise lv, then var(lv) = 1.0. So, from here, we can say that var(mv1) = (square of the path coefficient from the latent variable) + (variance of the error term). The term "square of the path coefficient" is referred to as the "communality," because this part of the total variance (or variability, so to say) of mv1 is EXPLAINED by the latent variable that is common to all the other manifest variables in the model that receive arrows from the latent variable. The path coefficient is also referred to as the factor loading. Using the path analysis approach, you will also see that, as the error terms are uncorrelated, the correlation (or covariance) between mv1 and mv2 is given by (mv1-lv) * var(lv) * (lv-mv2).
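As a concrete illustration, here is a minimal lavaan sketch of a one-factor measurement model like Figure 2. It uses lavaan's built-in HolzingerSwineford1939 data set as a stand-in for the hypothetical indicators mv1 … mv3:

library(lavaan)

# One latent variable measured by three manifest variables
cfa_model <- '
  lv =~ x1 + x2 + x3
'

# std.lv = TRUE standardises the latent variable, so var(lv) = 1
fit <- cfa(cfa_model, data = HolzingerSwineford1939, std.lv = TRUE)

# Standardised loadings: squaring a loading gives that indicator's communality,
# and 1 minus the communality is its error (unique) variance
summary(fit, standardized = TRUE)
inspect(fit, "rsquare")   # communalities (R-squared) of the indicators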
These concepts are fundamental to understanding what goes on in SEMs. We have seen one type of model, the measurement model or confirmatory factor analysis model, where one or more latent variables load on manifest variables. Consider next a diagram in which lv2 is regressed on lv1, so that the path coefficient of the lv1-lv2 path can be viewed as a beta (regression) coefficient; a lavaan sketch of such a model is given below. This is the path diagram of a full structural equation model. Note that this model incorporates BOTH a set of measurement models (two measurement models here) and a structural model (the lv1-lv2 path). You can make these models as complex or as simple as you want: you could have only one manifest variable for the structural part, where you would regress the manifest variable on the latent variable (simple), or you could have many more latent variable models that you link up to form the complex patterns you would like to analyse.
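A minimal lavaan sketch of such a full structural equation model might look as follows, again using the built-in HolzingerSwineford1939 data as a stand-in for the hypothetical indicators mv1 … mv6:

library(lavaan)

sem_model <- '
  # measurement part: two latent variables, each with three indicators
  lv1 =~ x1 + x2 + x3
  lv2 =~ x4 + x5 + x6

  # structural part: lv2 regressed on lv1;
  # the coefficient of this path is interpreted like a beta coefficient
  lv2 ~ lv1
'

fit <- sem(sem_model, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE, standardized = TRUE)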
Path diagrams of models with meanstructures and group comparisons So far, we have confined our discussions to models that involve only one group of people and only explained covariances and variances. For example, a measurement model would be well suited to testing the construct validity of a questionnaire you have set up to investigate some health construct. Say you have developed a questionnaire that aims to tap an individual's concept of "health" and decide to distribute this questionnaire to 200 individuals and obtain data from them. Each individual is asked five questions, and you could build a measurement model out of these five questions and a latent or unobserved construct of "health" from your research participants. Such a procedure would provide you with an estimate of whether you were able to tap the construct of "health" based on the items you asked your participants. Now imagine that among your participants there are those with chronic diseases (such as diabetes, high blood pressure, chronic heart disease and so on) and those who are otherwise healthy adults with no evidence of any disease. You may claim that the questions were perceived in the same way by members of both groups, and that the factor loadings of the latent variable (in this case "health") would be equal in both groups; even if the unexplained and explained variances of the manifest variables were similar in both groups, the groups might still have different average values on those scores. In other words, you may want to examine group differences or the invariance of your measurements across groups to make sure that your model is robust. This is where a mean structure is important (Meredith 1993). In SEM, the means of the manifest variables are referred to as "intercepts" and the mean of a latent variable as the "factor mean." The factor mean is derived from a constant term, represented by a triangle (we present it here in the form of a circle with 1.0 as its value). In a standardised solution, the path coefficient from the constant to the latent variable is set to 0 (mean = 0 for a standardised variable). Also note that, as this is a constant, its variance is 0, and therefore it contributes only to the means of the latent and manifest variables. When the mean of the latent variable is set to 0, the intercepts of the model are the same as the means of the individual manifest variables. Such a structure is used to evaluate whether the measurements and the measurement properties (such as the factor loadings and the variance of the latent variable) REMAIN THE SAME in each group. The groups themselves can be defined on the basis of categorical variables such as sex, race, or case-control status, for example. Both latent variables are "lv", except that for group 1 it is 1lv and for group 2 it is 2lv (similarly, the manifest variables mv1, …, mv3 and error terms e1 … e3 are prefixed with 1 for group 1 and 2 for group 2). Three arrows on each side go from the constant 1 to the manifest variables; these are the intercepts. Then we have the measurement models of the two groups, group 1 and group 2. Now, a few things to consider here: 1. For each manifest variable in each group, we have its intercept, its variance, and its covariances, on the basis of which the measurement model is formed 2. For each factor in each group, there are three manifest variables (in this case) 3. For each factor (i.e., latent variable) in each group, we have its latent mean (the arrows that go from 1 to 1lv and 2lv), its factor loadings, and its variance. We believe that the groups will differ in some ways, but we must make sure that they were measured in EXACTLY the same way. So, while we study two (or more) groups, we can restrict or constrain the group parameters in a number of ways (a lavaan sketch follows below): At the least, we constrain only the configuration of the latent and manifest variables to remain invariant between the groups, while everything else can vary; this is configural measurement invariance. Then we can require that the factor loadings be constrained to be equal in both groups, to ensure that both groups had similar measurements; this is weak measurement invariance, because we still leave open the possibility that the variances of the latent variables themselves differ between the groups. Then we add the constraint that both the factor loadings and the latent variable variances remain identical across the groups, while the latent means can still vary; this is strong invariance and is the most practical assumption. Finally, we can add the constraint that everything is identical between the groups and see what factor structure satisfies this condition and how the model holds; this is referred to as strict measurement invariance. Using strong invariance to examine group differences is important for ascertaining what differences exist when the constructs were measured identically. Such groups can be two periods of time for the same population or sample, two conditions (cases and controls, exposed and non-exposed, or treatment and control), gender, or indeed any characteristic of the individuals. It can be particularly useful for comparing two periods of time (as in a baseline and a final time point).
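As an illustration, here is a minimal lavaan sketch of a multi-group comparison with increasingly strict equality constraints. It uses lavaan's conventional group.equal constraints (loadings, then intercepts, then residuals), which differ slightly in detail from the variance-based description above; the built-in HolzingerSwineford1939 data and its "school" grouping variable stand in for the hypothetical health questionnaire and disease-status groups:

library(lavaan)

inv_model <- '
  health =~ x1 + x2 + x3 + x4 + x5
'

# configural: same model structure in both groups, all parameters free
fit_configural <- cfa(inv_model, data = HolzingerSwineford1939, group = "school")

# weak (metric): factor loadings constrained equal across groups
fit_weak <- cfa(inv_model, data = HolzingerSwineford1939, group = "school",
                group.equal = "loadings")

# strong (scalar): loadings and intercepts constrained equal
fit_strong <- cfa(inv_model, data = HolzingerSwineford1939, group = "school",
                  group.equal = c("loadings", "intercepts"))

# strict: loadings, intercepts and residual variances constrained equal
fit_strict <- cfa(inv_model, data = HolzingerSwineford1939, group = "school",
                  group.equal = c("loadings", "intercepts", "residuals"))

# compare the nested models
anova(fit_configural, fit_weak, fit_strong, fit_strict)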
Group comparison in the context of twin studies Group comparison is particularly useful in the context of twin studies (Neale and Cardon 2013). The problem and the solution are as follows. Say you want to test the hypothesis that BMI is determined by genes. BMI is a quantitative trait, so rather than one gene, we hypothesise that several genes add up to exert their effects on BMI. Yet we may also argue that, in addition to genes, there are environmental variables that determine individual BMI scores. To study such effects, suppose you have collected data from 400 twin pairs (200 monozygotic and 200 dizygotic pairs); the members of each pair were raised in the same household and shared the same environment, but they also had unique environmental factors beyond their shared household and rearing (such as different friends, different universities where they studied, different occupations, etc). So, if you take PAIRS of monozygotic and dizygotic twins and examine these TWO groups using a path analysis model, you will have the following: For each of the 200 monozygotic pairs (and likewise for the dizygotic pairs), you will have data on BMI for Twin 1 and Twin 2 (T1 and T2), giving a variance-covariance matrix whose pattern can be explained by latent additive genetic (A), shared/common environmental (C), and unique environmental (E) components. For MONOZYGOTIC twins, as they have EXACTLY the same complement of genes, their As are perfectly correlated, i.e., cor(A1, A2) = 1.0. For DIZYGOTIC twins, as they share 50% of their genes, cor(A1, A2) = 0.5. For DIZYGOTIC twins, as they share 25% of their dominance effects, cor(D1, D2) = 0.25. For BOTH monozygotic and dizygotic twins, their shared environments (Cs) are perfectly correlated, so cor(C1, C2) = 1.0. For BOTH mono- and dizygotic twins, their unique environments are uncorrelated, so cor(E1, E2) = 0. Assuming standardised coefficients, a^2 + c^2 + e^2 = 1.00; a^2 is also referred to as the "narrow heritability". Figure 5 shows the twin study path diagram. While analysing these paths, we have to be careful: although the Cs are common to both MZ and DZ twins, the path coefficient that explains the corresponding part of the variation in the phenotype (say BMI in the example) will be very low if not negligible. This needs to be taken into account as you assess these models, so setting or constraining the C path coefficients close to 0 (something like 0.001) is a useful strategy. Path analysis of longitudinal data or latent growth curve models So far, we have discussed path models that are all assessed over a single period of time, and even when we discussed measurement invariance for two groups in two periods of time, the scope was limited. We now turn to the problem of what happens when you have correlated measures from repeated measurements taken over three or more periods. These models are referred to as latent growth curve models. There are two major differences between the models we have studied so far and latent growth curve models. First, so far we have assumed that the data for all our models came from studies conducted over a single specified period of time. In addition, change must have some kind of rate: it could be linear, it could be quadratic. Equally, as we factor in time, we must consider the intervals over which such change occurs: are the data collected at regular intervals, as in every n years, or are they collected every two years for the first two waves and then every five years for the next few waves, and so on? Figure 6 shows a simple latent growth curve model.
Figure 6. A simple latent growth curve model. As you can see, x1 … x4 are the measured X variables that we want to model over four periods of time; these are our manifest variables. The arrows from i, the latent intercept, are fixed at 1.0 to indicate that the latent intercept has the same influence on all the time-bound values. The arrows from the latent slope, s, to x1 … x4 are fixed as follows: if we want to model the change as linear, we start with 0, so in this case the values will be 0, 1, 2, 3. The 0th time point is the beginning, or the first measurement. Depending on the frequency with which the data were collected, we adjust the values of the path coefficients of the paths that run from s to the individual xs. Finally, there are the error terms for each time point observation; here we have shown them as uncorrelated, but they can be correlated. Code of all the graphs and diagrams we have presented so far: Step 2: Input data in the form of a covariance matrix In order to analyse data in SEM, you will need the following: a correlation or a covariance matrix; a set of standard deviations (if you use a correlation matrix); a set of means for the individual manifest variables; and the sample size (the more the better). In our SEM examples here, we will use the lavaan package in R. You can download and install lavaan from lavaan's website. There are several ways of getting covariance or correlation matrices. Direct input of numbers: you can input a raw set of numbers as a variance-covariance or correlation matrix (lower or upper triangle) and, using the function lav_matrix_lower2full(), obtain a full matrix in lavaan. Convert between correlation and covariance matrices: if you have a correlation matrix and a set of standard deviations, you can convert the correlation matrix to a covariance matrix using the function cor2cov(matrix, sds). Directly compute the matrix from the variables in your data set: use the function cov() on the relevant columns of your data frame, remembering to handle missing values (for example, cov(dat[, c("v1", "v2", "v3")], use = "complete.obs")). Now that we have seen evidence from Europe, Australia, and other parts of the world suggesting that several aspects of personal well-being and better living are related to social well-being, we can ask a related question: what are the relative genetic and environmental contributions to the sense of psychological well-being? We will use data from Archontaki et al. (2013) on a twin study of psychological well-being; for the purpose of demonstration, we will only use a small set of twin correlations, but you can read the entire study (Archontaki, Lewis, and Bates 2013). We will replicate a small subscale from the data presented in the article; here we analyse the "personal growth" scale. We will set up the data for analysis as follows:
# for mz twins, we do:
mz = lav_matrix_lower2full(c(1.00, 0.38, 1.00))
rownames(mz) = colnames(mz) = c("T1", "T2")
Now that we have set up the data for this study, we will run the models and evaluate them as follows. Here we will evaluate an ACE model, in which we test the path coefficients and heritability estimates for additive genetic effects (As), common environmental effects (Cs), and unique environmental effects (Es); a sketch of such a specification is given below.
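A minimal two-group ACE sketch in lavaan might look like the following. The MZ correlation (0.38) is the one quoted above; the DZ matrix and the sample sizes are illustrative placeholders, and this is only one of several possible ways to code an ACE model:

library(lavaan)

mz <- lav_matrix_lower2full(c(1.00, 0.38, 1.00))
dz <- lav_matrix_lower2full(c(1.00, 0.19, 1.00))   # placeholder DZ correlation
rownames(mz) <- colnames(mz) <- c("T1", "T2")
rownames(dz) <- colnames(dz) <- c("T1", "T2")

ace_model <- '
  # A and C components for each twin; loadings a and c equal across twins and groups
  A1 =~ c(a, a)*T1
  A2 =~ c(a, a)*T2
  C1 =~ c(c, c)*T1
  C2 =~ c(c, c)*T2
  # twin correlations: A correlates 1.0 in MZ pairs and 0.5 in DZ pairs; C correlates 1.0 in both
  A1 ~~ c(1, 0.5)*A2
  C1 ~~ c(1, 1)*C2
  # A and C components are mutually uncorrelated
  A1 ~~ c(0, 0)*C1 + c(0, 0)*C2
  A2 ~~ c(0, 0)*C1 + c(0, 0)*C2
  # E enters as the residual variance of each twin, equal across twins and groups
  T1 ~~ c(e, e)*T1
  T2 ~~ c(e, e)*T2
'

# std.lv = TRUE fixes the latent A and C variances to 1, so a^2, c^2 and e
# partition the standardised phenotypic variance
fit <- cfa(ace_model, sample.cov = list(MZ = mz, DZ = dz),
           sample.nobs = c(200, 200), std.lv = TRUE)
summary(fit, standardized = TRUE)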
Note that as the correlation of the Cs for both MZ and DZ twins is 1.0 (that is, they share the SAME environment), the impact of a shared or common environment on the phenotype is likely to be very low, so we will fix it at a very small value (around 0.001). Our final analysis is based on data in which the Gallup World Poll measured, over time, the proportion of people in various countries who said they were happy (see the linked source for more information; a copy of the data can be downloaded from the link given there). The data were collected between 1984 and 2014, gathered every five years. We will study the longitudinal pattern using an SEM growth model, estimating the baseline percentage from which it began and the slope of the growth.
#modificationIndices(happiness_fit)
As this analysis suggests: about 72% of people across the world reported that they were happy or satisfied in 1993; over time, more people tended to report that they were happy in subsequent surveys; and countries that started with a lower percentage of people reporting that they were happy had a faster rate of growth over subsequent waves. While our analysis bore this out, you can also view the results reported graphically on the Our World in Data website, as shown in the following figure. Figure 8. Trend in the reporting of happiness (share of people who said they were happy across the time span; Our World in Data as the source). As you can see in this figure, countries such as Russia and Zimbabwe, which started at the lower end of the graph, had steeper upward slopes in reported happiness over time, while countries that started with a higher proportion of happy people tended to have a slower trajectory of growth in subsequent waves. Overall, people do seem to get happier over time, at least from 1993 until 2014.
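For reference, a minimal lavaan sketch of the kind of latent growth curve model described above (Figure 6) is shown below. It uses lavaan's built-in Demo.growth data set (variables t1 … t4) as a stand-in for four equally spaced waves of the happiness measure:

library(lavaan)

growth_model <- '
  # latent intercept: loadings all fixed to 1
  i =~ 1*t1 + 1*t2 + 1*t3 + 1*t4
  # latent slope: loadings fixed to 0, 1, 2, 3 for linear change
  s =~ 0*t1 + 1*t2 + 2*t3 + 3*t4
'

# growth() adds the mean structure automatically: the means of i and s are estimated,
# and the intercepts of the observed variables are fixed to 0
happiness_fit <- growth(growth_model, data = Demo.growth)
summary(happiness_fit, fit.measures = TRUE)
# modificationIndices(happiness_fit)   # inspect possible model improvements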
Summary In this tutorial, we provided a brief roundup of structural equation modelling as an analytical strategy. Structural equation modelling is a mix of regression and factor analysis and, as you have seen, it can be used for validating surveys and questionnaires, reducing data, linear regression modelling, analysing group differences, twin data analyses, and growth curve modelling. We used worked examples to show how you can start with simple measurement models, then transform or modify them to test how well they fit the data, and, within the same modelling strategy, test regressions as in structural models. Longitudinal studies and twin studies differ from measurement models and structural models in that they need theoretical understanding to fix and constrain parameter estimates; for longitudinal studies in particular, the goal is to estimate the parameters of the latent intercepts and slopes while constraining the parameters that explain the variance of the time-bound values of the variables. While SEMs are powerful strategies, one needs to deal carefully with missing values, extreme outliers, non-normal distributions of the variables, and categorical variables. Lately, SEM strategies are also being used to model non-parametric equations, as in Structural Causal Models [https://www.causalflows.com/structural-causal-models/]. We have also not touched on the issues of model specification, interpretation of modification indices, and sample size estimation. In subsequent extensions to this tutorial, we will discuss these issues.
Fusion Network for Change Detection of High-Resolution Panchromatic Imagery: This paper proposes a fusion network for detecting changes between two high-resolution panchromatic images. The proposed fusion network consists of front- and back-end neural network architectures that generate dual outputs for change detection. The two networks handle image-level and high-level change information, respectively: the fusion network employs a single-path network for low-level differential detection and a dual-path network for high-level differential detection. Based on the two dual outputs, a two-stage decision algorithm is proposed to efficiently yield the final change detection results; the dual outputs are combined in the two-stage decision through logical operations. The proposed algorithm is designed to incorporate not only the dual network outputs but also neighboring information. In this paper, a new fused loss function is presented to estimate the errors and optimize the proposed network during the learning stage. Based on our experimental evaluation, the proposed method yields a better detection performance than conventional neural network algorithms, with an average area under the curve of 0.9709, a percentage correct classification of 99%, and a Kappa of 75 for many test datasets. Introduction Change detection is a challenging task in remote sensing, used to identify areas of change between two images acquired at different times over the same geographical area. Such detection has been widely used in both civilian and military fields, including agricultural monitoring, urban planning, environmental monitoring, and reconnaissance. In general, change detection involves a preprocessing step, feature extraction, and a classification or clustering algorithm to distinguish changed and unchanged pixels. To obtain good performance, the selected classification or clustering algorithm plays an important role in the field of change detection.
In prior studies, statistical approaches have been proposed to identify changes [1][2][3]. A corresponding maximal invariant statistic is obtained by analyzing a suitable group of transformations that leaves the problem invariant [2]. The general problem of testing equality among M covariance matrices in the complex-valued Gaussian case is then analyzed for synthetic aperture radar (SAR) change detection. The sample coherence magnitude has been proposed as a change metric in [3], and a maximum-likelihood temporal change estimator together with complex reflectance change detection is used for SAR coherent temporal change detection. Currently, classification or clustering, employing supervised or unsupervised learning respectively, has become a common approach to change detection in remotely sensed images. Feature selection and feature extraction are important aspects of this approach. Several detection algorithms using two images have been proposed with different features for different types of applications [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19]. The methods used for change detection have mostly been designed to extract change features such as a difference image (DI) [3][4][5][6][7][8][9], a local change vector [10], or a texture vector [11][12][13]. A DI is a common feature used to represent change information through the subtraction of the temporal images. Local change vectors based on the log ratio have also been used, applying neighboring pixels to avoid a direct subtraction; this method computes the mean value of the log ratio over temporally corresponding neighboring pixels. A texture vector [11][12][13] is employed to extract statistical characteristics. These change features are then fed into a classification or clustering algorithm to determine changed/unchanged pixels. Some unsupervised change detection methods have been proposed based on the fuzzy c-means (FCM) algorithm [14,16]. Such approaches are useful when labels are unavailable at the training stage. However, the learning algorithms in the aforementioned studies are based on observed data without any additional information, so their application can lead to overfitting on invariant changes. Furthermore, without supervision they do not incorporate accurate label information and therefore do not achieve reasonably good change detection rates. Supervised change detection methods, such as the support vector machine (SVM) [11,[16][17][18]], have therefore been proposed. A basic SVM can perform a binary classification into changed or unchanged pixels using texture information or change vector analysis. These algorithms are, however, not ideal for incorporating accurate and full statistical characteristics of large multi-dimensional data, and they do not yield the best detection performance on new datasets.
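For reference, the low-level change features mentioned above are commonly written in the following standard forms (a sketch only; the exact definitions in the cited works may differ in detail):

\mathrm{DI}(i,j) = \bigl|I_1(i,j) - I_2(i,j)\bigr|, \qquad \mathrm{LR}(i,j) = \log\frac{I_2(i,j)}{I_1(i,j)}, \qquad \overline{\mathrm{LR}}(i,j) = \frac{1}{|N_{ij}|}\sum_{(m,n)\in N_{ij}} \log\frac{I_2(m,n)}{I_1(m,n)},

where I_1 and I_2 are the co-registered temporal images and N_{ij} is a neighborhood around pixel (i, j); the mean log-ratio corresponds to the local change vector idea described above.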
Recently, deep convolutional neural networks (DCNNs) have been developed to produce a hierarchy of feature maps through learned filters. A DCNN can automatically learn a complex feature space from a huge amount of image data and can achieve superior performance compared to conventional classification algorithms. A restricted Boltzmann machine (RBM) [19], a convolutional neural network (CNN) [20][21][22], and deep belief networks (DBNs) [23] have been proposed for use in change detection. Such change detection algorithms based on deep learning yield a relatively good performance in terms of detection accuracy. However, most can be categorized as front-end differential change detection using low-level features, such as a difference image, as the feature input of their networks, resulting in sensitivity to several deteriorated conditions caused by geometric/radiometric distortions, different viewing angles, and so on. This front-end differential change detection conducts an early feature extraction of two image inputs into a single-path network. In contrast, back-end differential detection methods employing dual-path networks have been proposed for fusing higher-level features with a long short-term memory (LSTM) model [24] to avoid the use of low-level difference features such as a difference image. In addition, a Siamese convolutional network (SCN) [25][26][27] and a dual-DCN (dDCN) [28] were also proposed to detect changed areas by measuring the similarity of high-level network features. These algorithms achieve a relatively good performance, although false negatives are still observed. To reduce false positives and false negatives in change detection, a fusion network incorporating low- and high-level feature spaces in neural networks is proposed in this paper. For low-level differential features, the difference image is fed into the front-end differential DCN (FD-DCN). For high-level differential features, a back-end differential dDCN (BD-dDCN) is employed. In addition, a two-stage decision algorithm is incorporated as post-processing to enhance the detection rate during the inference stage. Intersection and union operations are employed to validate the change map. First, an intersection operation is used to avoid false positives. The second-stage decision operates a union by considering the local information of the first decision; this stage is developed to validate and repair the change map from the first decision. In addition, this study introduces a new loss function that combines a contrastive loss and a weighted binary cross entropy loss to optimize the high- and low-level differential features, respectively. In our experiments, we found that the proposed algorithm yields a better performance than existing algorithms, achieving an average area under the curve (AUC) of 0.9709, a percentage correct classification (PCC) of 99%, and a Kappa of 75 for several test datasets. This work makes three main contributions. (1) Unlike the existing works mentioned above, we propose a fusion network that combines front- and back-end networks to perform low- and high-level differential detection in one structure. (2) A combined loss function between the contrastive loss and the binary cross entropy loss is proposed to accomplish the fusion of the proposed networks in the training stage. (3) A two-stage decision is presented as post-processing to validate and confirm the change predictions at the inference stage and obtain a better final change map.
This paper is organized into five sections. In Section 2, related studies are briefly described. Section 3 presents the proposed algorithm in detail. Section 4 describes and analyzes the experimental results. Finally, we provide some concluding remarks regarding this research. Deep Convolutional Network and Related Studies on Change Detection Deep neural architectures with hundreds of hidden layers have been developed to learn high-level feature spaces. The convolutional neural network (CNN) is a deep learning architecture that has been shown to be effective in image recognition and classification [29]. The CNN architecture employs multiple convolutional layers, each followed by an activation function, resulting in the development of feature maps. The rectified linear unit (ReLU) is widely used as the activation function in many CNN architectures. To progressively gather global spatial information, the feature maps are sub-sampled by a pooling layer. The final feature maps are connected to a fully connected layer to produce the class probability outputs (P_class), as shown in Figure 1. During the training stage, an objective loss such as cross-entropy is computed, and all of the weighting parameters of the network are updated to reduce the cost function using the back-propagation algorithm.
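A minimal sketch of the generic CNN pipeline described above (convolution, ReLU, pooling, a fully connected layer, and a cross-entropy objective) is given below. This is a toy PyTorch example for illustration only; the layer sizes are placeholders and it is not the network used in this paper.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Conv -> ReLU -> pooling -> fully connected layer -> class logits."""
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),               # progressively gathers spatial context
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 10 * 10, num_classes)  # for 40x40 inputs

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)          # logits; softmax gives P_class

if __name__ == "__main__":
    model = TinyCNN()
    patches = torch.randn(8, 1, 40, 40)    # batch of 40x40 single-band patches
    labels = torch.randint(0, 2, (8,))
    logits = model(patches)
    loss = nn.CrossEntropyLoss()(logits, labels)  # objective minimized by backprop
    loss.backward()
    print(loss.item())
```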
The related studies on change detection based on deep learning can be categorized into two groups based on the type of network that is used: a front-end differential network (FDN) and a back-end differential network (BDN). The front-end network uses low-level differential features such as a DI or joint feature (JF) as the feature input of the network, as shown in Figure 2a. In this case, a network with a single-path architecture receives the extracted DI as low-level differential features of the temporal images to identify changed pixels. Several studies based on an FDN have been proposed to improve the change detection rate. For instance, a deep neural network (DNN) has been applied to detect objects from synthetic aperture radar (SAR) data [30]. The differential feature of the temporal data is employed instead of a DI. This feature is used to solve the initial weight problem through pre-training using the restricted Boltzmann machine (RBM) algorithm; the pre-trained weights are then used as the initial weights of the DNN during the training stage. In contrast, unsupervised change detection has been proposed by combining DBNs with a feature change analysis [23]. The feature maps of temporal input images are obtained using the DBN, and the magnitude and direction of these feature maps are analyzed to distinguish the types of feature changes using an unsupervised fuzzy C-means algorithm. Other unsupervised systems have been proposed by combining a sparse autoencoder (SAE), unsupervised clustering, and a CNN to overcome the change detection problem without supervision [20]. First, a DI is computed using a log-ratio operator. The feature maps of the DI are extracted through the SAE and clustered into change classes as the labels for training the CNN. Next, some feature maps extracted by the SAE are taken as the training data for the CNN. In addition, an autoencoder and a multi-layer perceptron (MLP) have been combined to identify changed pixels [31]. Change detection using faster R-CNN has been proposed for high-resolution images [32]. This work detects changed areas with bounding boxes; the DI is extracted and then fed into faster R-CNN to detect changed locations. Each of these deep learning algorithms tackles the change detection problem using a front-end differential network. This network identifies changes by observing
low-level features such as the DI, which are sensitive to various distortions, including geometric and radiometric distortions and different viewing angles. Another FDN approach to detecting changes has been proposed by joining feature inputs (JF) [23]. Two temporal images are concatenated and fed into a DBN to avoid using a DI for change detection. However, joining the features early in the network causes both low-level differential inputs to be learned dependently in a single network. This leads to a global change detection, resulting in more false positives. Alternative algorithms for change detection have been introduced by employing a high-level differential feature with a dual-path network, as shown in Figure 2b. A Siamese CNN (SCNN) was proposed to detect changed areas in multimodal remote sensing data [27]. This architecture was employed to learn the different characteristics between multimodal remote sensing data: the feature map of each temporal image is learned in its own path of the network, and the Euclidean distance is employed to measure the similarity at the back-end of the network. A similar method was developed based on an SCNN for optical aerial images [25]. A deep CNN was proposed that produces a change detection map directly from two images [33]; the change map was evaluated using the pixel-wise Euclidean distance between high-dimensional feature maps. Another method incorporates a deep stacked denoising autoencoder (SDAE) and feature change analysis (FCA) for multi-spatial-resolution change detection [34]. In that study, denoising autoencoders were stacked to learn local and high-level features in an unsupervised manner; the inner relationship between the multi-resolution image pair was then exploited by building a mapping neural network to identify change representations. A dual-dense convolutional network was presented that incorporates information from neighbor pixels [28]; in that study, a dense connection was used to enhance the features of the changed map information.
All of the above-mentioned BDN architectures yield good performance by inspecting high-level differential features, which reduces false positives and allows higher sensitivity and specificity. Although a high-level differential network can improve the sensitivity and specificity, the false negative rate is still too high for practical applications. The FDN architecture can achieve a relatively high true-positive rate regardless of the number of false positives. In turn, the BDN architecture can reduce the false-positive rate while producing some false negatives caused by the strict decision criteria applied to high-level differential features. In this work, an FDN and a BDN were fused to employ the advantages of both. A post-processing step was then employed during the inference stage to obtain the final decision for change detection. Proposed Fusion Network for Change Detection with Panchromatic Imagery In general, a change detection system involves a pre-processing step to reduce geometric and radiometric distortions for better results. A radiometric correction is applied to remove atmospheric effects for a time-series image analysis. In this study, a radiometric correction was applied by converting digital numbers (DNs) into radiance values. Then, the top-of-atmosphere (TOA) reflectance values were computed using the gain and offset values provided by the satellite provider. In addition, to ensure that the pixels in the image were in their proper geometric position on the Earth's surface, a geometric correction was applied. The parameters (polynomial coefficients) of the polynomial functions were estimated using least-squares fitting with ground control points (GCPs) identified in an unrectified image and corresponding to their real coordinates. A digital elevation model (DEM), namely the shuttle radar topography mission DEM (SRTM DEM), was then used to correct the optical distortion and terrain effect. The corrected images were then incorporated into the proposed network to detect changes.
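The pre-processing chain described above can be illustrated with a rough sketch of its two numeric pieces: the DN-to-radiance conversion with provider gain/offset values and a least-squares fit of polynomial warp coefficients from GCPs. The gain, offset, and GCPs below are synthetic placeholders, and the full TOA reflectance conversion and the DEM-based terrain correction are omitted.

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Convert digital numbers to at-sensor radiance with provider gain/offset."""
    return gain * dn.astype(np.float64) + offset

def fit_polynomial_warp(gcp_src, gcp_dst, order=1):
    """Least-squares fit of polynomial mapping coefficients from ground control points.

    gcp_src, gcp_dst: (N, 2) arrays of (x, y) in the unrectified image and the
    reference coordinates, respectively.
    """
    x, y = gcp_src[:, 0], gcp_src[:, 1]
    if order == 1:
        design = np.column_stack([np.ones_like(x), x, y])          # affine terms
    else:
        design = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeff_x, *_ = np.linalg.lstsq(design, gcp_dst[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(design, gcp_dst[:, 1], rcond=None)
    return coeff_x, coeff_y

if __name__ == "__main__":
    dn = np.random.randint(0, 16384, (4, 4))
    print(dn_to_radiance(dn, gain=0.02, offset=1.5))       # placeholder gain/offset
    src = np.array([[10, 10], [100, 20], [30, 120], [150, 140]], dtype=float)
    dst = src + np.array([5.0, -3.0])                      # synthetic GCP shift
    cx, cy = fit_polynomial_warp(src, dst)
    print(cx, cy)
```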
To achieve change detection, the proposed network employs a fusion network that fuses the FDN and BDN architectures. Dual outputs were generated to solve the low-level and high-level differential problems. For the training stage, a contrastive loss function and a weighted binary loss function were combined to optimize the proposed fusion network parameters. In addition, a post-processing step was applied to validate and reject false changes during the inference stage; intersection and union operations were then applied to the dual outputs of the proposed network. With the proposed change detection, the false-positive and false-negative rates could be reduced, resulting in high sensitivity and specificity for a proper change detection. The symbols used in the proposed method are tabulated in Table 1: N_1 and N_2 denote patch networks 1 and 2, which correspond to the back-end network; N_3 denotes patch network 3, which corresponds to the front-end network; F^i_{l,d_r} denotes the feature maps of the l-th layer at the r-th dense block of the i-th network; P_1 and P_2 are the outputs of N_1 and N_2, respectively; D is the dissimilarity distance; O is the change detection probability output of N_3; and H(·) denotes the incorporation of a batch normalization (BN), a 3 × 3 convolution, and ReLU applied up to the (l−1)-th layer at the r-th dense block of the i-th network. Fusion Network for Change Detection For change detection, an FDN architecture is commonly used to identify changed pixels. Such an architecture uses low-level differential features that are relatively sensitive to noise. This is caused by the direct comparison of low-level features, in which misalignments from geometric error and a different viewing angle are very influential. The FDN assigns a DI or JF to a single-path network, which learns both low-level features dependently; this makes learning invariant changes under the above-mentioned noisy conditions difficult. Thus, this approach produces a global change detection, resulting in true positives but also more false positives. In addition, BDN architectures are designed to avoid low-level differential features, thereby reducing the false-positive detection rate. These architectures apply a strict identification to the high-level differential, which may cause some false negatives. Therefore, an FDN is suitable in terms of the true-positive detection rate, and a BDN is extremely reliable for overcoming false positives. To obtain a proper change detection, a fusion network architecture is proposed by fusing an FD-DCN and a BD-dDCN with dense connectivity of the convolution layers, as shown in Figure 3. There are three branch networks, N_1, N_2, and N_3, receiving two temporal images (I_1 and I_2), in which N_1 and N_2 correspond to the back-end network, and N_3 refers to the front-end network, which concatenates the two inputs (I_1 and I_2). A dense convolutional connection was employed in the proposed fusion network to enhance the feature representation [35]. This dense architecture is very effective at covering invariant change representations by reusing all preceding feature maps of the network. The proposed network was designed using dual outputs, namely the dissimilarity distance (D) and the change probability (O) at the last layer, corresponding to the back-end and front-end networks, respectively. The feature maps of the l-th layer at the r-th dense block of the i-th network are computed as F^i_{l,d_r} = H([F^i_{0,d_r}, F^i_{1,d_r}, ..., F^i_{l−1,d_r}]), where [F^i_{0,d_r}, F^i_{1,d_r}, ..., F^i_{l−1,d_r}] indicates a concatenation of the feature maps of all previous layers, layer 0, ..., layer (l − 1), and H(·) incorporates a batch normalization (BN), a 3 × 3 convolution, and ReLU.
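A rough PyTorch sketch of the dense connectivity F^i_{l,d_r} = H([F^i_{0,d_r}, ..., F^i_{l−1,d_r}]) and of the three-branch layout (N_1, N_2, N_3 with the dual outputs D and O) is shown below. Layer counts, channel widths, and the pooling used to reduce the 40 × 40 patches to vectors are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """H(.): batch normalization, 3x3 convolution, and ReLU applied to the
    concatenation of all preceding feature maps."""
    def __init__(self, in_channels, growth=12):
        super().__init__()
        self.h = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.Conv2d(in_channels, growth, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, features):
        # features: list [F_0, ..., F_{l-1}]  ->  F_l = H([F_0, ..., F_{l-1}])
        return self.h(torch.cat(features, dim=1))

class DenseBlock(nn.Module):
    def __init__(self, in_channels, num_layers=4, growth=12):
        super().__init__()
        self.layers = nn.ModuleList(
            [DenseLayer(in_channels + i * growth, growth) for i in range(num_layers)]
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(features))
        return torch.cat(features, dim=1)

class FusionNetworkSketch(nn.Module):
    """Three branches: N1/N2 embed each temporal patch (back-end), N3 takes the
    concatenated pair (front-end). Widths and depths here are guesses."""
    def __init__(self):
        super().__init__()
        self.n1 = nn.Sequential(DenseBlock(1), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(1 + 4 * 12, 32), nn.Sigmoid())
        self.n2 = nn.Sequential(DenseBlock(1), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(1 + 4 * 12, 32), nn.Sigmoid())
        self.n3 = nn.Sequential(DenseBlock(2), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(2 + 4 * 12, 1), nn.Sigmoid())

    def forward(self, i1, i2):
        p1, p2 = self.n1(i1), self.n2(i2)
        d = (p1 - p2).pow(2).sum(dim=1).sqrt()               # dissimilarity distance D
        o = self.n3(torch.cat([i1, i2], dim=1)).squeeze(1)   # change probability O
        return d, o

if __name__ == "__main__":
    net = FusionNetworkSketch()
    i1, i2 = torch.randn(2, 1, 40, 40), torch.randn(2, 1, 40, 40)
    d, o = net(i1, i2)
    print(d.shape, o.shape)
```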
A pair of temporal images was cropped into two 40 × 40 patches (I_1 and I_2) by sliding a window and fed into N_1 and N_2, respectively. The dissimilarity distance (D) was then computed as the Euclidean distance D = ||P_1 − P_2||, where P_1 and P_2 are the outputs of N_1 and N_2 activated by a sigmoid function, respectively. The proposed method applies a pixel-wise change detection by inspecting the neighboring pixels: each 40 × 40 patch pair identifies the change corresponding to the center pixel of the patch. Thus, when the value of D is close to 1, the center of the patch is assigned to a changed pixel. In addition, I_1 and I_2 were concatenated and fed into N_3. The same dense convolution architecture was employed in this branch network to generate the change detection probability (O). The dual outputs (D and O) are the result of this fusion network. In addition, a post-processing step during the inference stage was proposed based on these outputs (D and O) to achieve a proper prediction. Training of the Proposed Fusion Network for Change Detection During the training stage, this paper introduced a loss function (L) combining the contrastive loss (E_c) [36] and the weighted binary cross entropy loss (E_B), weighted by a loss weight α. Given a training set consisting of 40 × 40 image pairs and a binary ground-truth label (Y), the proposed network was trained using backpropagation. Here, E_c was applied to optimize the parameters of N_1 and N_2 and was computed as in [36] from two partial losses: L_S, a partial loss for a pair of similar pixels, and L_D, a partial loss for a pair of dissimilar pixels, where y = 1 denotes a changed pixel and y = 0 an unchanged pixel. The value of the margin m was set to 1. In addition, E_B was used to optimize the parameters of N_3; it is a binary cross entropy weighted by the proposed weighting function W, which penalizes the false-positive and false-negative errors through the penalization weights β_c and β_u for false-negative and false-positive errors, respectively, and through C and U, the numbers of changed and unchanged pixels in the full dataset (N). The proposed network was trained using stochastic gradient descent (SGD) with a learning rate of 0.001, a decay of 1 × 10^−6, and a momentum of 0.9. In addition, the number of epochs was set to 30. The value of α was set to 0.7 to further penalize E_c and thereby prevent the false positives that are possible in a back-end network.
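The training objective can be sketched as follows. The contrastive partial losses below follow the standard form of [36] with margin m = 1; the weighted binary cross entropy here uses only the β_c/β_u penalties, whereas the paper's weighting W additionally normalizes by the changed/unchanged counts C and U, which is omitted; the weighted-sum combination with α is likewise an assumption about the exact form of L rather than the authors' stated formula.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(d, y, margin=1.0):
    """E_c in the style of [36]: 0.5*D^2 for similar (unchanged) pairs,
    0.5*max(0, m - D)^2 for dissimilar (changed) pairs."""
    l_s = 0.5 * d.pow(2)
    l_d = 0.5 * torch.clamp(margin - d, min=0.0).pow(2)
    return ((1.0 - y) * l_s + y * l_d).mean()

def weighted_bce_loss(o, y, beta_c=10.0, beta_u=1.0):
    """E_B: binary cross entropy where changed pixels are penalized by beta_c
    and unchanged pixels by beta_u (class-count normalization omitted)."""
    weights = torch.where(y > 0.5, torch.full_like(y, beta_c), torch.full_like(y, beta_u))
    return F.binary_cross_entropy(o, y, weight=weights)

def fused_loss(d, o, y, alpha=0.7, margin=1.0):
    """Assumed weighted-sum combination of the two terms."""
    return alpha * contrastive_loss(d, y, margin) + (1.0 - alpha) * weighted_bce_loss(o, y)

if __name__ == "__main__":
    d = torch.rand(16)                       # dissimilarity distances (back-end)
    o = torch.rand(16)                       # change probabilities (front-end)
    y = torch.randint(0, 2, (16,)).float()   # 1 = changed, 0 = unchanged
    print(fused_loss(d, o, y).item())
```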
The goal of prediction through the front-end was to obtain better true-positive rates regardless of the number of false positives. Thus, the false negatives were penalized ten times more than false positives, namely, β_c = 10 and β_u = 1. Dual-Prediction Post-Processing for Change Detection During the inference stage, post-processing was introduced using a dual prediction for change detection. In the counting rule, binary hypotheses can be passed to a fusion center, which then decides which of the two hypotheses is true [37]. The proposed algorithm employed a hard logical rule using AND and OR operations with the same probability output thresholds to predict a changed pixel. This aimed to validate and confirm the change detection based on the proposed fusion network outputs (D and O). There were two steps in applying this post-processing. First, an intersection operation was employed to obtain a strict prediction and avoid false positives. Assume that an (m × n) image pair (T) will be tested using the proposed fusion network, resulting in an (m × n) change map (M_1). This prediction was conducted by sliding a window in raster scan order, as shown in Figure 4. The inputs (I_1 and I_2), centered at pixel position (x, y), were assigned to the proposed fusion network to generate the values of D and O. If both D and O identified a changed pixel, then M_1(x, y) was set to a value of 1; otherwise, it was set to 0. This was performed for the entire image T. Then, a second prediction was performed to confirm the first prediction, as shown in Figure 5. Let us assume that the (m × n) map M_2 is the change map for the second prediction. Initially, prediction noise was investigated by analyzing the local information of M_1 through Nb(x, y), the count of changed pixels of M_1 within a q × q window centered at (x, y). If the value of Nb(x, y) is greater than the input size s (40) divided by 4, then the second prediction is applied; otherwise, M_2(x, y) is assigned 0. For the second prediction, a union operation was applied to D and O: when it returned a changed pixel, M_2(x, y) was assigned a value of 1; otherwise, it was assigned a value of 0. The final change map was obtained based on the result of M_2.
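A minimal NumPy sketch of the two-stage decision is given below: stage one takes the AND of the thresholded D and O maps, and stage two re-examines pixels whose M_1 neighborhood count Nb exceeds s/4 with an OR rule. The window size q and the 0.5 threshold are assumptions, since the paper does not state their values here.

```python
import numpy as np

def two_stage_decision(d_map, o_map, thresh=0.5, q=5, s=40):
    """Two-stage decision from the dual outputs D and O (per-pixel maps)."""
    d_bin = d_map > thresh
    o_bin = o_map > thresh
    m1 = np.logical_and(d_bin, o_bin).astype(np.uint8)   # stage 1: intersection (AND)

    pad = q // 2
    padded = np.pad(m1, pad, mode="constant")
    m2 = np.zeros_like(m1)
    h, w = m1.shape
    for y in range(h):
        for x in range(w):
            nb = padded[y:y + q, x:x + q].sum()           # local count Nb(x, y) over M1
            if nb > s / 4:                                # enough local support?
                m2[y, x] = 1 if (d_bin[y, x] or o_bin[y, x]) else 0  # stage 2: union (OR)
    return m2

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d_map = rng.random((32, 32))
    o_map = rng.random((32, 32))
    change_map = two_stage_decision(d_map, o_map)
    print(change_map.sum())
```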
Experimental Evaluation and Discussion This study used a dataset of panchromatic imagery with 0.7 m GSD captured by the KOMPSAT-3 sensor. For the training dataset, this study used a scene of overlapped images (1214 × 886) over Seoul, South Korea, as shown in Figure 6. These images were cropped into 40 × 40 sliding patches, and the center pixels of the cropped patch pairs were labeled based on the ground truth. Figure 6 shows an area containing completed changes and changes under construction. In addition, these images contain many tall buildings, roads, houses, and forests, which were used for training to address the misalignment and viewing-angle problems. In our experiments, to assess the effectiveness of the proposed change detection system, three areas of the panchromatic datasets were used, namely Areas 1, 2, and 3, as shown in Figure 7. The images in Figure 7 were acquired in March 2014 and October 2015 over different areas of Seoul, South Korea. Each image pair had been radiometrically corrected and had a geometric misalignment of approximately ±6 pixels. In addition, each pair had a different viewing angle, which cannot be resolved without precise 3D building models. Area 1 was located in a downtown part of Seoul and contained areas changed through building construction. Moreover, this urban area had tall buildings and roads. These datasets included several factors of geometric distortion, misalignment, and different viewing-angle effects, which could lead to many false changes. In addition, Area 2 represented a downtown area near a forest, and the two images were acquired in different seasons; robustness to seasonal changes is difficult to achieve for practical applications. Area 3 had many tall buildings, making it difficult to achieve an accurate detection rate owing to the different viewing angles.
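The patch preparation described earlier in this section (40 × 40 sliding patch pairs labeled by the ground truth at the patch center) can be sketched as follows; the training stride is an assumption, since the paper slides pixel by pixel at inference but does not state the stride used to build the training set.

```python
import numpy as np

def extract_patch_pairs(img1, img2, truth, patch=40, stride=1):
    """Crop co-registered patch pairs by a sliding window; each pair is labeled
    by the ground-truth value at the patch center."""
    half = patch // 2
    pairs, labels = [], []
    h, w = img1.shape
    for y in range(half, h - half, stride):
        for x in range(half, w - half, stride):
            p1 = img1[y - half:y + half, x - half:x + half]
            p2 = img2[y - half:y + half, x - half:x + half]
            pairs.append((p1, p2))
            labels.append(int(truth[y, x]))
    return pairs, np.array(labels)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    a = rng.random((120, 120))
    b = rng.random((120, 120))
    gt = (rng.random((120, 120)) > 0.95).astype(int)
    pairs, labels = extract_patch_pairs(a, b, gt, stride=20)
    print(len(pairs), labels.sum())
```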
In this study, the receiver operating characteristic (ROC) curve, AUC, PCC, and Kappa coefficient were used to quantitatively evaluate the performance of the proposed method. Moreover, to evaluate its effectiveness, the proposed method was compared with conventional algorithms having FDN and BD-dDCN architectures [28]. A DI and a JF were each incorporated into a single-path CNN architecture (DI + CNN and JF + CNN). These architectures included eight convolutional, two pooling, and two fully connected layers, the same depth as the proposed network. In addition, the dual-DCN [28] was also compared with the proposed method. Figure 8 shows the ROC curves, which indicate that the proposed method achieves a better AUC than the existing algorithms. For Area 1, the proposed method yielded an AUC of 0.9904, which means that it could identify changes approximating the ground truth; this was slightly higher than the dual-DCN at 0.9878. The FDN architectures provided lower AUCs than the proposed algorithm, with JF + CNN and DI + CNN giving AUCs of approximately 0.9509 and 0.7060, respectively. Furthermore, the proposed method significantly outperformed the other algorithms with regard to the AUC for Areas 2 and 3 because it could properly detect the change events by incorporating low- and high-level differential features. Table 2 summarizes the PCC and Kappa values of the different methods applied to the three areas. The proposed method showed a higher PCC in Areas 1 and 3; the dual-DCN achieved a slightly higher PCC than the proposed method in Area 2. However, in terms of the Kappa value, the proposed fusion network outperformed all other existing algorithms. The proposed method achieved a Kappa value of 75.16 on average, which indicates a good agreement with the ground truth. Figure 9 shows the change map results when applying the existing and proposed algorithms. Visually, the proposed method achieved a much better result than the existing algorithms. In Area 1, the proposed fusion network nearly approximated the ground truth; it could reduce the number of false positives while preserving the true positives, and it produced a cleaner change map than the existing algorithms with regard to false positives. Moreover, the proposed algorithm yielded reasonably good results for Areas 2 and 3. The proposed method significantly reduced the number of false positives and enhanced the true positives. This is because the proposed fusion network was designed and trained for the low- and high-level differential problems, and a post-processing step was employed to validate and repair the change map. To evaluate the effectiveness of the proposed two-stage decision, the proposed algorithm was also compared with each individual network output (D and O) and with another decision method between the two outputs of the proposed fusion network based on the mean operation. In addition, another single-output fusion network (SOFN) architecture was designed, identical to the proposed fusion network architecture but fusing the D and O outputs into a single output, for further comparison. This network was trained with the binary cross entropy loss function using the same training parameters. The objective and subjective evaluations are presented in Table 3 and Figure 10, respectively.
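The reported AUC, PCC, and Kappa can be computed with standard scikit-learn utilities. The sketch below is illustrative only and is not the authors' evaluation code; it assumes pixel-wise flattened maps and reports Kappa on a 0-100 scale, matching how the values are quoted in the text.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score, cohen_kappa_score

def change_detection_metrics(score_map, pred_map, truth_map):
    """AUC from continuous scores, PCC (overall accuracy) and Kappa from the
    binary change map; all maps are flattened to pixel vectors."""
    scores = score_map.ravel()
    preds = pred_map.ravel()
    truth = truth_map.ravel()
    auc = roc_auc_score(truth, scores)
    pcc = 100.0 * accuracy_score(truth, preds)       # percentage correct classification
    kappa = 100.0 * cohen_kappa_score(truth, preds)  # Kappa reported on a 0-100 scale
    return auc, pcc, kappa

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    truth = (rng.random((64, 64)) > 0.9).astype(int)
    scores = 0.6 * truth + 0.4 * rng.random((64, 64))
    preds = (scores > 0.5).astype(int)
    print(change_detection_metrics(scores, preds, truth))
```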
According to Table 3, the proposed two-stage decision shows better performance compared with the individual outputs and the mean operation. In terms of AUC, PCC, and Kappa, the proposed method gave significantly better results than the individual outputs (D and O). Figure 10 shows that the output O produced more true positives regardless of the number of false positives, whereas the output D can reduce the false-positive rate. This condition makes the proposed two-stage decision work as intended: detection rates can be improved by combining the two network outputs with a two-stage decision. In addition, the proposed algorithm still outperformed the mean operation between the two network outputs for all areas. The SOFN with a single output also gave worse results than the proposed method, owing to the absence of a validation decision in the post-processing for change detection. The proposed fusion network was therefore employed with a two-stage decision to obtain a better prediction rate. Regarding time complexity, the proposed fusion network consumed more computation than the existing algorithms by a factor of approximately two over the dual-path network and three over the single-path network. This was due to the proposed architecture being designed with more network paths. In addition, the proposed two-stage decision required an additional prediction process in the inference stage. The general run-time complexity of a dense convolutional network [35] is O(K^2) for a depth-K network [38]. The dual-DCN [28] employs a dual-path dense convolutional network with a depth of 6, producing a run-time complexity of O(2·6^2). The proposed fusion network includes three-path dense convolutional networks with the same depth, fusing back- and front-end differential network architectures, resulting in a run-time complexity of O(3·6^2). In the inference stage, the two-step decision of the proposed method makes the run-time O(2·(3·6^2)), an expensive computational cost, while producing a better result. Conclusions This paper presented a robust fusion network for detecting changed/unchanged areas in high-resolution panchromatic images. The proposed method learns and identifies the changed/unchanged areas by combining front- and back-end neural network architectures. The dual outputs are efficiently incorporated for low- and high-level differential features with a modified loss function that combines the contrastive and weighted binary cross entropy losses.
In addition, a post-processing step was applied to enhance the sensitivity and specificity against false changed/unchanged detections based on the neighboring information. We found through qualitative and quantitative evaluations that the proposed algorithm can yield a higher sensitivity and specificity compared with the existing algorithms, even under noisy conditions such as geometric distortions and different viewing angles. For further work, the proposed algorithm can be extended to other modalities such as multi-spectral images, pan-sharpened imagery, and SAR data. In addition, the proposed algorithm requires an expensive time complexity caused by pixel-wise detection with a two-stage decision. To reduce the run-time complexity, a block-wise prediction design will also be a focus of future work.
Figure 3. The proposed fusion network architecture for change detection.
Figure 6. Training dataset: (a) Image acquired in March 2014, (b) image acquired in December 2015, and (c) the ground truth.
Figure 10. Detection results for three areas when using an individual network output and the proposed algorithms: (a) D, (b) O, (c) Mean, (d) SOFN, and (e) the proposed fusion network with a two-stage decision.
Table 1. Symbols used in the proposed fusion network for change detection.
Table 2. Quantitative assessment of the existing and proposed algorithms.
Table 3. Quantitative assessment of the single-output decision and the proposed algorithms.
9,426.2
2019-04-05T00:00:00.000
[ "Environmental Science", "Computer Science" ]
Diagnosis and Localization of Prostate Cancer via Automated Multiparametric MRI Equipped with Artificial Intelligence Prostate MRI scans for pre-biopsied patients are important. However, fewer radiologists are available for MRI diagnoses, which require multi-sequential interpretations of multi-slice images. To reduce such a burden, artificial intelligence (AI)-based, computer-aided diagnosis is expected to be a critical technology. We present an AI-based method for pinpointing prostate cancer location and determining tumor morphology using multiparametric MRI. The study enrolled 15 patients who underwent radical prostatectomy between April 2008 and August 2017 at our institution. We labeled the cancer area on the peripheral zone on MR images, comparing MRI with histopathological mapping of radical prostatectomy specimens. Likelihood maps were drawn, and tumors were divided into morphologically distinct regions using the superpixel method. Likelihood maps consisted of pixels, which utilize the cancer likelihood value computed from the T2-weighted, apparent diffusion coefficient, and diffusion-weighted MRI-based texture features. Cancer location was determined based on the likelihood maps. We evaluated the diagnostic performance by the area under the receiver operating characteristic (ROC) curve according to the Chi-square test. The area under the ROC curve was 0.985. Sensitivity and specificity for our approach were 0.875 and 0.961 (p < 0.01), respectively. Our AI-based procedures were successfully applied to automated prostate cancer localization and shape estimation using multiparametric MRI. Introduction Technological progress in medical imaging has enabled more sophisticated diagnosis using clinical images through the use of various modalities and protocols with thinner slices at multiple time points. When fused with transrectal ultrasonography (TRUS)-guided prostate biopsy, multiparametric magnetic resonance imaging (mpMRI) is a critical modality for the detection of prostate cancer [1,2]; especially when the cancer is limited to the ventral region or transitional zone of the prostate, where cancer is rarely detected by systematic biopsy [3]. Moreover, a targeted biopsy can avoid overdetection of clinically insignificant cancer [4]. Real-time MRI-TRUS fusion has been put into practical use for targeted prostate biopsy, and the findings have led the European Association of Urology and National Comprehensive Cancer Network guidelines to demonstrate the superiority of diagnosis via targeted biopsy compared with systematic biopsy [5,6]. A recent review supported the fundamental role of targeted biopsy, complementary to systematic biopsy, in enhancing detection rates and reducing the risk of missing clinically significant cancer [7]. An unnecessary biopsy could be avoided using screening algorithms with up-front liquid biopsy followed by mpMRI and biopsy [8]. On the other hand, the increase in the total number of radiograms that must be analyzed has imposed a severe burden on clinicians, who are responsible for the interpretation of all radiograms [9]. Moreover, interpretation of prostate mpMRI for cancer detection requires well-trained interpretation to make a diagnosis by the combined findings from multi-sequential MR images such as T2-weighted images (T2WI), diffusion-weighted images (DWI), apparent diffusion coefficient (ADC) maps, and dynamic contrast-enhanced images [10,11].
To reduce such an interpretational burden and improve productivity in clinical practice, artificial intelligence (AI)-based computer-aided diagnosis (CAD) is expected to be a critical technology. Furthermore, localizing a cancer is clinically more important than determining the cancer presence or aggressiveness because, at present, histopathological diagnosis is essential for a definitive diagnosis of prostate cancer. A number of studies have reported that the detection of cancer location by mpMRI enhances the accuracy of targeted prostate biopsy [1,2], whereas studies of AI-based automated cancer diagnosis from prostate mpMRI have mainly focused on the cancer presence or aggressiveness [12][13][14]. Furthermore, previous reports about AI-based cancer localization proposed individual-pixel diagnosis [15,16], but the methods used in those studies had shape-estimation problems that generated vermiculate overdetection and omissions owing to statistical outliers [15]. To resolve such problems, we propose a method of superpixel segmentation of likelihood maps, which can pinpoint cancer distribution. Likelihood maps consist of the cancer likelihood value for each pixel computed from texture information of MR images using a support vector machine (SVM). The superpixel method divides images into non-linearly shaped regions, aggregating neighboring pixels with similar pixel values. Collaboration between the two machine-learning techniques enables more accurate cancer localization and shape estimation and clearly delineates the physiological boundary and anatomical continuity. We present an AI-based diagnostic method of prostate cancer localization and shape determination using mpMRI. Materials and Methods This study has received the approval of the Institutional Review Board for clinical research of Hokkaido University Hospital. Study Population This study enrolled 15 prostate cancer patients who underwent radical prostatectomy (RP) and prostate mpMRI at the Hokkaido University Hospital between April 2008 and August 2017. The patients met the following inclusion criteria: (i) biopsy-naïve MR images were available in all three sequences, i.e., T2WI, DWI, and ADC maps, (ii) whole-mount histopathological tumor maps were available, and (iii) patients had no prior treatment for prostate cancer or surgery for a benign prostate tumor. Those patients whose cancer sites were not visible on MRI or whose cancer sites were too small and patients without peripheral zone cancer were excluded from the study. Baseline characteristics of PCa patients are shown in Tables 1 and 2. Overview of the AI-Assisted Diagnostic Method Our automated diagnostic procedure consisted of three steps (Figure 1). The first step was to extract texture features of tumors and draw likelihood maps. For each pixel, the texture features were calculated as numeric values from neighbor or inter-sequential pixel values on MR images, and the SVM converted the texture features into a cancer probability as one of the pixel values of likelihood maps [17]. In this way, we generated "likelihood maps" that defined the cancer distribution. In the second step, the superpixel method divided a likelihood map into ~600 nonlinear superpixel regions to bring cancerous pixels together [18]. A cancer diagnosis for each superpixel constituted the final step in our procedure. Each superpixel was assigned as "cancer" or "normal" based on its mean likelihood. We implemented the automated cancer detection program using MATLAB®.
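The first two steps above (per-pixel SVM likelihood estimation and superpixel grouping) can be sketched roughly as follows. The paper implements the procedure in MATLAB; the sketch below uses Python with scikit-learn and scikit-image (version 0.19 or later assumed for the slic call) only to illustrate the idea, and the feature vectors, image size, and thresholds are placeholders rather than the paper's actual HLTI features.

```python
import numpy as np
from sklearn.svm import SVC
from skimage.segmentation import slic

def pixel_likelihood_map(train_features, train_labels, test_features, shape):
    """Train an SVM on per-pixel texture features and return a cancer
    likelihood map (posterior probability per pixel)."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(train_features, train_labels)
    prob = clf.predict_proba(test_features)[:, 1]
    return prob.reshape(shape)

def superpixel_diagnosis(likelihood_map, n_segments=600, compactness=60, thresh=0.5):
    """Group pixels with SLIC and call a superpixel cancerous when its mean
    likelihood exceeds the threshold."""
    segments = slic(likelihood_map, n_segments=n_segments,
                    compactness=compactness, channel_axis=None)
    diagnosis = np.zeros_like(likelihood_map, dtype=bool)
    for label in np.unique(segments):
        region = segments == label
        if likelihood_map[region].mean() > thresh:
            diagnosis[region] = True
    return diagnosis

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    train_x = rng.random((200, 8))            # placeholder per-pixel texture features
    train_y = (train_x[:, 0] > 0.5).astype(int)
    test_x = rng.random((64 * 64, 8))
    lmap = pixel_likelihood_map(train_x, train_y, test_x, (64, 64))
    print(superpixel_diagnosis(lmap).sum())
```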
MR Images and Histopathological Images All MR images were acquired with a 3.0 T scanner (Achieva 3.0-T TX series R3.21; Philips Medical Systems, Best, The Netherlands) with a pelvic phased-array coil (32-channel SENSE Torso/Cardiac Coil). No endorectal coil was used. The slice thickness was 3 mm for all sequences. The following MR sequences were obtained: axial T2WI and axial DWI. The ADC values were calculated from two DWI scans acquired with b = 0 and 2000 s/mm 2 , and ADC maps were then rebuilt by calculating the ADC values for each pixel of each slice. RP specimens were sectioned perpendicular to the prostatic urethra from the apex to the base according to Japanese General Rules for Prostatic Cancer. Pathologists in our institution examined the specimens and mapped the cancer regions that were apparent in all cross-sections. We labeled the cancer regions on the MR images by comparing the histopathological maps with the MR images. If both the histopathological maps and MR findings indicated the presence of prostate cancer, it was judged as a cancer region. Texture Features and Likelihood Maps For the first step, we designed a new texture feature named higher-order local texture information (HLTI), which is a suitable customization of higher-order local autocorrelation (HLAC) for pixel-based computation [19]. HLTI includes intra-sequential and inter-sequential HLAC, contrast, and homogeneity. SVM posterior probability computation generated four types of primary likelihood maps by changing the combination of MR sequences for feature extraction: T2WI and ADC maps; T2WI and DWI; ADC maps and DWI; T2WI, ADC maps, and DWI. Subsequently, we generated secondary likelihood maps, giving each pixel the least cancer likelihood among the four primary likelihood maps. Superpixel Segmentation and Cancer Diagnosis In this study, we applied the superpixel method to non-linear segmentation of likelihood maps to describe cancer distribution. The superpixel method uses pixel-based, k-means clustering.
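As a side note on the ADC maps described at the start of this subsection, the per-pixel ADC value from the two DWI acquisitions (b = 0 and 2000 s/mm^2) is conventionally obtained from the mono-exponential model S_b = S_0 exp(-b·ADC). The paper does not spell out the formula, so the sketch below is the standard computation, not necessarily the authors' exact implementation.

```python
import numpy as np

def adc_map(s_b0, s_b, b=2000.0, eps=1e-6):
    """Mono-exponential ADC estimate from two DWI acquisitions:
    S_b = S_0 * exp(-b * ADC)  ->  ADC = ln(S_0 / S_b) / b."""
    s0 = np.clip(s_b0.astype(np.float64), eps, None)
    sb = np.clip(s_b.astype(np.float64), eps, None)
    return np.log(s0 / sb) / b            # units: mm^2/s when b is in s/mm^2

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    s0 = rng.uniform(200, 1000, (8, 8))
    true_adc = 0.8e-3                     # a typical tissue value, for illustration
    sb = s0 * np.exp(-2000.0 * true_adc)
    print(adc_map(s0, sb).mean())
```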
For our purposes, the superpixel method used a simple linear iterative clustering (SLIC) algorithm [18], and hyperparameters consisted of the following: number of superpixels = 600, compactness = 60, and number of iterations = 15. The SLIC algorithm groups neighbor pixels into superpixel regions with similar pixel values. In this way, the peripheral zone of the tumor was divided into cancerous superpixels and benign ones. We calculated the mean likelihood of cancerous tissue for each superpixel and set 0.5 as the diagnostic threshold. We defined superpixels whose mean likelihood was greater than 0.5 as a diagnosis of cancer and defined superpixels containing more than 50% of cancer pixels as true cancer. We evaluated the diagnostic accuracy of our AI-based CAD via area-weighted sensitivity, specificity, and the area under the area-weighted receiver operating characteristic (ROC) curve. Pearson's Chi-square test was used to compare categorical data, with p < 0.01 considered statistically significant. "Area-weighted" implies that each superpixel was counted as its number of pixels. Cross Validation SVM classifiers were evaluated independently through leave-one-patient-out cross validation. Cross validation is a technique used to evaluate classifiers by partitioning the original sample into a training dataset from 14 patients to train the classifier and a test dataset from one patient to evaluate it. The SVM classifier used radial basis function kernels, and the hyperparameters consisted of each kernel's width parameter γ and misclassification penalty C. Results The area under the area-weighted ROC curve was 0.985 as shown in Figure 2. Examples of AI diagnosis are shown in Figure 3. In these examples, area-weighted sensitivity, specificity, positive predictive value, and negative predictive value were 87.5%, 96.1%, 73.7%, and 98.4% (p < 0.0001), respectively. Discussion Our AI-based CAD accurately identified prostate cancer location and shape using mpMRI despite the small population of patients. The most important advantage of our procedure is that our CAD requires just more than a dozen training datasets to provide adequate performance, whereas the training of neural networks requires hundreds or thousands of datasets or data augmentation [14]. This is because we chose pixel-based feature extraction. In other words, all the peripheral zone pixels were data samples for training the SVM. Although the population size was very small, the data sample size was very large in this study. Little has been reported about cancer localization through the automated diagnosis of prostate MR images, but some studies have focused on cancer presence or aggressiveness [12][13][14]. In 2012, the first report, to our knowledge, of AI-based CAD of prostate cancer via MRI proposed that cancer probability maps can be computed by SVM [16]. Strictly speaking, however, this CAD did not make a final diagnosis but rather provided only a "distribution of cancer probability" to facilitate accurate diagnosis by physicians. In 2017, Sun et al. reported on automated cancer localization including a final diagnosis [15]. Classifying individual pixels by SVM, they succeeded in extracting cancer regions of typical morphology. However, this SVM diagnosed individual pixels using extremely local information without subregional aggregate computation. In this way, it generated a number of minute overdetection and omissions that reduced the diagnostic performance. Fehr et al. 
indicated that, under manual segmentation of cancer regions, SVM and the Haralick feature could not only classify cancer regions but also estimate Gleason's score [20], especially with SMOTE data augmentation [12]. The present study demonstrates that the combination of the likelihood map and pixel aggregation by the superpixel method enhances cancer localization and shape estimation. This combination filtered out minute misclassifications and automatically extracted cancer regions, thereby accurately portraying tumor size and shape. Correct localization would reduce the length of biopsy core to achieve Gleason's score agreement between biopsy and RP specimen and lead to providing accurate information for a decision of active surveillance or treatment [21]. The localization of cancer in the prostate requires detailed knowledge of the distribution of local populations of cancer cells. We labeled the cancer regions in each MR slice according to histopathological prostate cancer mappings, and our SVM computation estimated the cancer probability for individual pixels. These pixel-based procedures for cancer labeling and estimating cancer probability achieved accurate localization of cancer and approximation of tumor shape. In this report, we manually labeled cancer regions on each T2WI, comparing histopathological maps with MR images, whereas Sun et al. directly compared their AI-based diagnosis with raw histopathological maps. We chose this approach because we assumed that cancer distribution in raw histopathological maps did not fit with most MR images and could even cause incorrect labeling. In fact, the slice thickness and angle differed between our histopathological maps and MR images. Formalin fixation or a surgical procedure could modify the RP specimen, but the computation of texture features in a single pixel causes statistical outliers because of the absence of statistical processing such as averaging. Outliers are major causes of vermiculate overdetection and omissions. Superpixel segmentation contributed to resolving this outlier problem. Moreover, we generated four types of primary likelihood maps by changing the combination of MR sequences from which the texture feature was extracted. We synthesized secondary likelihood maps from the four primary likelihood maps. The reason why secondary maps are needed is that using only one type of likelihood map often diminishes any overdetection in other types of likelihood maps, whereas true cancer tends not to be diminished by any type of likelihood map. HLTI Here we propose the use of the newly designed texture feature named HLTI, which is an extended texture feature of HLAC [19], for multi-sequential MR images. HLTI is computed from pixel values of a central pixel and neighboring pixels and differs from the existing HLAC or Haralick feature with respect to whether the texture feature is differentially extracted from each sequential MR image or is extracted from two or three sequential MR images in a lump [20]. As far as we know, little has been reported about such inter-sequential feature extraction from prostate mpMRI. Inter-sequential feature extraction is expected to reduce the deterioration of inter-sequential information by standardization. Diagnostic Partition Using the Superpixel Method Accurate cancer localization for targeted biopsy or estimation of capsule invasion definitely requires detailed shape estimation. In our unpublished preliminary experiment, we divided MR images into small rectangular patches and diagnosed each patch. 
This procedure, named 'partial diagnosis', can hardly estimate the shape of cancer regions because the MR images were divided into patches that ignored the true physiological boundary. Diagnosis of aggregated cancerous pixels, in contrast, is a reasonable method for detailed shape estimation compared with partial diagnosis. We call this procedure 'diagnostic partition'. Such automatic cancer segmentation could also contribute to extracting cancer-site-based features for more elaborate diagnoses [22]. We used the SLIC superpixel method for segmentation [18], which aggregates neighboring pixels with similar values. The superpixel method segments raw MR images in a manner that is blind to texture information. Therefore, raw MR images should be converted into cancer distribution images, which express cancer likelihood as pixel values. In other words, the superpixel method partitions the likelihood maps. In this way, our procedure enables non-linear segmentation, thereby preserving the physiological boundary. Furthermore, the superpixel method has the benefit of absorbing outlier-induced misdiagnosed pixels into their neighboring pixels. Limitations There are some limitations to this study. First, histopathological maps cannot be directly applied to MR images for cancer labeling because the location of an MR slice does not exactly match that of the histopathological maps, or even that of other MR sequences, in terms of slice angle, thickness, and scale, owing to body motion, rectal peristalsis, formalin fixation, the surgical procedure, or MRI device settings. It is difficult to correct such mismatches. However, combining automated organ segmentation with image fusion techniques such as elastic fusion or a shrinkage factor might facilitate automated and accurate cancer labeling [23][24][25]. Second, no cancer-free, whole-mount prostate specimens were available. Although we demonstrated favorable performance of our CAD, it remains unknown how correctly our CAD diagnoses a benign prostate. Prostate MRI acquired before radical cystectomy could overcome this limitation. Third, our CAD does not support the diagnosis of Gleason's score, local tumor invasion, or rare histological variants, because it needs at least hundreds of training samples. In this study, pixel-based training, which provides a large amount of training data, enabled our CAD to achieve the benchmarks. We must also mention that only 15 patients were enrolled in this single-center study. A future study with a larger population would resolve this problem. Conclusions In conclusion, diagnostic partition using the superpixel method and SVM-computed likelihood maps enables automated diagnosis of prostate cancer location and shape in mpMRI. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to a particular file format. Conflicts of Interest: The authors declare no conflict of interest.
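As a rough illustration of the diagnostic-partition step discussed above, the following sketch partitions a pixel-wise cancer-likelihood map with SLIC superpixels and labels each superpixel by its mean likelihood. It assumes scikit-image's SLIC implementation and a likelihood map with values in [0, 1]; the function name, segment count, compactness, and threshold are illustrative choices, not the parameters used in the study.

```python
import numpy as np
from skimage.segmentation import slic

def diagnostic_partition(likelihood_map, n_segments=300, threshold=0.5):
    """Aggregate a pixel-wise cancer-likelihood map into superpixels and
    label each superpixel as cancerous when its mean likelihood is high."""
    # channel_axis=None tells recent scikit-image versions that the input is a
    # single-channel (grayscale) image rather than an RGB one.
    segments = slic(likelihood_map.astype(float), n_segments=n_segments,
                    compactness=0.1, channel_axis=None)
    cancer_mask = np.zeros(likelihood_map.shape, dtype=bool)
    for label in np.unique(segments):
        region = segments == label
        # Averaging over the whole superpixel absorbs isolated outlier pixels,
        # which is the filtering effect described in the discussion above.
        if likelihood_map[region].mean() > threshold:
            cancer_mask[region] = True
    return segments, cancer_mask
```

Thresholding the mean likelihood per superpixel, rather than per pixel, is what suppresses the vermiculate overdetections and omissions attributed to outliers above.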
4,191.6
2022-01-14T00:00:00.000
[ "Medicine", "Computer Science", "Engineering" ]
Development of an LCD-Photomask-Based Desktop Manufacturing System A liquid crystal display (LCD)-photomask-based desktop manufacturing system that includes software and hardware configuration is described. The software design includes a slicing algorithm, an LCD photomask display, a process-user interface, and a motion control program. The bucket-sorting algorithm is used in the slicing preprocessing for search speed enhancement. The slicing time ratio can be reduced to nearly 25% with five buckets. The hardware configuration of this architecture includes an LCD photomask, an optical system, a z-axis elevator, and a personal-computer-based control system. The matrix optics assists the optical system design of the proposed RP machine. After the slicing process, a cross-sectional contour of each layer is transferred to the LCD photomask. The optical system emits parallel light upward through the photomask to expose and solidify the entire layer at once. This visible light can expose and solidify an entire layer at once, layer by layer, until the whole part is finished. We have physically demonstrated the complete system, including hardware and software implementation, and the experimental results are as what was expected and are described here. Development of an LCD-Photomask-Based Desktop Manufacturing System Ren C. Luo, Fellow, IEEE, and Jyh Hwa Tzou Abstract-A liquid crystal display (LCD)-photomask-based desktop manufacturing system that includes software and hardware configuration is described.The software design includes a slicing algorithm, an LCD photomask display, a process-user interface, and a motion control program.The bucket-sorting algorithm is used in the slicing preprocessing for search speed enhancement.The slicing time ratio can be reduced to nearly 25% with five buckets.The hardware configuration of this architecture includes an LCD photomask, an optical system, a z-axis elevator, and a personal-computer-based control system.The matrix optics assists the optical system design of the proposed RP machine.After the slicing process, a cross-sectional contour of each layer is transferred to the LCD photomask.The optical system emits parallel light upward through the photomask to expose and solidify the entire layer at once.This visible light can expose and solidify an entire layer at once, layer by layer, until the whole part is finished.We have physically demonstrated the complete system, including hardware and software implementation, and the experimental results are as what was expected and are described here. Index Terms-Bucket sorting, desktop manufacturing system, liquid crystal display (LCD) photomask, matrix optics. I. INTRODUCTION D ESKTOP manufacturing refers to various rapid prototyp- ing (RP) techniques, where 3-D components are directly built, layer by layer, from a computer data description or a computer-aided design (CAD) file.Due to their ability and the relative ease of transforming a conceptual design into a physical model, desktop manufacturing technologies have met escalating demand in the industry for shortening new product development cycle time.The layer-by-layer fabrication methodology also allows complex models to be made with ease.In desktop manufacturing processes, the geometry of the object that will be manufactured can be obtained from CAD model data, an existing object (through reverse engineering) [1]- [3], or mathematical data (e.g., surface equations) [4].Prashant et al. 
[5] reviewed many process planning techniques in layer manufacturing.Most desktop manufacturing systems accept model data that are described in an intermediate file format called (stereolithography) STL.This file format approximates the original model geometry by using a series of triangular facets. After loading an STL model, a slicing procedure is, then, applied to the tessellated model.In this process, the model is intersected with a set of horizontal planes to create a series of cross sections or slices that comprise contours that represent the material boundaries of the part that will be generated.The contours are subsequently used to generate the numerically controlled tool paths for the desktop manufacturing system. There are many commercial RP systems that are currently available on the market, such as InVision [6], Objet [7], Perfactory [8], stereolithography apparatus (SLA) [9], and fused deposition modeling (FDM) [10].The InVision 3-D printer combines 3-D Systems' multijet modeling printing technology with an acrylic photopolymer model material.Objet's Polyjet technology works by jetting photopolymer materials in ultrathin (i.e., 0.016 mm) layers onto a build tray, layer by layer, until the part is complete.The Perfactory RP system uses a photomonomer resin and a digital light processing (DLP) projector for polymerizing 3-D finished parts.SLA and FDM are old RP processes.Both the laser beam of the SLA system and the thermal extrusion head of the FDM system generate 2-D crosssectional areas by using 1-D tool paths.The disadvantages of these systems are lower speed, the requirement for an expensive xyz table, and three-axis motion control system.Another old RP process, i.e., solid ground curing (SGC) [11], involves creating a temporary photomask of each layer, applying a thin coating of photopolymers, and exposing the layer to a burst of ultraviolet light for curing it.Because the cross sections of one layer are simultaneously cured, the SGC system has a faster build speed.However, the mechanism of the SGC system is complex, and the price is very expensive. The purpose of this paper is to develop a low-cost desktop manufacturing system.We use a liquid crystal display (LCD) panel as a photomask [12].With the bottom exposure method, the image of the LCD photomask is calculated from the sliced data.The desktop manufacturing system has the advantages of low cost, compactness, and no special physical support requirement, making it suitable for use in offices. Ventura et al. [13] developed a direct photo shaping process for the layer-by-layer fabrication of functional ceramic components.Each layer is photoimaged by a LCD or a DLP projection system.Young et al. [14] described a novel device for producing 3-D objects, which has been developed using an LCD as a programmable dynamic mask and visible light for initiating photopolymerization.Monneret et al. [15] presented a new process of microstereolithography for manufacturing freeform solid 3-D microcomponents, with its outer dimensions being in the millimeter size range.Huang and Jiang [16] analyzed the shrinkage deformation of the mask-type stereolithography process.Jiang et al. [17] developed a masked photopolymerization RP system that uses an LCD panel as a dynamic mask with an upper exposure skill. II. LCD-PHOTOMASK-BASED DESKTOP MANUFACTURING SYSTEM The LCD-photomask-based desktop manufacturing system structure is illustrated in Fig. 
1.The hardware configuration of this system includes an LCD photomask, an optical system, a z-axis elevator, and a personal computer (PC)-based control system.The optical system can generate parallel light that passes through the LCD photomask to cure the photopolymer.The RP part is generated layer by layer and is attached to a platform that rises as each successive layer is attached to the lowermost face.The resin is deposited onto the transparent bottom plate.The platform and the previously built structure are lowered into the resin, leaving a liquid film between the part and the bottom plate, with a correct thickness for the next layer.The new layer is formed beneath the platform by exposing the LCD photomask.After the layer is finished, the platform is raised, separating the layer from the bottom plate, filling and wiping the resin, and repeating the process until all layers are fabricated.The completed RP part is, then, removed from the platform, post cured, and finished, if needed.The desktop manufacturing system architecture is illustrated in Fig. 2. The z-axis elevator with a high-precision ball screw is driven by an alternating-current servomotor.A PC-based DSP motion controller is used to control the movement of the z-axis elevator.The LCD photomask is connected to the video graphics array card of the computer. The architecture of the proposed system includes five main components: 1) a data processing unit; 2) an LCD photomask; 3) an optical system; 4) a PC-based DSP motion control system; and 5) a z-axis elevator.The details of these components are discussed as follows. A. Data Processing Unit The data processing unit performs the slicing procedure and the photomask generation process.The slicing procedure transforms the 3-D CAD model into a set of 2-D layer contours.According to this contour data, the photomask generation program exports the contour of each layer to the LCD photomask.Note that the region inside the contour is displayed in white and the region outside the contour is displayed in black. B. LCD Photomask An LCD serves as a photomask that is used to display layer contours.The light source emits parallel light upward through the transparent portions of the LCD mask to expose and solidify the entire layer at once.As shown in Fig. 3, a 14.1-in thin-film transistor LCD with 1024 × 768 pixels is used here.Each pixel is 0.28 mm in both width and length, yielding a photomask resolution of under 0.28 mm.An insulating membrane is located below the LCD photomask for insulating the resin from the ultraviolet rays and heat that the light source produced.Furthermore, the size of the RP part that can be produced in this system is restricted by the size of the LCD panel used.If we can use a larger LCD panel, the size of the RP part could be increased. C. Optical system The optical system strongly influences the system structure, forming method, and building time for parts.The proposed system uses a NAF-200N photo-curable liquid resin (from Denken Engineering Company Ltd., Oita, Japan) as the building material.NAF-200N solidifies under exposure to 680-nm visible light.A 275-W xenon lamp serves as the light source.The optical spectrum of this xenon lamp, which is detected by a spectrometer, is shown in Fig. 
4, where the spectral radiance is observed with maximum power at a 736-nm wavelength and an adequate amount of energy at 680 nm.The experimental results confirm the ready solidification of NAF-200N under exposure to this xenon lamp.According to the experimental results, the layer thickness for one layer is 0.254 mm.The curing time for one layer is 135 s. The optical system design is illustrated in Fig. 5(a).The actual structure is shown in Fig. 5(b).The proposed system uses plane shaping instead of line shaping.The visible light source is emitted in a bottom-up manner, instead of a top-down way, to reduce the amount of wasted resin. The ray tracing method is an important tool in geometrical optics.Matrix optics [18], [19] were used to design the RP machine optical system.A ray is described by its position and its angle with respect to the optical axis.The matrix form of several optical components can be shown as follows. 1) Free-Space Propagation: As shown in Fig. 6, a ray that traverses a distance d is altered in accordance with y 2 = y 1 + θ 1 d and θ 2 = θ 1 .The ray-transfer matrix is In addition Fig. 6.Free-space propagation.2) Transmission Through a Thin Lens: As shown in Fig. 7, the relation between θ1 and θ2 for paraxial rays is transmitted through a thin lens of focal length f .Since the height remains unchanged (i.e., y 2 = y 1 ), the refraction matrix of the thin lens is In addition 3) Reflection From a Planar Mirror: As shown in Fig. 8, the ray position is not altered (i.e., y 2 = y 1 ), and we conclude that θ 2 = θ 1 .The ray-transfer matrix is, therefore, the following identity matrix: R = 1 0 0 1 . In addition Authorized licensed use limited to: National Taiwan University.Downloaded on January 14, 2009 at 00:57 from IEEE Xplore.Restrictions apply.4) Reflection From a Spherical Mirror: As shown in Fig. 9, the ray position is not altered (i.e., y 2 = y 1 ).The reflection matrix of a spherical mirror is In addition The ray tracing diagram of the optical system of the RP machine is shown in Fig. 10.The light source P 0 (i.e., the xenon lamp) is located on the focus of the biconvex lens and at the center the spherical mirror.The ray that is emitted by light source P 0 can be divided into two parts: 1) ray trajectory 1 directly transmits through the thin lens and 2) ray trajectory 2 is reflected from the spherical mirror. These two ray trajectories are discussed as follows: • Ray trajectory 1: P 0 P 1 P 2 P 3 P 4 P 5 . The ray that is emitted from light source P 0 transmits through the thin lens.After reflecting the ray from the flat mirror, the reflected parallel light can be generated onto the LCD photomask position (i.e., P 5 ).The system matrix is, then, defined as where T 10 ray-transfer matrix from P 0 to P 1 ; A 21 thin-lens refraction matrix from P 1 to P 2 ; T 32 ray-transfer matrix from P 2 to P 3 ; R 43 reflection matrix of a planar mirror from P 3 to P 4 ; T 54 ray-transfer matrix from P 4 to P 5 .Thus, the ray at point P 5 on the LCD photomask position is given by If d 1 = f and y 1 = 0, (6) can be simplified as Consequently • Ray trajectory 2: P 0 P 6 P 7 P 8 P 9 P 10 P 11 P 12 . As shown in Fig. 10, the ray that is emitted from light source P 0 is reflected by the spherical mirror.After the Fig. 12. 
Bucket-sorting algorithm for data sorting, reflected ray is transferred through the thin lens and flat mirror, the parallel light can be generated onto the LCD photomask position (i.e., P 12 ).The system matrix is then defined as where T 60 ray-transfer matrix from P 0 to P 6 ; S 76 reflection matrix of a spherical mirror from P 6 to P 7 ; T 87 ray-transfer matrix from P 7 to P 8 ; A 98 thin-lens refraction matrix from P 8 to P 9 ; T 10•9 ray-transfer matrix from P 9 to P 10 ; R 11•10 reflection matrix of a planar mirror from P 10 to P 11 ; T 12•11 ray-transfer matrix from P 11 to P 12 .Thus, the ray at point P 12 on the LCD photomask position is given by If d 1 = f and y 1 = 0, (10) can be simplified as Consequently From the results of ( 8) and ( 12), the ray angle θ 2 on the LCD photomask is independent of the incident ray angle θ 1 in two ray trajectories.The optical system emits parallel light upward (i.e., θ 2 = 0) through the LCD photomask to expose and solidify the photo-curable resin. The focal length of the biconvex lens and the radius of curvature of the spherical mirror are selected for 15 and 10 cm due to the machine space limitations, respectively.Because the high-power xenon lamp could generate enough convection and radiation heat to affect the LCD photomask and resin, a sunshade is placed between the biconvex lens and flat mirror to reduce the heat transfer to the LCD photomask. After constructing the optical system, measurement of the power that spreads onto the LCD photomask is necessary.This paper uses an optical power meter for measuring the light power that spreads onto the LCD photomask.The valid light area on the LCD photomask is 10 × 10 cm 2 .This paper divides the valid area into 100 equal parts.Each part is measured by the light power by using an optical power meter.The measured values are shown in Fig. 11.Based on these experimental results, the light that spreads onto the LCD photomask is determined in good uniform.The light source uses a 275-W xenon lamp.The average light power through the LCD photomask is 1.43 mW. A. Bucket-Sorting Algorithm For most RP systems, CAD models that are described in the STL file format must be sliced into contours.An effective slicing algorithm is necessary for RP systems.A simple approach is to intersect every facet with every slicing plane.This approach is time consuming.The bucket-sorting algorithm is used in the slicing preprocessing for search speed enhancement. The bucket-sorting algorithm divides the spatial space into N subspaces.Searching for triangular data that are inside these smaller subspaces is often faster than browsing the entire space.In this paper, the split space was also called a slab, as illustrated in Fig. 12.The slabs were generated by the defined maximum acceptable thickness and by the maximum and minimum Z-coordinate of the facets.Each slab was defined Authorized licensed use limited to: National Taiwan University.Downloaded on January 14, 2009 at 00:57 from IEEE Xplore.Restrictions apply.between a Z min and a Z max (see Fig. 12) so that, when slicing at a specific height z, the specific slab was the one that included z within its limits [Z min , Z max ].A facet is assigned to a slab whenever one or more of its vertices fall within the slab's range. If a vertex has a Z value that is exactly equal to the boundary height between two slabs, that facet is assigned to both slabs.Fig. 
12 shows that the four facets (i.e., F 4 , F 5 , F 6 , and F 8 ) have a common vertex V and this vertex's Z-coordinate is equal to the boundary height Z 2 between two slabs (i.e., slabs 1 and 2).Consequently, these facets (i.e., F 4 , F 5 , F 6 , and F 8 ) are assigned to both slabs 1 and 2. To implement the bucket-sorting algorithm, Fig. 13(a) and (b) refer to the input files.The bucket number was changed to compare the slicing time.The results (slicing time) for different bucket numbers are shown in Table I [Fig.13(a)] and Table II [Fig.13(b)].If the bucket number is 1, the bucket-sorting algorithm was not used.Based on these results, the slicing time is greatly reduced using the bucket-sorting algorithm. The results in Tables I and II are shown in Fig. 14(a) and (b).Based on these results, the slicing time is greatly reduced by the bucket-sorting algorithm.The slicing time ratio can be reduced by nearly 25% with five buckets.In Table I, the slicing time ratio will reach the minimum value with 50 buckets.If the bucket number is greater than this certain value (i.e., 50 for Table I), the slicing time will increase.However, in Table II, the fastest slicing time will occur at 80 buckets.This optimum bucket number for slicing time is not a fixed value.It depends on the RP part's height, its facet number, and the size of the facets.In general, the optimum bucket number value is 8 ∼ 10.If the bucket number is greater than 10, the slicing time will not obviously be decreased.This means that many buckets are not useful for reducing the slicing time. B. LCD Photomask Display Algorithm The LCD photomask displays the cross-sectional contours of model layers, and the optical system can project the light through the white areas of the photomask.The LCD photomask display algorithm is described as follows.The program fills the inside of the contour with white color and fills the outside of the contour with black color.The light beam shines through the white areas to cure the resin.After curing one layer, the program will display the cross-sectional contour of the next layer in the LCD photomask.When all layers have been built, the program stops the RP machine, and the physical part is finished. We used Visual Basic as the algorithm compiler.The program outputs display data to the LCD photomask to display the filled contours layer by layer.Fig. 15( the result after slicing the STL file.Fig. 16(a) and (b) shows the cross-sectional contours that are filled with white color inside. As described above, the proposed desktop manufacturing system uses the photo-curable liquid resin NAF-200N as the building material.This resin will solidify under exposure to 680-nm-wavelength light.Based on the experimental results, the relation between the exposure time and the hardened depth of the resin is illustrated in Table III.In general, the proposed system uses uniform slicing, and the layer thickness is set to 0.254 mm so that the exposure time for each layer is 135 s. IV. EXPERIMENTAL RESULTS To compare the machining efficiency, the FDM 2000 RP machine (from the Stratasys, Inc.) has been chosen for comparison.Although FDM 2000 is among the slowest systems, it is the only RP system that is available in our laboratory.A 100 mm × 100 mm × 1 mm thin plate is manufactured by the FDM 2000 and the proposed RP system.After finishing the slicing process, the tool path for the FDM 2000 RP machine is shown in Fig. 17(a), and the LCD photomask display of the proposed RP system is shown in Fig. 
17(b).The manufacturing time for building one layer is 582 s in FDM 2000.However, the manufacturing time for building one layer is 135 s in the proposed RP system.The machining efficiency of the proposed system is better than the FDM 2000 RP system.The accuracy of the proposed RP system is 0.4 mm (i.e., 0.015 in), which is sufficient for real applications. A. Case 1 In Case 1, the 3-D CAD model is illustrated in Fig. 18(a), and the STL model is shown in Fig. 18(b).The RP software first reads the STL file and proceeds with the slicing process.The sliced model is shown in Fig. 19(a). The LCD photomask displays the cross-sectional contour, and the proposed desktop manufacturing system builds the physical model.The cross-sectional contour that is displayed in the LCD photomask is illustrated in Fig. 19(b).The physical part is built, layer by layer, using the proposed system.The finished RP part for Case 1 is shown in Fig. 20. B. Case 2 In Case 2, the 3-D CAD model is illustrated in Fig. 21(a), and the STL model is shown in Fig. 21(b).After slicing the STL file, the sliced model is illustrated in Fig. 22(a), and the LCD photomask cross-sectional contour is illustrated in Fig. 22(b).The physical part that was built using the proposed system is shown in Fig. 23. The experimental results of Cases 1 and 2 are also shown in Table IV. V. CONCLUSION Desktop manufacturing systems have widely been known as being capable of fabricating 3-D objects with complex geometric shapes.The purpose of this paper has been to develop an LCD-photomask-based desktop manufacturing system. The main features of the proposed system are described as follows. 1) The LCD photomask is connected to the computer to allow for quickly changing the cross-sectional display of each layer. 2) The software design includes a slicing algorithm, an LCD photomask display, a process-user interface, and a motion control program.The bucket-sorting algorithm is used in the slicing preprocessing for search speed enhancement.The slicing time ratio can be reduced to nearly 25% with five buckets. 3) The hardware configuration of this architecture includes an LCD photomask, an optical system, a z-axis elevator, and a PC-based control system.4) The matrix optics is used to design the optical system.5) The experimental results show that the proposed desktop manufacturing system can produce RP parts with good machining efficiency, but the surface roughness should further be improved.6) The proposed desktop manufacturing system has the advantages of low cost, compactness, speed and accuracy, and no additional support requirement, providing a valuable addition to the working office environment for designers at all levels and in all locations. Fig. 5 . Fig. 5. (a) Design of the curing light source.(b) Real structure of the curing light source. Fig. 15 . Fig. 15.(a) STL model created by Pro/Engineering CAD software.(b) STL model that was sliced using the slicing process. a) shows an STL file that is created by Pro/Engineering 3-D CAD software.Fig. 15(b) is Authorized licensed use limited to: National Taiwan University.Downloaded on January 14, 2009 at 00:57 from IEEE Xplore.Restrictions apply. Fig. 16 . Fig. 16.LCD photomask shows the cross-sectional contours that are filled with white color. Fig. 17 . Fig. 17.(a) Tool path for the FDM 2000 RP machine.(b) LCD photomask display of the proposed RP system. Fig. 19 . Fig. 19.(a) Illustration of the sliced model.(b) Cross-sectional contour that is displayed in the LCD photomask. Fig. 20 . Fig. 
20. RP part of Case 1, which was built by the proposed desktop manufacturing system. Fig. 23. RP part of Case 2, which was built by the proposed desktop manufacturing system. TABLE I SLICING TIME (IN SECONDS) FOR DIFFERENT BUCKET NUMBERS [FIG. 13(a)] TABLE II SLICING TIME (IN SECONDS) FOR DIFFERENT BUCKET NUMBERS [FIG. 13(b)] TABLE III EXPERIMENTAL RESULTS: EXPOSURE TIME (RELATIVE TO THE HARDENED DEPTH OF THE RESIN) TABLE IV EXPERIMENTAL RESULTS OF CASES 1 AND 2
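Purely as an illustration of the slab-based bucket sorting described in Section III-A (and not the authors' implementation), the preprocessing step can be sketched as follows; the array layout, function names, and the handling of boundary vertices are assumptions drawn from the description above, namely that a facet is stored in every slab containing one of its vertices and that a vertex lying exactly on a slab boundary assigns the facet to both neighboring slabs.

```python
import numpy as np

def build_slabs(facets, n_buckets):
    """Assign STL facets to Z-slabs ('buckets') so that slicing at height z
    only needs to test the facets stored in the slab containing z.
    `facets` is an (N, 3, 3) array of triangle vertices (x, y, z)."""
    z = facets[:, :, 2]
    z_min, z_max = float(z.min()), float(z.max())
    span = max(z_max - z_min, 1e-9)          # guard against a completely flat model
    edges = np.linspace(z_min, z_max, n_buckets + 1)
    slabs = [set() for _ in range(n_buckets)]
    for i, tri in enumerate(facets):
        for _, _, vz in tri:
            # Index of the slab whose [edges[k], edges[k + 1]] interval contains the vertex.
            k = min(int((vz - z_min) / span * n_buckets), n_buckets - 1)
            slabs[k].add(i)
            # A vertex exactly on an interior boundary assigns the facet to both slabs.
            if k > 0 and np.isclose(vz, edges[k]):
                slabs[k - 1].add(i)
    return edges, [sorted(s) for s in slabs]

def facets_at_height(z_query, edges, slabs):
    """Candidate facet indices for a slicing plane at height z_query."""
    k = int(np.searchsorted(edges, z_query, side="right")) - 1
    return slabs[min(max(k, 0), len(slabs) - 1)]
```

With this structure, slicing at a given height scans only the facets of one slab instead of testing every facet against every plane, which is the source of the reported reduction in slicing time.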
5,609.8
2008-09-30T00:00:00.000
[ "Computer Science", "Engineering" ]
Immunogenomic Landscape Analysis of Prognostic Immune-Related Genes in Hepatocellular Carcinoma Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related death. HBV infection is an important risk factor for the tumorigenesis of HCC, given that the inflammatory environment is closely related to morbidity and prognosis. Consequently, it is of urgent importance to explore the immunogenomic landscape to supplement the prognosis of HCC. The expression profiles of immune‐related genes (IRGs) were integrated with 377 HCC patients to generate differentially expressed IRGs based on the Cancer Genome Atlas (TCGA) dataset. These IRGs were evaluated and assessed in terms of their diagnostic and prognostic values. A total of 32 differentially expressed immune‐related genes resulted as significantly correlated with the overall survival of HCC patients. The Gene Ontology functional enrichment analysis revealed that these genes were actively involved in cytokine‐cytokine receptor interaction. A prognostic signature based on IRGs (HSPA4, PSME3, PSMD14, FABP6, ISG20L2, TRAF3, NDRG1, NRAS, CSPG5, and IL17D) stratified patients into high-risk versus low-risk groups in terms of overall survival and remained as an independent prognostic factor in multivariate analyses after adjusting for clinical and pathologic factors. Several IRGs (HSPA4, PSME3, PSMD14, FABP6, ISG20L2, TRAF3, NDRG1, NRAS, CSPG5, and IL17D) of clinical significance were screened in the present study, revealing that the proposed clinical-immune signature is a promising risk score for predicting the prognosis of HCC. Introduction Hepatocellular carcinoma (HCC) ranks seventh among malignant tumors in terms of incidence. According to the latest Global Cancer Statistics, 841,080 new incidents of HCC and 781,631 deaths occurred during the year 2018 [1]. China is a country with a high incidence of HCC, which accounts for more than half of the world's deaths [2]. With the development of modern medical science and technology, significant progress has been made in the treatment of HCC. As the clinical symptoms of early HCC are not typical, 70% to 80% of the patients have advanced disease at the time of diagnosis [3]. Existing treatment strategies are insufficient for patients with advanced HCC. erefore, identifying novel and sensitive biomarkers is of critical importance for the early diagnosis of HCC. In previous studies, the therapeutic response of HCC patients was stratified based on the identification of molecular biomarkers, such as genes, microinterfering RNA (miRNA), circular RNA (circRNA), and long noncoding RNA (lncRNA). Chen et al. reported a four-gene (KPNA2, CDC20, SPP1, and TOP2A) based signature, which could be a candidate prognostic factor for patients with HCC [4]. e deregulation of miRNA-122 has been related to an increased risk of developing HCC [5]. Also, the upregulation of miRNA-372 has been associated with tumor progression and prognosis in HCC [6]. Several circular RNAs such as circRNA_0001955 [7] and circRNA_101505 [8] have been identified as potential biomarkers for HCC diagnosis and prognosis. Moreover, a five-long noncoding RNAs signature has been reported to improve survival prediction and be used as a prognostic biomarker for HCC patients [9]. e liver is an essential organ for the proper functioning of the immune system, which is rich in various immune cells, and especially the cytotoxic T lymphocyte (CTL) that can recognize tumor antigens and eliminate the tumor cells from the tumor microenvironment. 
Over the last decade, cancer immunotherapy has proven to be a promising treatment protocol for various types of cancer [10,11]. Certain studies have shown that HCC cells, which are in a highly immunosuppressive microenvironment, can induce host immunosuppression and avoid autoimmune response by downregulating major histocompatibility complex-1 (MHC-1), secreting immunosuppressive cytokines, and mediating negative costimulatory signals [12,13]. Cancer immunotherapy can delay the progress of tumors by enhancing the immune response of the body, stimulating specific immunity of tumors, and breaking immune tolerance [14,15]. Over recent decades, immunotherapy has been applied in the treatment of various types of tumors [11,16,17] and immune checkpoint inhibitors have become potential effective treatment in patients with advanced HCC [10]. In September 2017, nivolumab was approved by the FDA for liver cancer as a second-line treatment after failure of sorafenib based on the data of the multicohort phase 1/2 trial CheckMate 040 [18]. New immunotherapy technologies, such as chimeric antigen receptor T cells (CAR-T) [19], T cell receptor genetically engineered T cells (TCR-T) [20,21], new antigen vaccines, and oncolytic viruses, have gradually found their application in clinic. ese clinical results fully demonstrate the importance of immunology in liver cancer, so it is crucial to understand these molecular mechanisms, especially immune gene effects. e emergence of public, large-scale gene expression datasets has enabled researchers to identify responsible biomarkers for tumor monitoring and surveillance with much accuracy [22,23]. e prognostic value of immunerelated genes (IRGs) was explored to develop an individualized immune signature, which could improve prognostic estimation in patients with nonsquamous non-small cell lung cancer [24]. e purpose of this research was to investigate whether IRGs have potential prognostic value for HCC and whether they can be used as biomarkers for immunotherapy. Initially, we combined the transcriptome RNA-sequencing data downloaded from TCGA to analyze the differentially expressed genes and differentially expressed immune-related genes in HCC. en, we integrated IRGs expression profiles with clinical information, applying computational methods for the assessment of overall survival (OS) in HCC patients. Making the best of the complementary value of IRGs expression profiles and clinical characteristics, we investigated the potential clinical utility of IRGs on prognostic stratification and their implicational potential as biomarkers for targeted HCC therapy. Eventually, we build an individualized prognostic signature, which may support HCC prognosis. e study has the following contributions in this regard: (i) e immune genes related differentially expressed genes (DEGs) were discovered, and an immunerelated gene-based prognostic index (IRGPI) consisting of 10 genes (HSPA4, PSME3, PSMD14, FABP6, ISG20L2, TRAF3, NDRG1, NRAS, CSPG5, IL17D) Transcriptome Expression Data and Clinical Information Acquisition. e transcriptome expression profiles and corresponding clinical information of hepatocellular carcinoma were downloaded from the Genomic Data Commons Data Portal of TCGA (https://cancergenome.nih.gov/), which contained data from 374 hepatocellular carcinoma and 50 noncancerous liver tissues. e IRGs list was derived from the Immunology Database and Analysis Portal (ImmPort) database [25]. Differential Gene Analysis. 
Differentially expressed genes (DEGs) between HCC and nontumor samples were screened with the R edgeR package (http://bioconductor.org/packages/edgeR/) to select DEGs related to hepatocarcinogenesis [26]. The raw data were normalized by the trimmed mean of M values (TMM) method implemented in the edgeR Bioconductor package. Gene expression comparison was carried out by calculating the fold change (FC) in HCC versus noncancerous liver tissue, with a false discovery rate (FDR) <0.05 and |log2 fold change| >1 as the cutoff values. Differentially expressed IRGs were then extracted from all DEGs. The functional enrichment Gene Ontology (GO) analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses of these differentially expressed IRGs were performed on the Database for Annotation, Visualization and Integrated Discovery (DAVID) (https://david.ncifcrf.gov/) [27,28]. Survival-Associated IRGs Analysis. The follow-up data of HCC patients were derived from TCGA's Pan-Cancer Atlas. Differentially expressed IRGs that were significantly correlated with overall survival (OS) in HCC patients were selected via univariate Cox analysis conducted with the R survival package. These survival-associated IRGs were also used for functional enrichment analysis. Copy number alteration data for these IRGs were obtained from cBioPortal (http://www.cbioportal.org/) [26]. To clarify the potential molecular mechanisms of these survival-associated IRGs, we focused on transcription factors (TFs), which are essential molecules that directly control the degree of gene expression. The expression profiles of 318 transcription factors (TFs) were downloaded from the Cistrome Cancer database, which is a valuable resource for experimental and computational cancer biology research [29]. In addition, a functional network between the TFs and these survival-associated IRGs was constructed with Cytoscape (version 2.8, http://cytoscape.org). Construction and Validation of the Immune-Related Gene-Based Prognostic Index (IRGPI). The selected survival-related IRGs were submitted to multivariate analyses, and the integrated IRGs that remained independent prognostic indicators were used to develop the IRGPI. Prognostic IRGs with a false discovery rate of less than 0.05 were candidates for calculating the risk score value. Based on the median risk score value, the IRGPI significantly stratified patients into high- and low-risk groups. The optimal IRGPI cutoff was determined by a time-dependent receiver operating characteristic (ROC) curve [30] at 5 years. Statistical Analysis. Gene functional enrichment analyses were performed using the R (version 3.6.1; https://www.r-project.org/) clusterProfiler package [31]. The AUC of the survival ROC curve was calculated with the survivalROC R package to verify the reliability of the prognostic signature [30]. Differences in clinical parameters were tested by independent t-tests. Statistical significance was defined as P < 0.05. Identification of Differentially Expressed IRGs. Transcriptional expression profiles and phenotype data of 377 HCC patients from the TCGA cohort were downloaded and integrated. Among them, there were 255 males and 122 females. The edgeR algorithm identified a total of 7,667 differentially expressed genes, 7,273 upregulated and 394 downregulated, with the thresholds of |log2FC| >1 and FDR <0.05 (Figures 1(a) and 1(b)).
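The screening itself was done with edgeR in R; purely to make the stated cutoffs concrete, the same filter can be expressed in Python against an edgeR-style results table. The column names 'logFC' and 'FDR' and the function name are assumptions for illustration only.

```python
import pandas as pd

def select_degs(results: pd.DataFrame, lfc_cutoff: float = 1.0, fdr_cutoff: float = 0.05):
    """Keep genes with |log2 fold change| > 1 and FDR < 0.05, then split them
    into up- and down-regulated sets, mirroring the cutoffs stated above."""
    degs = results[(results["logFC"].abs() > lfc_cutoff) & (results["FDR"] < fdr_cutoff)]
    up = degs[degs["logFC"] > 0]
    down = degs[degs["logFC"] < 0]
    return degs, up, down
```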
From this set of genes, we extracted 329 differentially expressed IRGs, including 267 upregulated and 62 downregulated (Figures 1(c) and 1(d)). As expected, gene functional enrichment analysis revealed that inflammatory pathways were most frequently implicated. "Immune response," "extracellular space," and "growth factor activity" were the most frequent biological terms among biological processes, cellular components, and molecular functions, respectively (Figure 2(a)). For the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, cytokine-cytokine receptor interactions were most often enriched by differentially expressed IRGs (Figure 2(b)). Identification and Characteristics of Survival-Associated IRGs. As predicting the prognosis is essential for clinical guidance, we focused on uncovering molecular biomarkers that could serve as viable prognostic indicators. In total, 32 IRGs that were significantly correlated with overall survival (OS) HCC patients (P < 0.001; Table 1) were identified. Protein-protein interaction (PPI) network analysis demonstrated that HSP90AA1, PSMD10, and PSMD2 were the three hub genes among these datasets ( Figure 3). A forest plot of expression profiles revealed that all of the 32 survivalassociated IRGs were upregulated in HCC samples (Figure 4(a)). Given the important clinical significance of these IRGs, genetic alterations of these genes were examined, revealing that mRNA upregulation and amplification were the two most commonly occurring types of mutations ( Figure 4(b)). TF Regulatory Network. To further understand the potential molecular mechanisms of these clinically related IRGs, we analyzed the regulatory mechanisms of these genes. e expression profiles of 318 transcription factors (TFs) were examined, and 117 were found to be differentially expressed between HCC and nontumor liver samples with the threshold of |log2FC| >2 and FDR <0.05 (Figures 5(a) and 5(b)). e correlation analysis was constructed between these 117 TFs and the 32 survival-associated IRGs with a correlation score of more than 0.6 set as the cutoff values. e regulatory schematic acutely illustrated the regulatory relationships among these IRGs ( Figure 5(c)). Evaluation of Clinical Outcomes. In this study, we developed a prognostic signature based on the results of multivariate Cox regression analysis to divide the HCC patients into two groups with discrete clinical outcomes with regard to OS ( . e immune-based prognostic index (IRGPI) significantly stratified patients into low-risk (IRGPI < median value) and high-risk (IRGPI > median value) groups in terms of overall survival (Figure 7(a)). e area under the curve of the receiver operating characteristic (ROC) curve was 0.826, which suggested the moderate potential for the prognostic signature based on IRGs in survival monitoring (Figure 7(b)). e clinical data and risk scores were analyzed by univariate and multivariate regression analysis. e P value of risk score was less than 0.001 in both univariate (Figure 7(c)) and multivariate regression (Figure 7(d)) analyses. ese results indicated that the IRGPI obtained by our model could be used as an independent predictor after adjusting for other parameters, including age, gender, grade, tumor stage, tumor size, distant metastasis status, and the amounts of nodules (Table 2). Clinical Utility of IRGPI. 
To further assess the clinical value of the immune-related gene-based prognostic index (IRGPI), the relationship between this hub survival-associated IRGs and clinical characteristics including age, gender, survival state, grade, pathological stage, T stage, M stage, and N stage were analyzed (Table 3). IRGPI showed a significant difference in survival state (Figure 8 (Figure 8(d)). However, no difference was observed between age, gender, and N stage. Discussion Hepatocellular carcinoma is also known as a clear example of inflammation-related cancer, given that more than 90% of HCCs arise in the context of hepatic injury and inflammation [32]. is fact highlights the importance of the differentially expressed IRGs. Previous studies have already addressed gene expression-based prognostic signatures in hepatocellular carcinoma [33,34], thus providing a fundamental understanding of the pathogenesis of HCC at the genetic level. However, there is no comprehensive study that explored the characteristics of IRGs in HCC. Consequently, we conducted this comprehensive, genome-wide profiling study of IRGs to explore their clinical significance and verify reliable prognostic biomarkers that could be used to select patients at the highest risk for recurrence. Bioinformatic systems make it possible to explore their molecular mechanisms more deeply. Immune characteristics in the tumor microenvironment are essential for the development of immunotherapies and the prediction of their clinical responses in cancers [35]. Our research focused on the comparison of immunogenomic profiles between hepatocellular carcinoma and healthy liver tissue, trying to identify some potential clinical implications. Gene functional enrichment analysis and KEGG suggested that these genes are mainly involved in growth factor activity and cytokine-cytokine receptor interactions, respectively. Hepatocyte growth factor (HGF) is the first factor to stimulate hepatocyte division and regeneration [36]. It also participates in enhancing angiogenesis, immune response, cell motility, and cell differentiation [37]. e interaction between HGF and hepatocytes can enhance HGF/c-Met signal transduction [38]. e expression of c-Met in HCC was higher than in surrounding tissues. Overexpression of c-met and other oncogenes have been identified as the causes of HCC invasiveness [39]. Our results showed that the change of the immune genome could affect the occurrence of hepatocellular carcinoma through the growth factor pathway. As hepatitis virus infection is the main cause of hepatocellular carcinoma [40], inflammatory response induced by cytokine-mediated immune response is considered the most important factor in the development of hepatocellular carcinoma [41]. Our KEGG analysis showed that the key immune regulatory molecules of hepatocellular carcinoma were mainly involved in the cytokine-cytokine receptor interaction pathway. Gene-regulatory networks modulate the entire process of gene expression and protein formation in living cells and therefore determine the fate of cells. TFs regulate gene expression by translating cis-regulatory codes into specific gene-regulatory events. In this study, we explored the main regulation consisting of transcription factors (TFs) and their impacting immune-related genes. Among HCC immunerelated genes, we identified the potential targets of TFs. ese datasets and their regulations were used to construct a comprehensive HCC immune-related genes TF mediated regulatory network. 
SIRT6, CENPA, and KDM1A are prominently featured in this network. It has been reported that SIRT6 overexpression in primary HCC tumors is correlated with tumor size and grade [42], while CEAPA, combined with KIF20A, PLK1, and NCAPG, form a 4-gene expression prognostic signature, which can be used to predict prognosis and to define a subgroup of high-risk HCC patients who could potentially benefit from JmjC inhibitor therapy [43]. GNPAT overexpression induced by c-myc/ KDM1A complex transcriptional activation has been confirmed to be related to the progression of HCC [44]. However, the relationship between these TFs and IRGs has not yet been confirmed. Our network is conducive to a better understanding of the potential molecular mechanism of these IRGs. Previously, Zhao et al. performed genome-wide methylation profiling of the different stages of hepatitis B virusrelated hepatocellular carcinoma [45]. Zucman et al. integrated signatures to study the genetic landscape and biomarkers of HCC [46]. Deng et al. and his team analyzed tumor microenvironment-related genes of prognostic value in hepatocellular carcinoma [47]. Although several HCC signatures based on immune-related genes have been developed recently [48][49][50][51][52][53], a more complete and reliable index that can predict both survival and immunotherapy success for HCC patients is urgently needed. In this study, we developed a prognostic signature based on ten immunerelated genes for hepatocellular carcinoma. Our prognostic immune signature can be used to stratify clinically defined HCC patients into subgroups with different survival outcomes and can be clarified as an immune status indicator. Interestingly, our data showed that IRGPI performed moderately in prognostic predictions and was correlated with age, tumor stage, metastasis, number of lesions, and tumor burden. We further leveraged the complementary value of molecular and clinical characteristics and showed that combining both could provide a more accurate estimation of overall survival in HCC. Conclusion e present study identified the immune genes associated with DEGs that were then used to construct and validate the immune-related gene-based prognostic index for predicting the outcomes of HCC patients. Further study of these immune genes associated with DEGs will provide a new understanding of the potential relationship between immune genes and HCC prognosis. Data Availability e data used in this work are provided by Baidu Picture and are available. Conflicts of Interest e authors declare that they have no financial and personal relationships with other people or organizations that can inappropriately influence the work.
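A minimal sketch of how an IRGPI-style risk score and median split could be computed from the ten-gene signature described above is given below. The Cox coefficients shown are placeholders, not the values fitted in this study, and the layout of the expression matrix (patients in rows, genes in columns) is assumed.

```python
import numpy as np
import pandas as pd

# Placeholder Cox regression coefficients for the ten signature genes;
# the actual weights come from the paper's multivariate model and are not reproduced here.
COEFFICIENTS = pd.Series({
    "HSPA4": 0.10, "PSME3": 0.20, "PSMD14": 0.15, "FABP6": 0.05, "ISG20L2": 0.10,
    "TRAF3": -0.10, "NDRG1": 0.08, "NRAS": 0.12, "CSPG5": 0.06, "IL17D": -0.05,
})

def irgpi_risk_groups(expr: pd.DataFrame) -> pd.DataFrame:
    """Compute a risk score as the coefficient-weighted sum of gene expression
    and split patients into groups at the median score."""
    risk = expr[COEFFICIENTS.index].mul(COEFFICIENTS, axis=1).sum(axis=1)
    group = np.where(risk > risk.median(), "high-risk", "low-risk")
    return pd.DataFrame({"risk_score": risk, "group": group}, index=expr.index)
```

The per-patient scores produced this way can then be fed to the Kaplan-Meier and time-dependent ROC analyses described above.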
3,988.6
2021-10-29T00:00:00.000
[ "Biology", "Medicine" ]
Students' Hypothetical Learning Trajectory (HLT) in Learning Fraction Division Calculation Operations This research aims to obtain a comprehensive picture of the hypothetical learning trajectory (HLT) that should be developed based on students' learning trajectory (LT) in learning fraction division arithmetic operations in grade V elementary school. The HLT was developed based on the analysis of the Learning Implementation Plan document made by the teacher by considering the students' learning trajectory on the material of fraction division arithmetic operation, including the learning carried out by the teacher in the classroom, examining aspects of learning obstacles that occur in the learning process, and examining what didactical situations will be built, predicting student responses that may occur to the situation created, and determining didactical and pedagogical anticipation of these responses. The HLT in this study started with the context of generating the idea of number division, learning the concept of number division such as natural numbers divided by natural numbers, and recalling the concept of fractions and the concept of division of fractions. The learning objective designed in the HLT is that students can solve problems with at least two ways related to division of fractions. INTRODUCTION Fractions are one of the most complex and important mathematics materials in elementary school [1], [2].Learning mathematics on fraction material is still considered difficult [3], [4], [5], [6].The difficulty in learning the concept of fractions is because the ability to understand fractions as part of a whole expressed in symbols requires special understanding.The concept of fractions is seen as a concept that is difficult to learn and difficult to teach.This can pose pedagogical challenges for teachers on an ongoing basis, especially in teaching mathematics [7].One of the main factors contributing to the complexity of learning fractions is the multi-faceted notion that includes five interrelated sub-constructions (i.e., part-whole, ratio, operation, quotient, and measure) [8].Much of the confusion with fractions are related to different interpretations (constructions), representations (models), and coding conventions.The debate causes fraction material to be considered difficult by students [9].Fractions are a topic area that many teachers are challenged to learn and teach [10]. 
Fraction material that is considered difficult is fraction calculation operations.In the fraction calculation operation material, students' difficulties are when solving story problems.The cause of student difficulties is due to students' lack of understanding of the basic concepts of fractions, misapplication of fractions in solving fraction problems, carelessness in understanding the language of the problem, lack of understanding of the prerequisite material, and errors in the computation or calculation process [11], [12], [8], [2].Student errors related to story problems, incorrect generalization of learned rules for fractions, considering numerators and denominators as whole numbers, not learning the division operation of fractions conceptually, and associating division with addition, subtraction, and multiplication operations [13].The causes of errors made by students in solving fraction arithmetic operations include students lacking understanding of the concepts of addition, subtraction, multiplication, and division of fraction arithmetic operations, students lacking ability in systematic steps to solve fraction arithmetic operations, and students lacking accuracy in performing calculations [13].[14]. The material for calculating the division of fractions, when viewed from the KI and KD in Permendikbud-RI No. 37 of 2018 [15], is found in grade V with KD "3.2.Explain and perform multiplication and division of fractions and decimals and 4.2.Solve problems related to multiplication and division of fractions and decimals".Furthermore, to further strengthen the initial suspicion related to problems in the material of calculating the division of fractions, the researcher conducted a preliminary study.The study was conducted on mathematics learning in grade VI public elementary schools in Bandung City, especially on the material of fraction division calculation operations in the form of story problems.The reason for choosing grade VI as a preliminary study was because the position at the time of data collection was carried out at the beginning of the odd semester and students had already received material on the calculation of division of fractions in the previous class. The results of observations and interviews with teachers found that learning activities had not been directed at a problem and ended in problem solving.The nature of good mathematics learning is that it begins with the submission of problems and aims to solve problems [16].This is what seems to have not been well developed in the student learning process on the material of calculating the division of fractions.As previously stated, the learning interaction is still dominated by the teacher when discussing material and solving a problem.Students are less involved in the process. An indication of the lack of or perhaps not going through a series of mental actions students can cause students' way of thinking related to fraction operation material to be limited [16].The limited way of thinking of students related to fraction arithmetic operation material also results in how to generate understanding and how to find problem-solving strategies in students related to fraction arithmetic operation material is not well facilitated.So, it is necessary to design learning activities that can create situations where students can develop their understanding of the problems given. 
The services provided by teachers to help students overcome learning obstacles in the material on calculating the division of fractions are not optimal. The activities carried out by the teacher consist only of question-and-answer sessions and re-explanation of the learning materials, without identifying the causes of these obstacles or solutions to overcome them, so the learning objectives that have been set are not achieved. Students should not be treated as passive recipients who merely receive material and apply formulas and procedures to solve a problem. Students should be given the opportunity and guided into situations in which they rediscover mathematical concepts in their own way [17]. These findings became the basis for the researchers to design learning activities based on students' learning trajectories. To accommodate differences in students' learning trajectories, teachers need to design a hypothetical learning trajectory (HLT) so that the designed learning objectives can be achieved. This research aims to obtain a comprehensive picture of the hypothetical learning trajectory (HLT) that should be developed based on students' learning trajectories (LT) in learning fraction division arithmetic in grade V of elementary school. MATERIAL AND METHODS An HLT is designed based on the learning objectives to be achieved, activities that support those objectives, and mathematical hypotheses in the form of conjectures about what is expected to occur to students according to their thinking abilities [18]. The HLT was prepared based on the analysis of the Learning Implementation Plan (RPP) document made by the teacher, taking into account students' learning trajectories on the material of fraction division calculation operations. The steps taken by the researcher in preparing the HLT were as follows: 1) a theoretical study of students' learning trajectories in the age range of 10-11 years (grade V); 2) a study of the history and an in-depth study of the theory of fraction division calculation material; 3) a review of the curriculum and the mathematics textbooks used by grade V students, including the learning conducted by teachers in the classroom; 4) an examination of the learning obstacles that occur in the learning process and of how to minimize them from both the student and teacher sides; and 5) an examination of the didactical situations to be built, predictions of students' possible responses to the situations created, and the didactical and pedagogical anticipations of these responses.
RESULTS AND DISCUSSION The results of the analysis of the Learning Implementation Plan (RPP) document made by the teacher found a mismatch between the material, method, and use of media and learning design with the demands of thinking and student characteristics.This mismatch causes didactical barriers for students.Didactic barriers as learning obstacles or difficulties caused by the state of the didactic design used or the teacher's didactic intervention [19].Some things that need to be considered in designing lesson plans and implementing learning such as the ability to compile and present material must pay attention to the order of the material, both structurally (interrelationships between concepts), as well as functionally (suitability for students' level of thinking), and the stages of presenting the material.Selection of methods that can make students active in learning and explore the potential of students' thinking in solving problems.The use of media can facilitate students' understanding of the material.The function of teaching aids is to avoid the abstractness of a concept and can capture the true meaning of the concept [20].For this reason, in choosing objects around students that will be used as props in instilling an understanding of the concept of division of fractions must be careful.If the selection of media is not appropriate, it is likely, the concept of division of fractions that will be instilled will not be captured properly by students. Three reasons for the low conceptual understanding of students on fractions from The Trends in International Mathematics and Science Study (TIMSS) results are 1) the content of the Indonesian curriculum which places low emphasis on the basic concept of fractions and introduces fraction operations too early; 2) Indonesian mathematics textbooks present only one definition of fractions, namely as part of a whole; and 3) there is limited use of models or representations of fractions in classroom practice [21]. Problems in fractions need to be discussed and found a solution because fraction material in the 2013 curriculum is not only taught in elementary schools starting from grade II to grade V and in junior high schools (SMP) is also taught in grade VII with a lot of material coverage [22].In addition, teachers must also know the juridical basis underlying the importance of learning fractions, by reviewing the Core Competencies (KI) and Basic Competencies (KD) of the lessons in the 2013 Curriculum, especially the content of mathematics subjects in elementary school as stated in the Regulation of the Minister of Education and Culture of the Republic of Indonesia No. 37 of 2018 [15].Then, making a comparison of the curriculum in Indonesia with the curriculum in the National Council of Teachers of Mathematics (NCTM) and other developed countries such as Singapore, Japan, and Finland, as shown in Figure 1 below: Source:Singapore -TIMSS 2015 Encyclopedia (bc.edu) [23] The Content Standards for Mathematics to real-life situations that exist in it [25]. 
Creating a classroom environment that normalizes errors as part of the learning process is something that teachers need to do.It is important to involve students in assessing mathematical errors and misconceptions.This is to ensure that students have a deep conceptual understanding, as well as to link new knowledge to prior knowledge correctly [26].Misconceptions can never be completely avoided, but teachers can intervene before misconceptions take root.First, teachers must understand why their students make mistakes or how they develop misconceptions before they can address them and develop interventions to promote correct understanding [27], [26].What teachers can do is anticipate the learning barriers that students experience in learning.Students need to be directed in their way of thinking in understanding knowledge in learning mathematics.In addition, teachers also need to pay attention to students' learning flow. Hypothetical Learning Trajectory (HLT) contains a series of instructional tasks to provide students with an understanding of mathematics learning concepts.HLT is one of the important aspects that must be designed by teachers to make learning more meaningful [28].In this study, HLT was developed based on the findings of learning obstacles experienced by students, causal factors, and learning objectives that were not achieved during the learning process of fraction division calculation operation material before the application of the initial didactical design.The learning objectives designed and not achieved were as follows: 1).Through the activity of manipulating folding paper, students can explain the concept of division of fractions correctly; and 2).Through the teacher's explanation, students can solve fraction division arithmetic operation problems correctly (Source: teacher's lesson plan).These learning objectives are developed based on Basic Competency (KD) 3.2.Explain and perform multiplication and division of fractions and decimals and 4.2.Solve problems related to multiplication and division of fractions and decimals (Source: Teacher's lesson plan). The preparation of HLT according to Simon [29] explained that the presumptive learning path is an activity plan prepared by the teacher in anticipating possible student learning paths by considering knowledge acquisition, level of understanding, and selection of learning activities with mathematics learning objectives.The preparation of the presumptive learning path is based on the learning objectives to be achieved in the form of learning stages in the form of a series of didactical situations that are mutually sustainable [30], [31].The preparation of HLT needs to pay attention to the stage of thinking and cognitive development of students [32].Gravemeijer [33], [32] described the levels of conceptual learning trajectories, namely: situational, referential, general, and formal.At the situational level, the student's position is in the context of a particular situation.Referential level, the explanation of the situation in the problem refers to the application of models and strategies.The general level is on the mastery of strategies based on the given context.Formal level, performing actions by applying conventional procedures and notations.After the four stages are passed by students, they are expected to be able to apply the concept to new problems in different contexts. 
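The contrast between the informal and formal levels described above can be made concrete with a small worked illustration of fraction division; the numbers below are the editor's own and do not come from the study. At the situational or referential level, the question 3/4 ÷ 1/4 can be read as "how many quarter-pieces fit into three quarters?"; folding a strip of paper into quarters shows directly that the answer is 3, with no algorithm involved. A picture or area model handles a case with a remainder: for 3/4 ÷ 1/2, shading three quarters of a rectangle and marking off one half leaves a quarter, which is half of another half, so the answer is 1 1/2. At the formal level the same result follows from the conventional invert-and-multiply procedure, 3/4 ÷ 1/2 = 3/4 × 2/1 = 6/4 = 3/2, which agrees with the informal answer and illustrates how the trajectory can move from concrete models to conventional procedures and notation.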
Based on the review and analysis of the researcher's findings, the HLT designed for learning fraction division calculation materials is presented in Figure 2. .explains that HLT in learning division of fractions starts with the context of bringing up the idea of number division such as students being given problems that are close to students' daily lives.The problem-solving process uses the help or manipulation of real objects and structured images.Furthermore, the learning of number division concepts such as natural numbers divided by natural numbers whose solutions use repeated subtraction, number lines, pictures, or flat shapes without using algorithms first, then algorithms. The next lesson is to recall the concept of fractions such as understanding, symbols, types of fractions, fractions worth, and simplifying fractions.In the learning process, students can use real objects, folded paper, and paper with patterns, that Students are expected to be able to express fractions in the form of number lines, drawing approaches, or the area of flat buildings with square, rectangle, triangle, circle, trapezoid, and parallelogram shapes.After students understand the concept of fractions, they proceed with the concept of division of fractions, such as students are introduced to the definition of division of fractions, the application of division operations on types of fractions, and how to solve division on fractions such as using the number line approach, pictures or flat area approach, multiplication approach and problem-solving story problems adaptation of Polya's Heuristics [34]. The learning objective designed in the HLT is that students can solve problems in at least two ways related to the division of fractions.The form of problem is packaged in the form of story problems and mathematical sentences.The two ways in question are adjusted to the form of the problem given, for example, the problem given in the form of a mathematical sentence can use a number line and a picture or a flat area approach, a number line and a multiplication approach, a picture or flat area approach, and a multiplication approach.If the problem given is a story problem, then the way to answer it can use a number line and a picture or a flat area approach, a number line and a multiplication approach, a number line and solving Polya's Heuristic story problem, a picture or a flat area approach and a multiplication approach, a picture or a flat area approach and solving Polya's Heuristic story problem, a multiplication approach and solving Polya's Heuristic story problem. The division calculation operations on fractions introduced to students are a division of natural numbers by natural fractions; natural fractions divided by natural numbers; natural fractions divided by natural fractions; natural numbers divided by mixed fractions; mixed fractions divided by natural numbers; mixed fractions divided by natural fractions, and mixed fractions divided by mixed fractions. Based on the material description above, the researcher made a concept map related to the division of fractions presented in the form of a chart.This aims to provide an overview of the limitations or breadth of the material to be studied.In the chart, it is briefly studied starting from the definition of division of fractions, namely the division operation that applies to fractions, the application of division operations on types of fractions, and how to solve the problem.Figure 3. below, presents a chart of fraction division operation material. 
Figure 3 chart headings: Definition of Fraction Division; Division Calculation Operations on Types of Fractions; How to Solve the Division of Fractions. The division calculation operation used on fractions can generally be written as a/b ÷ c/d = a/b × d/c (with b, c, d ≠ 0). Figure 3 explains that, to achieve the learning objectives, students first understand the position of the fraction or number being divided and of the fraction or number acting as the divisor; students are also allowed to learn the application of division operations on types of fractions to make it easier to search for and find various alternative ways to solve problems related to the division of fractions. CONCLUSIONS The Hypothetical Learning Trajectory (HLT) of students in learning the fraction division arithmetic operation material in grade V of elementary school (SD) starts with a context that brings up the idea of number division, such as students being given problems related to their daily lives. The problem-solving process uses the help of, or the manipulation of, real objects and structured images. This is followed by learning the concept of number division, such as natural numbers divided by natural numbers, solved using repeated subtraction, number lines, pictures, or flat shapes without using algorithms first, and then with algorithms. The next lesson recalls the concept of fractions, such as their meaning, symbols, types of fractions, equivalent fractions, and simplifying fractions. In the learning process, students can use real objects, folded paper, and patterned paper; students are expected to be able to express fractions in the form of number lines, pictures, or approaches based on the area of flat shapes such as squares, rectangles, circles, triangles, trapezoids, and parallelograms. After students understand the concept of fractions, they proceed to the concept of division of fractions: students are introduced to the definition of division of fractions, the application of division operations to types of fractions, and how to solve division of fractions using the number line approach, the picture or flat area approach, the multiplication approach, and the solving of story problems adapting Polya's heuristics. The learning objectives designed in the HLT are that students can solve problems related to the division of fractions in at least two ways. Figure 1. The Content Standards for Mathematics. Figure 1 above explains in general the comparison results obtained: mathematics learning materials include Numbers, Algebra, Geometry, and Statistics. Number material in NCTM and developed countries is given at the beginning of learning mathematics, as it is in Indonesia. This is not without reason, because number material is a prerequisite for students to learn the subsequent material. The reasons why numbers are so important for students to learn include: (1) number is the first material taught formally in school; (2) number is a basic part of the material in mathematics; and (3) number operations and applications are related to real-life situations that exist in it [25]. Figure 2. HLT for learning the division of fractions.
Figure 3. Concept of fraction division calculation operation
4,713.8
2023-04-12T00:00:00.000
[ "Mathematics", "Education" ]
Long time dynamics of von Karman evolutions with thermal effects ∗ This paper presents a short survey of recent results pertaining to stability and long time behavior of von Karman thermoelastic plates. Questions such as uniform stability and associated exponential decay rates for the energy function, existence of attractors in the case of internally/externally forced plates along with properties of attractors such as smoothness and dimensionality will be presented. The model considered consists of undamped oscillatory plate equation strongly coupled with heat equation. There are no other sources of dissipation. Nevertheless it will be shown that that the long-time behavior of the nonlinear evolution is ultimately finite dimensional and ”smooth”. In addition, the obtained estimate for the dimension and the size of the attractor are independent of the rotational inertia parameter γ, which is known to change the character of dynamics from hyperbolic (γ > 0) to parabolic like (γ = 0). Other properties such as additional smoothness of attractors, upper-semicontinuity with respect to parameter γ and existence of inertial manifolds are also presented. Introduction In what follows below we shall describe model under consideration which is thermoelastic von Karman plate subjected to an external and internal forcing.Other types of nonlinearities (eg Berger's plates) can be considered as well -see [24,10,11] -however for the sake of concretness we limit ourselves to von Karman nolinearities which are representative of major mathematical difficulties encountered. The corresponding equations (see, e.g., [43,45] and the references therein) have the following form    where Ω is a bounded domain in R 2 with the boundary ∂Ω = Γ, ∆ denotes the Laplace operator, F 0 and p are given functions with regularity specified later.Von Karman bracket [•, •] is given by and Airy's stress function v = v(u) is a solution to the problem The temperature θ satisfies the Dirichlet boundary condition : θ = 0 on Γ.The boundary conditions imposed on the displacement u are either "clamped": where ν is the outer normal vector, or else "hinged (simply supported)": The parameters α and η are positive and γ is non-negative.Parameter γ is proportional to the square of the thickness of the plate and in some models it is neglected (i.e.γ = 0).The case γ > 0 corresponds to taking into account rotational inertia of filaments of the plate.The characteristics of these two models, particularly with respect to stability analysis, are very different.From physical point of view the main peculiarities of the model in ( 1) are (i) possibility of large deflections of the plate and (ii) small changes of the temperature near the reference temperature of the plate (which is reasonable in the absence of phase transitions).We refer to [43,45,27,38] for further discussions and references. 
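The displayed system labelled (1)-(3) in the original paper did not survive extraction. As a hedged reconstruction based on the standard thermoelastic von Karman model in the literature cited above (sign conventions for the thermal coupling vary between papers, so this is a sketch rather than the authors' exact display), the equations read roughly as

(1 − γ∆) u_tt + ∆²u + α∆θ = [u, v(u) + F_0] + p   in Ω × (0, ∞),
θ_t − η∆θ − α∆u_t = 0   in Ω × (0, ∞),

with the von Karman bracket

[u, w] = u_xx w_yy + u_yy w_xx − 2 u_xy w_xy,

and the Airy stress function v = v(u) determined by

∆²v = −[u, u] in Ω,   v = ∂v/∂ν = 0 on Γ.

This form is consistent with the abstract equations u_tt − αAθ + A²u = B(u), θ_t + ηAθ + αAu_t = 0 quoted later in the paper for the hinged case.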
The main aim in this paper is to provide a survey of results pertinent to wellposedness and long time behavior of the thermal von Karman evolutions described by (1), (2)., Particular emphasis will be placed on dependence of regularity and long time characteristics with respect to varying parameters 0 ≤ γ ≤ M γ for some (fixed) positive constants M γ .For simplicity we will be taking M γ = 1.This includes questions such as: (i) existence and uniqueness of weak solutions, (ii) uniform stability for the unforced system, (iii ) existence of a compact global attractor and its structure, (ii) smoothness and finite dimensionality of the attractor, (iii) uniform decay rates to equilibria, and (iv) upper semi-continuity of family of attractors (with respect to the parameters γ and (v) existence of inertial manifolds. In order to point out timeliness of the topic under consideration, we wish to note that the issue of uniform decay rates for linear, unforced, thermoelastic plates has been settled down only recently.Indeed, results of the previous literature did require an addition of mechanical damping (boundary or interior), in order to force exponential decay rates for the energy function, see [44] and references therein.Instead, recent progress in the area of control theory and inverse problems, [1,3,8,9,36,41,43,51,54,57] has provided a stimulus to the field and produced an array of results on controllability, analyticity (when γ = 0) and uniform stability without any mechanical dissipation.In fact, it was shown in [2] that not only linear thermoelastic plates with either hinged or clamped boundary conditions are exponentially stable without any mechanical dissipation, but also that the decay rates are independent on the values of rotational parameter γ ≥ 0. It is a purpose of this paper to provide fairly complete description of long time behavior of thermoelastic plates driven by von Karman nonlinearity with both internal and external forcing and without any mechanical (viscous or structural) dissipation. In order to gain a better understanding of the problem under consideration, one should note that topological behavior of the model is strongly dependent on the parameter γ ≥ 0 .It is by now well known, that the parameter γ changes drastically the linear dynamics from analytic γ = 0 to hyperbolic-like γ > 0 [53,54].This implies that the flow has additional regularity for the limit case γ = 0, while these properties completely disappear when γ > 0. Our main challenge is to characterize long time behavior of the thermal plates, uniformly with respect to the values of the parameter γ ≥ 0. This includes: (i) seeking an upper bound for dimensionality of attractors that are uniform in γ and κ, (ii) seeking an uniform measure of regularity enjoyed by trajectories evolving on the attractor, (iii) establishing upper semicontinuity with respect to γ. 2. Generation of a semi-flow and its properties. Abstract form of the problem. 
In what follows we assume that the domain Ω is either smooth or rectangular.We denote by H s (Ω) the L 2 -based Sobolev space of the order s and by H s 0 (Ω) the closure of C ∞ 0 (Ω) in H s (Ω).We also use the following notation: In the space H = L 2 (Ω) we define the operator A by the formula and consider the operator M γ = I +γA.It is well-known that the both operators A and M γ are positive self-adjoint operators in H.We also introduce the biharmonic operator In the "commutative" case of hinged boundary conditions (5) one has A 1 = A 2 which provides a lot of symmetry for the problem.Indeed, all the operators A, M γ and A 1 do commute.This feature simplifies substantially the analysis with respect to clamped case (4), where the latter requires several additional technical estimates that account for the lack of commutativity. We also introduce nonlinear mapping B(•) by the formula where v(u) ∈ H 2 (Ω) is determined by u via (3).With the above notation, equations in (1) with the boundary conditions considered can be written in the form We equip (7) with initial data u| t=0 = u 0 , u t | t=0 = u 1 , θ| t=0 = θ 0 .We note that long-time dynamics of the models in (7) with the hinged boundary conditions and γ = 0 has been studied in [19] in the context of inertial manifolds. 2.2.Nonlinear semigroup.We begin by introducing appropriate phase (energy) spaces H γ which capture dependence on the varying parameters γ.Our aim here is to present well-posedness of a continuous semi-flow for the models (7).By this we mean existence, uniqueness and continuous dependence of solutions with respect to initial data and t > 0. For every γ ≥ 0 we introduce the Hilbert space where γ ) which is H 1 0 (Ω) for γ > 0 and L 2 (Ω) for γ = 0. We equip the space H γ with the norm We have that D(A We begin the discussion of wellposedness of weak solution by considering first linear problem, ie when B(u) = 0.In this case standard application of Lumer-Phillips Theorem yields an existence of a strongly continuous semigroup of contractions S γ t defined on H γ .However, the properties of this semigroup are very different for γ > 0 and γ = 0. We have section 1. • γ = 0: In the case rotational inertia are not accounted, the semigroup S 0 t is analytic on H 0 .[57,54,47] • γ > 0: In the rotational case, the semigroup S γ t has predominantly hyperbolic character.More specifically, it can be written as where T γ t is a group and K t is compact for every t > 0. [53,48]. Theorem 1 remains valid also in the case of "free" boundary conditions [53,50].Though, in this latter case the proofs are more delicate. Wellposedness of solutions in the nonlinear case is more subtle.For the case γ = 0 the analysis of wellposedness relies on the additional regularity of the semigroup (analyticity).However, arguing this way, the estimates representing wellposedness and continuous dependence on the initial data do depend on γ.Instead, by using sharp regularity of Airy's stress function, (see [31] and also Lemma 1.2 below) this can be avoided.As the result one obtains wellposedness theory with the estimates independent on γ ≥ 0. 
• Existence and Uniqueness for all initial data which depends continuously on the initial data.This solution satisfies the energy balance equality for all t ≥ s ≥ 0, where E γ (u, u t , θ) is the energy functional for the model (7) given by with Moreover, when γ = 0, where the constant a R > 0 does not depend on γ ≥ 0 The above Proposition allows to define a strongly continuous semi-flow -semigroup S γ t acting on H γ .The main idea behind the proof [26] is to consider the nonlinear evolution as a locally Lipschitz perturbation of a contraction linear semigroup on H γ .Indeed, the nonlinear term B(u) is locally Lipschitz, on the strength of regularity result given in (16).The key role in our analysis is played by the following regularity of von Karman bracket (2).Lemma 1.2 ( [25,31]).Assume that Ω is either smooth, bounded domain or a rectangular domain.Let ∆ −2 denotes the inverse of ∆ 2 supplied with clamped boundary conditions.Then the bilinear map We also have the following estimates Consequently, We note that standard regularity of Airy's stress function [56] will not be sufficient for most of the arguments in this paper.In fact, regularity in (17) does not imply (16), where the latter is essential for the analysis to follow. The solutions to problem (7) generate a family of dynamical systems with the phase spaces H γ given by (8).The evolution operator S γ t is given by the formula , where u(t) and θ(t) solve (7) with initial data in H γ .So, in all cases considered we have well defined semi-flow on the space H γ .When γ > 0, the corresponding semi-flow is predominantly hyperbolic.When γ = 0 the semi-flow is parabolic like. In what follows below we discuss "regular solutions".The existence of such is asserted below.Proposition 1.3 (Regular solutions, [26]). For the initial data such that the corresponding solutions (u(t), θ(t)) to problem (7) have the following regularity: Moreover w(t) = u t (t) and ξ(t) = θ t (t) solves the following equations with an appropriate initial data. 
γ ) in the case γ > 0 [33] and thus by Closed In the case γ = 0 we obviously have that 2.3.Backward uniqueness of the semi-flow.Backward uniqueness for thermoelastic nonlinear plate, beside being of interest in its own rights, arises as an issue in the context of studying properties of attractors.Indeed, it becomes a tool in proving certain characteristics of attractors.Since the thermoelastic dynamics is represented by a continuous semi-flow -and not a flow -the issue of backward uniqueness is far from obvious.When γ = 0 the analyticity of the underlying linear semigroup provides a tool (see, e.g., [37,Sect.7.3]) for the backward unique continuation.However, when γ > 0, the problem is more subtle due to parabolichyperbolic mixing of the dynamics.In fact, even for linear thermoelastic plates with time independent coefficients, this property has been shown only recently [55] by using complex analysis methods.Backward uniqueness, quantitatively, means that two trajectories coinciding at a given time t > 0 must coincide also at any earlier time.Precise formulation of the corresponding backward uniqueness result is given below.Proposition 1.4 (Backward Uniqueness, [40,26]).Let p ∈ L 2 (Ω) and Then the following statements hold: ) and (u 2 (t), θ 2 (t)) be two solutions of equations (7) on an interval [0, T ] such that ) and (w(t), ξ(t)) be a solution to the linear (nonautonomous) equations (19) such that The proof of Proposition 1.4, given in [26], is based on adaptation of technique presented in [40], where linear and unforced thermal plates with space and time dependent coefficients are considered. Backward uniqueness is a fundamental property not only in stability theory but also in controllability theory. Stationary solutions. We introduce the set of stationary points of S γ t denoted by N (as we see below this set does not depend on γ ): One can see that every stationary point V has the form V = (u, 0, 0) where u = u(x) ∈ H 2 (Ω) is a weak (variational) solution to the problem with the corresponding boundary condition (either (4) or ( 5)), where the function v(u) solves (3).In particular, stationary point do not depend on the parameters γ, α and η.One can also see that and p L 2 (Ω) only.We use this fact in [26] to prove some uniform estimates for the attractor.It follows from the corresponding energy relation, the full energy E γ given by ( 10) ) is non-increasing.Therefore the set is forward invariant for every R > 0, i.e., S γ t E γ R ⊂ E γ R for t ≥ 0. One can also see, because of the topological equivalence between the norm induced by the energy and the topology of H γ that there exists R * 0 ≥ R 0 which depends on F 0 W 2,∞ (Ω) and p L 2 (Ω) only such that N ⊂ E γ R * 0 .As we see below this property makes it possible to prove that the global attractor belongs to the set {U ∈ H γ : |U | γ ≤ R * }, where R * depends on F 0 W 2,∞ (Ω) and p L 2 (Ω) only. Attractors for abstract dynamical systems We recall (see, e.g., [6,17,65]) that by definition a global attractor for a dynamical system (X, S t ) on a complete metric space X is a closed bounded set A in X which is invariant (i.e. S t A = A for any t > 0) and uniformly attracting, i.e. lim t→+∞ sup y∈B dist X {S t y, A} = 0 for any bounded set B ⊂ X. Remark 2. 
It follows directly from the definition of that a global attractor for (X, S t ) is a collection of all bounded full trajectories of the semi-flow S t .We recall the a continuous curve γ = {u(t) : t ∈ R} in X is said to be a full trajectory, if S t u(τ ) = u(t + τ ) for all t ≥ 0 and τ ∈ R. We will use this simple observation in the study of continuity properties of attractors with respect to parameters. Let N be the set of stationary points of the dynamical system (X, S t ), i.e. We define the unstable manifold M u (N ) emanating from the set N as a set of all y ∈ X such that there exists a full trajectory γ = {u(t) : t ∈ R} with the properties u(0) = y and dist X (u(t), N ) → 0 as t → −∞.It is clear that M u (N ) is an invariant set.It is also easy to prove (see, e.g., [6], [17] or [65]) that if the dynamical system (X, S t ) possesses a global attractor A, then M u (N ) ⊂ A. For gradient systems it is possible to prove that M u (N ) = A. We give the following definition (see [6,17,34,42,65]).Definition 2.1.A dynamical system (X, S t ) is said to be gradient if it possesses a strict Lyapunov function, i.e. there exists a continuous functional Φ(y) defined on X such that (i) the function t → Φ(S t y) is nonincreasing for any y ∈ X, and (ii) the equation Φ(S t y) = Φ(y) for all t > 0 and for some y ∈ X implies that S t y = y for all t > 0, i.e. y is a stationary point of (X, S t ). It follows from energy relation ( 9) that the the energy E γ (u, u t , θ) is a strict Lyapunov function for the dynamical system (H γ , S γ t ).Thus this system is gradient. We have the following result on the structure of a global attractor (for the proof we refer to any book from the list [6,17,34,42,65]).section 3. Let a gradient dynamical system (X, S t ) possess a compact global attractor A. Then A = M u (N ).Moreover the global attractor A consists of full trajectories γ = {u(t The following assertion describes long-time behavior of individual trajectories (for the proof we refer to [17] or [65], for instance).section 4. Assume that a gradient dynamical system (X, S t ) possesses a compact global attractor A. Then for any x ∈ X we have lim t→+∞ dist X (S t x, N ) = 0, i.e. any trajectory stabilizes to the set N of stationary points. Theorems 3 and 4 imply the following assertion. Corollary 4.1.Assume that a gradient dynamical system (X, S t ) possesses a compact global attractor A and N = {e 1 , . . ., e n } is a finite set.Then A = ∪ n i=1 M u (e i ), where M u (e i ) is the unstable manifold of the stationary point e i , and (i) the global attractor A consists of full trajectories γ = {u(t) : t ∈ R} connecting pairs of stationary points, i.e. any u ∈ A belongs some full trajectory γ and for any γ ⊂ A there exists a pair {e, e * } ⊂ N such that u(t) → e as t → −∞ and u(t) → e * as t → +∞; (ii) for any v ∈ X there exists a stationary point e such that S t v → e as t → +∞. The following assertion provides exponential rate of stabilization to the attractor along with some additional properties of the attractor (see, e.g., [6], [34] and also Theorems 4.7 and 4.8 in the survey [62]).section 5.In addition to previous hypotheses, asssume that (i) an evolution operator S t is C 1 , (ii) the set N of equilibrium points is finite and all equilibria are hyperbolic, and (iii) there exists a Lyapunov Φ(x) function such that Φ(S t x) < Φ(x) for all x ∈ X, x ∈ N and for all t > 0. Then • For any y ∈ X there exists e ∈ N such that S t y − e X ≤ C y e −ωt , t > 0. 
Moreover, for any bounded set B in X we have that Here above A is a global attractor, C y , C B and ω are positive constants, ω in (22) depends on the minimum, over e ∈ N , of the distance of the spectrum of D[S 1 e] to the unit circle in C. Asymptotic smoothness is the most critical property which is necessary for the existence of a compact global attractor.There are several approaches to the proof of this property.For instance, we can use either a splitting method (see [6,17,34,65] and the references therein) or the method of energy type identities (see [7] and also the survey [62]).However the stabilizability estimate which we prove in [26] makes it possible to apply the following criterium (see [12,34] and also [24] for some generalizations) for the proof of asymptotic smoothness of the dynamical system (H γ , S γ t ) generated by (1).section 6.Let (X, S t ) be a dynamical system on a complete metric space X endowed with a metric d.Assume that for any bounded positively invariant set B in X there exist numbers T > 0 and 0 < q < 1, and a pseudometric T B on C(0, T ; X) such that (i) the pseudometric T B is precompact (with respect to X) in the following sense: any sequence {x n } ⊂ B has a subsequence {x n k } such that the sequence for every y 1 , y 2 ∈ B, where we denote by {S τ y i } the element in the space C(0, T ; X) given by function y i (τ ) = S τ y i . Then (X, S t ) is an asymptotically smooth dynamical system. An important characteristic of a global attractor is its (fractal) dimension.We recall that the fractal dimension dim X f M of a compact set M in a complete metric space X is defined by where N (M, ε) is the minimal number of closed sets in X of the diameter 2ε which cover the set M .We note that fractal (dim X f M ) and Hausdorff (dim H M implies the finiteness of the Hausdorff dimension and lower bounds for dim X H M provide us with lower bounds for the fractal dimension. Our proof of finite dimensionality of the attractors for (H γ S γ t ) is based on the following assertion (see [24] and also [21,22] which contain other versions of the theorem stated below).section 7. Let X be a Banach space and M be a bounded closed set in X. Assume that there exists a mapping V : M → X such that M ⊆ V M and also for any v 1 , v 2 ∈ M , where 0 < η < 1 and K > 0 are constants (a seminorm n(x) on X is said to be compact iff for any bounded set B ⊂ X there exists a sequence Then M is a compact set in X of a finite fractal dimension.Moreover, we have the estimate where m 0 (R) is the maximal number of pairs (x i , y i ) in X × X possessing the properties Asymptotic behavior of von Karman thermal plates 4.1.Exponential decays to a single equilibrium.We begin by recalling uniform stability results in the case when the attractor is trivial and consists just of one point.Wlog we assume that the only equilibrium is zero, so we take F 0 = 0, p = 0.In that case we have section 8. Let F 0 = 0, p = 0. Then the energy of the nonlinear plate decays to zero exponenntially, with the rates independent on 0 ≤ γ ≤ 1.This is to say there exists constant ω > 0 such that where the energy functional E γ (u, u t , θ) is given by (11). Exponential decay rates presented in theorem 8 were established in [2] -for the linear case and in [4,5] -for the nonlinear case.We also note that the same result holds for "free" boundary conditions-though the proof is much more technical [3].In the case γ = 0 thermal plates with hinged boundary conditins have been known for some time [41,38] to be exponentially decaying. 
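The decay estimate displayed in Theorem 8 was lost in extraction; in the references cited for this result it takes the standard exponential form, reconstructed here as a sketch rather than the authors' exact statement:

E_γ(u(t), u_t(t), θ(t)) ≤ C E_γ(u_0, u_1, θ_0) e^{−ωt},   t > 0,

with constants C, ω > 0 independent of 0 ≤ γ ≤ 1.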
Other related results on exponential stability of nonlinear thermal plates can be found in [51,8,4,5] 4.2.Global Attractors.Our main results on global attractors for dynamical systems (H γ , S γ t ) with 0 ≤ γ ≤ 1 are formulated below.section 9 (Compact Attractors).For every 0 ≤ γ ≤ 1 the dynamical system (H γ , S γ t ) is gradient and possesses a compact global attractor A γ = M u γ (N ), where M u γ (N ) is unstable manifold emanating from the set N of stationary points.Thus the conclusions of Theorem 3 and Theorem 4 hold true for (H γ , S γ t ).Moreover, • Finite-dimensionality: there exists d 0 > 0 independent of γ such that fractal dimension of A γ in H γ admits the estimate dim • Regularity: any full trajectory {U (t) : t ∈ R} from the attractor possesses the properties and for all t ∈ R, where the both constants R 1 and R 2 do not depend on 0 ≤ γ ≤ 1 and R 1 is also independent of η and α and in the case γ = 0 we additionally have that u(t) 4 ≤ R 2 for t ∈ R); • Upper semi-continuity: the family of the attractors A γ is upper semicontinuous with respect to γ in the sense that for any γ 0 ≥ 0 we have that We note that in the case of isothermal von Karman plate upper semi-continuity of the attractor when γ → 0 was proved in [16]. Our next result relies on Theorem 5 and deals with the case when the set N is finite and every stationary point is hyperbolic.section 10 (Exponential Attractor).Assume that N = {E i : i = 1, . . ., n} is a finite set.Then the conclusions of Corollary 4.1 holds true for the system (H γ, , S γ t ) for every γ ≥ 0 In particular, A γ = ∪ n i=1 M u (E i ).Moreover, if every stationary point E i = (e i ; 0; 0) is hyperbolic in the sense that the equation A 1 w = B (e i )w, where B (u) is Frechet derivative of the mapping B given by (6), has only trivial solutions.Then: • For any U 0 ∈ H γ , there exists an equilibrium point E = (e, 0, 0) ∈ H γ and constants ω > 0 and C U0 > 0 (possibly depending on γ) such that Here A γ is the global attractor, C B and ω are positive constants which may depend on γ. Remark 11.The first statement of Theorem 10 implies that the global attractor is exponential.However this property requires finiteness and hyperbolicity of the set N of equilibria.Whether the dependence of exponential rate of attraction in (27) on γ ≥ 0 could be surpressed, is not known at the present time.We also note that in the general (non-hyperbolic) case one can apply Corollary 2.23 [24] and argument similar given in the proof of Theorem 4.43 [24] to obtain the existence of exponential fractal attractor (inertial set) with an uniform (with respect to γ) estimate for the dimension.For details concerning a general notion of an exponential fractal attractor we refer to the monograph [29]. Remark 12.If we compare (28) with the result on the dimension from Theorem 9, then we obtain that max E∈N ind (E) can be estimated from above by a constant independent of γ. Remark 13.We note that the present treatment does not rely on analyticity of the semigroup associated with the model when γ = 0.All the estimates obtained for the size and the dimension of the attractor are independent on γ ≥ 0. This was possible to achieve for both simply supported and clamped boundary conditions.However, in the case of free boundary conditions, the situation is more complicated. 
To our best knowledge, there are no appropriate estimates -independent on γ even in the linear case.Nevertheless, the methods of the paper [26] provide all the results on attractors for each value of the parameter (γ > 0 and γ = 0).In the case γ = 0, critical use of the analyticity (see, e.g., [54]) of the semigroup will have to play the role.How to make these estimates (in the case of free boundary conditions) uniform with respect to γ is an open problem. Inertial Manifolds. For plates with hinged boundary conditions and special geometry of the domain Ω one can prove [19] existence of inertial manifolds.We begin by recalling definition of inertial manifold. Definition 13.1.Let M be a finite-dimensional surface in H of the following structure: M ≡ {p + Φ)p), p ∈ PH, Φ : PH → (I − P)H} (29) where P is a finite dimensional projector and Φ is a Lipschitz continuous mapping.Then, M is said to be an inertial manifold for the dynamical system (S t , H), iff (i) the surface M is invariant under the flow, (ii) M is exponentially attracting. In the case of locally Lipschitz nonlinearities, a locally invariant manifold is relevant.This means that the invariance property is restricted to some ball in H. Definition 13.2.The Lipschitz surface M is said to be locally invariant inertial manifold, if it is exponentially attracting and, moreover, there exists R > 0 such that the ball B R in H is absorbing, and M is locally invariant in B R .This is to say, for all u ∈ B R ∩ M, S t u ∈ B R for t ∈ [0, T ], we have that S t u ∈ M, t ∈ [0, T ]. The general theory of inertial manifolds was started with the paper [32] and has been developed and widely studied for deterministic systems by many authors (see, e.g., the monographs [17,28,65] and the references therein).All known results concerning existence of inertial manifolds require some gap condition on the spectrum of the linearized problem. In the case when Ω is a rectangle, γ = 0, and the boundary conditions associated are hinged, an existence of inertial manifold has been established in [19].This result is reported below. We recall that the abstract form of the thermoelastic system with hinged boundary conditions is written as u tt − αAθ + A 2 u = B(u), θ t + ηAθ + αAu t = 0 (30) An important role in this result is played by the properties of the roots of characteristic equation This equation has one positive root z 1 and the two remaining, z 2 and z 3 , are complex conjugates.section 14.Consider (30) where Ω = (0, l 1 ) × (0, l 2 ) with p ∈ L 2 (Ω) and F 0 ∈ W 2,∞ (Ω).We assume is rational , or else α is sufficiently large. Then the flow S t corresponding to (30) and defined on H = D(A) × L 2 (Ω) × L 2 (Ω) possesses a locally invariant inertial manifold. The proof of Theorem 14, given in [19], is based on spectral analysis of the linear problem.The key element is to show that certain gap condition between eigenvalues separating stable and unstable manifolds is satisfied.To accomplish this, number theoretic properties are exploited.Application of these necessitates imposition of geometric conditions listed in the theorem.Whether the same result holds in a broader context (e.g., for the case γ > 0, and/or for non-rectangle domains, or else with other boundary conditions, etc.) remains an open problem.
7,261.8
2007-06-25T00:00:00.000
[ "Physics", "Engineering" ]
Bayesian Hierarchical Copula Models with a Dirichlet–Laplace Prior : We discuss a Bayesian hierarchical copula model for clusters of financial time series. A similar approach has been developed in recent paper. However, the prior distributions proposed there do not always provide a proper posterior. In order to circumvent the problem, we adopt a proper global–local shrinkage prior, which is also able to account for potential dependence structures among different clusters. The performance of the proposed model is presented via simulations and a real data analysis. Introduction There is a large body of literature with respect to hierarchical model settings. The concept to pull the mean of a single group towards the mean across different groups can be found at least in Kelley [1]. Tiao and Tan [2] and Hill [3] consider the one-way random effects model and they discuss a Bayesian approach for the analysis of variance because the frequentist unbiased estimator of the variance of random effects could be negative. For the same model, Stone and Springer [4] discuss and resolve a paradox that arises with the use of Jeffreys' prior. The foundation for the Bayesian hierarchical linear model is established in Lindley and Smith [5]. More recently, Gelman [6] discuss a review on prior distributions for variance parameters in the hierarchical model. More recently, Zhuang et al. [7] introduced a hierarchical model in a copula framework; they suggest using, for the variance parameters of two different priors, (i) the standard improper prior for scale parameters, which is proportional to σ −2 , or (ii) a vaguely informative prior, say an inverse gamma density with both parameters equal to a small value. However, both the above proposals might be impractical: in the first case, the posterior is simply not proper (as we show in the Appendix A); in the second case, the use of small parameters of the inverse Gamma priors simply hides the problem without actually solving it; see for example Berger [8]. Hobert and Casella [9] also provide another review on the effect of improper priors in the Gibbs sampling algorithm. In this paper, we propose a Bayesian hierarchical copula model using a different prior. In particular, we adopt a global-local shrinkage prior. These prior distributions naturally arise in a linear regression framework with high dimensional data and where a sparsity constraint is necessary for the vector of coefficients. Several different global-local shrinkage families of priors have been proposed: Park and Casella [10] and Hans [11] discuss the Bayesian LASSO; Carvalho et al. [12] introduce the Horseshoe prior, Armagan et al. [13] propose a Generalized Double Pareto prior. Here, we will use a Dirichlet-Laplace prior, proposed in Bhattacharya et al. [14], with a slight modification; while in a regression framework, it is natural to adopt a prior that shrinks the parameters towards zero, this is not the case for our hierarchical copula model, where the zero value does not have a Stats 2022, 5 particular interpretation in the model. For this reason we need to introduce a further level of hierarchy, assuming a prior distribution on the location of the shrinkage point. The rest of this paper is organized as follows: The next section is devoted to illustrating the statistical model and the prior distribution, highlighting the differences with the approach described in Zhuang et al. [7]; we conclude the section with a description of the sampling algorithm. 
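To illustrate the location-shifted Dirichlet-Laplace construction described above, the following Python sketch draws the cluster-level parameters from the prior. The hierarchy (weights from a symmetric Dirichlet, a global scale from a Gamma with rate 1/2, and Laplace draws with scales phi_j * tau) follows Bhattacharya et al. [14], while the standard logistic prior on the shrinkage location xi and the default a = 1 are taken from the description later in the paper; treat the details as an assumption-laden sketch rather than the authors' exact specification.

import numpy as np

rng = np.random.default_rng(0)

def sample_dl_prior(m, a=1.0, rng=rng):
    # phi: local weights on the probability simplex, one per cluster
    phi = rng.dirichlet(np.full(m, a))
    # tau: global scale; Gamma with shape m*a and rate 1/2 (i.e. scale 2)
    tau = rng.gamma(shape=m * a, scale=2.0)
    # xi: shrinkage location, standard logistic as implied by the prior specification
    xi = rng.logistic(loc=0.0, scale=1.0)
    # gamma_j | phi, tau, xi ~ Laplace(location xi, scale phi_j * tau), one draw per cluster
    gamma = rng.laplace(loc=xi, scale=phi * tau)
    return gamma, xi, phi, tau

gamma, xi, phi, tau = sample_dl_prior(m=5)

With a small value of a, most of the prior mass for the gamma_j concentrates near the common location xi, which is exactly the shrinkage-towards-a-learned-location behaviour the model relies on.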
In the third section, we perform a simulation study in order to compare the mean square error of the estimates produced by our model and compare them with a standard maximum likelihood approach. Then, we reconsider a dataset discussed in Zhuang et al. [7] and compare the results of the two approaches. We conclude with another illustration of the model in the problem of clustering financial time series. Likelihood and Priors Distributions Copula representation is a way to recast a multivariate distribution in such a way that the dependence structure is not influenced by the shape, the parametrization, and the unit of measurement of the marginal distributions. Their applications in statistical inferences and a review on the most popular approaches can be found in Hofert et al. [15]. In this paper we will consider several different parametric forms of copula functions: In particular, in the bivariate case, we will use the standard Archimedean families, namely the Joe, Clayton, Gumbel, and Frank copulae. For more than two dimensions, we will concentrate on the use of the most popular elliptical versions, namely the Gaussian and Student's t copulae. Since the main objective of the paper is the clusterization of the dependence structure, for the sake of simplicity and without a loss of generality , we will assume that all marginal distributions are known or, equivalently, their parameters have been previously estimated. In this way, we can directly work with the transformed variables: Let c i (·|ψ i ) be the generic copula density function associated with the i-th group . The statistical model can be stated as follows: where m denotes the number of groups or clusters. Set the following: and assume the following. In the previous expressions, b i and B i , respectively, denote the lower and the upper bound of the parameter space of the corresponding ψ i , and γ i is the mapping of ψ i into the real axis; d i is the dimension of i-th group, and a is a hyperparameter, which we typically set to 1, although different values can be used. In general, the Archimedean copulae are parametrized in terms of Kendall's Tau, for which its range of values has been restricted to (0, 1) for the Clayton, Joe, and Gumbel copulae, while it is set to (−1, 1) for the Frank copula. In the elliptical case, the Gaussian copula is parametrized in terms of the correlation coefficient ρ, which ranges in (−1, 1); finally, Student's t copula has the additional parameter ν, and that is the number of degrees of freedom: A discrete uniform prior on {1, 2, . . . , 35} has been used here. When dimension d of the specific group is larger than two, we restrict the analysis to elliptical copulae with an equi-correlation matrix: in that case, it is well known that the range of the correlation parameter is (−1/(d − 1), 1). Let U be entire observed sample and let U ijk be the k-th observation of i-th component in the j-th group, and let n j be the number of observation in the j-th group. The posterior distribution on the parameter vector (γ, ξ, α, τ) is then described as follows: where γ = (γ 1 , γ 2 , . . . , γ m ) and α = (α 1 , α 2 , . . . , α m ). The complex form of the posterior distribution requires the use of simulation based methods of inference. In particular, we will adapt the algorithm of Bhattacharya et al. [14] with a minor modification for the updates of γ and the shrinkage location ξ. Following,Bhattacharya et al. [14], we introduce a vector β = β 1 , β 2 , . . . 
, β m ∈ R m in order to have a latent variable representation of the γ prior; then, the following is obtained. Here, we briefly describe the algorithm. Start the chain at time 0 by drawing a sample from the prior. At time t, we use the following updating procedure: 1. In previous statements, Cauchy(a, b) denotes a one-dimensional Cauchy distribution with location a and scale b, while GIG(p, a, b) is the generalized inverse Gaussian distribution with the following density function. Notice that IG(a, b) is the inverse Gaussian distribution, and it is known that Finally, δ γ and δ ξ are scalar tuning parameters. In the case of the Student's t copula, we need to add another step between stride 1 and 2 in order to update ν = (ν 1 , ν 2 , . . . , ν m ): Compute the following. Prior Distribution of ξ The choice of the prior distribution for the shrinkage location ξ needs some explanation. First of all, notice that, according to our prior specification, . . , m} implies a standard logistic density for ξ. Previous Work Apart form the prior specification, the model described in previous sections is the one proposed by Zhuang et al. [7]. We restrict our discussion to the case where each copula expression has one parameter only. Their prior can be stated as follows. There is no unique choice for the distributions of (σ 2 , λ, δ), although the authors suggest using weakly informative priors, for example, inverse gamma densities with small hyperparameters values or, as an alternative, an objective prior: for example, an improper uniform prior. However, one can prove that, in the second case, the posterior distribution cannot be proper no matter what the sample size is. We show this result in Appendix A. When the posterior distribution is improper, the resulting summary statistics are meaningless. In fact, the Markov Chain implied by the MCMC does not have a limiting distribution so the Ergodic theorem does not hold and the posterior is completely useless. Moreover, even the first solution is not feasible. In fact, when an improper prior produces an improper posterior, using a vague proper prior can typically hide-not solve-the problem. In these cases, in fact, as shown in Berger [8] (p. 398), the use of a vague prior approximating an improper prior typically concentrates the posterior mass on some boundary of the parameter space. Simulation Study We compare the performance of our approach with the results based on a maximum likelihood approach in a simulation study. We will use a Student's t copula with an equi-correlation matrix and set the number of groups m equal to five. We repeat the procedure 100 times; at iteration j for the i-th group, we sample the true value γ T ij from a standard normal distribution, the degrees of freedom ν T ij are sampled from the prior distribution, and the dimensions d ij of the groups are sampled from the uniform discrete distribution in {1, 2, . . . , 5}. Given the parameters and dimensions of the groups, we sample 20 observations for each group. In the maximum likelihood framework, we estimate the following: and compute the standard errors. In a Bayesian framework, we use the posterior mean as a point estimate, obtained from the use of the MCMC algorithm described above. We ran six independent chains of 2.5 × 10 5 scans, discarded the first 5 × 10 4 as a burn-in, and finally computed theγ Bay ij via the sample mean of simulation outputs for all i ∈ {1, . . . , 5}. As a tuning parameters, we set δ γ = 10 −3 and δ ξ = 10 −1 . Then, we compute the following. 
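The quantity announced by "Then, we compute the following" did not survive extraction; it is presumably the mean square error of each estimator over the 100 replications. A minimal Python sketch of that comparison, with variable names invented for illustration:

import numpy as np

def mse(estimates, truth):
    # estimates, truth: arrays of shape (n_replications, n_groups) holding the
    # estimated and true gamma values; returns the average squared error
    estimates = np.asarray(estimates, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.mean((estimates - truth) ** 2))

# mse_bayes = mse(gamma_hat_bayes, gamma_true)
# mse_mle   = mse(gamma_hat_mle, gamma_true)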
Comparison are performed in terms of the corresponding mean square errors. Real Data Applications This section is devoted to the implementation of the method in two different applications. The first one is the same as in Zhuang et al. [7] and we include it for comparative purposes; to this end, we quantify the goodness of fit of the model using a predictive approach based on the conditional version of the Widely Applicable Information Criterion, WAIC, in a hierarchical setting, as discussed in Millar [16]. The second one deals with clustering financial time series. Column Vertebral Data We apply our model to the Column Vertebral Data, available at the UCI Machine Learning Repository. It consists of 60 patients with disk hernia, 150 subjects with spondylolisthesis, and 100 healthy individuals; data are available for the following variables: angle of pelvic incidence (PI), angle of pelvic tilt (PT), lumbar lordosis angle (LL), sacral slope (SS), pelvic radius (PR), and the degree of spondylolisthesis (DS). As in Zhuang et al. [7], we adopt the generalized skew-t distribution for the marginals, use a maximum likelihood estimator in order to calibrate the parameters and then transform data via the fitted cumulative distribution function. Computations were performed using the R package sgt available on CRAN. Table 2 reports the values of fitted parameters for the marginals. Following Zhuang et al. [7], we consider the same parametric copulae for the bivariate distributions of the features of interest, and for each of these, we construct our Bayesian hierarchical copula model for three groups of subjects. We run six independent chains of 2.5 × 10 6 simulations and discard the first 5 × 10 5 . We also set δ γ = 10 −3 and δ ξ = 10 −1 . We did not report any convergence issues, and the multiple Gelman-Rubin test scores for each of the six implemented models Gelman [17] were very close to the optimal value 1. In terms of the goodness of fit, we have computed the WAIC index for all six models. Our findings is that the most significant relation is the one between PI and PT. Table 3 compares the results of Zhuang et al. [7] (model A) with our ones (model B). The main difference between the results obtained with the two methods is related to the posterior uncertainty quantification. Credible intervals obtained with model B are systemically larger than those obtianed with model A. Our feeling is that it depends on the fact that results in model A are obtained by running a chain where some hyperparameters are fixed to some estimated values, as explained in Zhuang et al. [7]. Fixing values of the hyperparameters eliminates a critical source of variation, inducing shrinkage in credible intervals size. For the ease of comparisons, we follow Zhuang et al. [7] and report the results not in terms of parameter γ but rather according the natural parameter of each copula, that is, ρ for the Gaussian copula and θ for the Archimedean ones. Financial Data Application Grouping financial time series is important for diversification purposes; a portfolio manager should avoid investing in instruments with a high degree of positive dependence, and clustering procedures allow the construction of groups according to some specific risk measure. In this way, financial instruments that belong to the same group will show a certain degree of association; however, the strength of dependence within groups may well be different in different groups. 
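The next paragraph defines the lower tail coefficient and the dissimilarity measure used to feed the clustering algorithm; as a rough preview of how such quantities can be computed, here is a hedged Python sketch. The empirical estimator below (a simple ratio of the empirical copula at a small threshold) and the -log dissimilarity are common choices and may differ in detail from the estimator of [19] and the measure of De Luca and Zuccolotto [18] used by the authors; the Student's t copula expression is the standard closed form for its tail coefficient.

import numpy as np
from scipy.stats import t as student_t

def empirical_lower_tail(u1, u2, q=0.05):
    # lambda_L is approximated by C_hat(q, q) / q for a small threshold q,
    # where C_hat is the empirical copula of the pseudo-observations u1, u2 in (0, 1)
    u1, u2 = np.asarray(u1), np.asarray(u2)
    c_qq = np.mean((u1 <= q) & (u2 <= q))
    return c_qq / q

def t_copula_lower_tail(rho, nu):
    # closed-form lower tail coefficient of a bivariate Student's t copula:
    # lambda_L = 2 * T_{nu+1}( -sqrt((nu + 1) * (1 - rho) / (1 + rho)) )
    arg = -np.sqrt((nu + 1.0) * (1.0 - rho) / (1.0 + rho))
    return 2.0 * student_t.cdf(arg, df=nu + 1.0)

def dissimilarity(lam):
    # one common tail-based dissimilarity; the authors' exact definition
    # is not shown in the extracted text
    return -np.log(lam)

A matrix of such pairwise dissimilarities is what a complete-linkage hierarchical clustering routine (for example scipy.cluster.hierarchy.linkage with method="complete") would consume, mirroring the preliminary clustering step described below.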
It is then important to assess the strength of the association for each single cluster, and a method to perform this is to use a hierarchical structure, such as the one discussed in this paper. As a risk measure, we consider the so-called tail index, which measures the strength of dependence between two variables when one of them takes extremely low values. Following De Luca and Zuccolotto [18], we construct a dissimilarity measure based on the lower tail coefficient. Let (Y 1 , Y 2 ) be a bivariate random vector; the lower tail coefficient λ L of (Y 1 , Y 2 ) is defined as follow: or, equivalently, where C(·, ·) is the cumulative distribution function of the copula associated to (Y 1 , Y 2 ). In order to estimate λ L , we use the empirical estimator discussed in [19]: whereĈ(·, ·) is the empirical copula, and n is the sample size. The dissimilarity measure is then defined as follows. The preliminary clustering procedure has been implemented using a complete linkage method. Notice that a bivariate lower tail coefficient is not the unique method for modeling dependence on extreme low values: Durante et al. [20] proposed a conditioned correlation coefficient estimated using a nonparametric approach; Fuchs et al. [21] analyzed dissimilarity measure applicable to a multivariate lower tail coefficient. We consider the "S&P 500 Full Dataset" available at Kaggle: It contains more relevant information for the components of S&P 500. We take the daily closing prices from 5 June 2000 to 5 June 2020 and discard instruments without a complete record for this period. Then, we restrict our analysis to 379 components. For all of them, we computed the log-returns by taking log-differences and filter data by fitting; for each time series, an ARMA(1,1)GJR-GARCH(1,1) model with Student's t innovations was used; then, we extracted residuals and transformed them via the fitted cumulative distribution function in order to obtain pseudo-data. Computations were performed using the CRAN package rugarch. Hence, we compute the empirical estimator of the lower tail coefficient for any possible pair and the dissimilarity measure associated and use them to feed the clustering algorithm. Due to computational complexities, we used the coarsest partition under the constraint that the largest group must have at most 10 components. We obtained 30 groups with dimensions of more than one and discarded instruments that belong to groups with only one component. The final number of instruments was thus reduced to 93. We ran the MCMC algorithm described above for the 30 clusters, performing 12 independent chains of 10 5 scans and discarding the first 1.5 × 10 4 as they burned in. Tuning parameters were set to δ γ = 10 −6 , δ ξ = 10 −3 . Moreover, in this example, we did not report any convergence issues, and the Gelman-Rubin test score was 1.02. For each scan and for any group, we compute the lower tail coefficient via the following formula: where T ν (·) is the univariate cumulative distribution function of a Student's t random variable with ν degrees of freedom. The copula used in this example was a Student's t copula with an equi-correlation matrix: As a consequence, we obtained a single value for the lower tail coefficient for each cluster. Table 4 reports the results for each pair that belongs to the same group. Finally, we report the estimation results. Conclusions We discussed and improved a fully Bayesian analysis for a hierarchical copula model proposed in Zhuang et al. [7]. 
We proposed the use of a proper prior, which is able to induce shrinkage and, at the same time, dependence among different clusters of observations. This prior does not mimic the behavior of an improper prior and is better suited for objectively representing the information coming from the data. Our prior belongs to the large family of global-local shrinkage densities, with an extra stage in the hierarchy due to the absence of a significant shrinkage value; we found this approach to be very effective and useful in the case of parametric copulae depending on a single parameter. In a more general situation, this approach needs to be modified, and this can be easily accommodated. Finally, we presented an application in a financial context, where the goal was to estimate the lower tail coefficient of several financial time series in a parametric way using the Student's t copula. Conflicts of Interest: The authors declare no conflicts of interest. Appendix A Here, we show that the prior proposed in Zhuang et al. [7] leads to an improper posterior. The statistical model consists of m d-dimensional copulae governing different sets of observations U_{1i}, U_{2i}, . . . . Let γ_i = η_i g_i(θ_i); here, η_i is a scaling parameter that can be considered known. One-to-one mapping functions g_i(·) are needed to put all the dependence parameters on the real line. Zhuang et al. [7] made the following assumptions: the hyper-parameters σ_i, λ, and δ^2 are given suitable prior distributions. For the moment, we do not specify the priors and set the following. The next proposition shows that, using standard noninformative priors for scale and location parameters, the resulting posterior will be improper independently of the sample size. Consider only the following terms, and set µ̄ = (1/m) ∑_{i=1}^m µ_i; then, we obtain the following. For any choice of m > 1, π(µ) can be written as follows. Now, we compute the following. Notice that (µ_i − µ̄)^2 is a convex parabolic function of µ_m and, by the Weierstrass theorem, a global maximum exists on every bounded and closed set. By integrating over µ_m, one obtains the following. By the same argument, one can also see that the following holds. A similar argument can be used to prove the following result.
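As a complement to the financial application discussed in this article, here is a minimal sketch (in Python, whereas the paper's computations use R) of the two tail-dependence quantities it relies on: an empirical lower tail coefficient used to build a dissimilarity matrix for complete-linkage clustering, and the parametric lower tail coefficient of a Student's t copula. The variable names, the threshold u, and the 1 − λ dissimilarity are illustrative assumptions, not the exact choices of references [18] and [19].

```python
import numpy as np
from scipy.stats import t
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def empirical_lower_tail(u1, u2, u=0.05):
    """Empirical lower tail coefficient C_hat(u, u) / u from pseudo-data in (0, 1).

    The threshold u is an illustrative choice; the paper relies on the
    estimator of reference [19].
    """
    c_hat = np.mean((u1 <= u) & (u2 <= u))  # empirical copula evaluated at (u, u)
    return c_hat / u

def t_copula_lower_tail(rho, nu):
    """Lower tail coefficient of a bivariate Student's t copula with correlation rho."""
    arg = -np.sqrt((nu + 1.0) * (1.0 - rho) / (1.0 + rho))
    return 2.0 * t.cdf(arg, df=nu + 1.0)

def cluster_by_tail_dependence(pseudo, u=0.05):
    """Complete-linkage clustering from pairwise tail-dependence dissimilarities.

    pseudo: (n_obs, n_assets) array of pseudo-data (PIT-transformed residuals).
    Uses 1 - lambda_hat as an illustrative dissimilarity.
    """
    n_assets = pseudo.shape[1]
    d = np.zeros((n_assets, n_assets))
    for i in range(n_assets):
        for j in range(i + 1, n_assets):
            lam = empirical_lower_tail(pseudo[:, i], pseudo[:, j], u)
            d[i, j] = d[j, i] = 1.0 - lam
    # condensed distance vector expected by scipy's linkage
    return linkage(squareform(d, checks=False), method="complete")
```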
4,758.6
2022-11-01T00:00:00.000
[ "Mathematics" ]
Production and Consumption: Textile Economy and Urbanisation in Mediterranean Europe 1000-500 BCE (PROCON) (Gleba, M et al. 2013, Archaeology International, No. 16 (2012-2013): 54-58, DOI: http://dx.doi.org/10.5334/ai.1602) PROCON is a new project hosted by the UCL Institute of Archaeology, funded by a European Research Council starting grant (No. 312603). The aim of the project is to test the hypothesis that textile production and consumption was a significant driving force of the economy and of the creation and perception of wealth in Mediterranean Europe during the period of urbanisation and early urbanism in 1000–500 BCE. The overarching question to be answered is: To what extent did textile production and consumption define the development of productive and commercial activities of early urban Mediterranean societies in the Iron Age? The past few years have witnessed a major dynamism in the field of archaeological textile research in Europe, as demonstrated by numerous conferences and publications on the topic, as well as the establishment of large-scale interdisciplinary collaborative programmes, such as the Centre for Textile Research (CTR), funded by the Danish National Research Foundation (2005–2015), and the pan-European project ‘Clothing and Identities – New Perspectives on Textiles in the Roman Empire’ (DressID), funded by the European Union Education, Audiovisual and Culture Executive Agency (2007–2012). The impetus created by these projects has provided an important arena for the development of new research techniques and approaches. From this basis, the necessary next step is to lead this growing field into answering some of the fundamental questions of archaeology, where evidence for textiles has hitherto been virtually unexplored. It has been convincingly demonstrated that intensive production and consumption of textiles was at the heart of urbanisation throughout the history of the world. The lords of the Inka state extracted heavy tribute of cloth from its peasants, which in turn clothed and sheltered the army, dressed its lords and citizens and filled its storehouses. In 18th-century England, the Industrial Revolution was fuelled by the desire of the nobility and aspiring middle classes to invest in cloth and clothing, with its chance for self-promotion and political investiture. In the ancient past a similar pattern is recognisable in the emergence of the Bronze Age urban state centres of Mesopotamia and the Aegean. There, early written state archives provide abundant evidence of the importance of textile production and consumption in the formation of the political systems synonymous with urbanisation. Archaeologists have focused particular attention on the floruit of urbanism in the 1st millennium BCE in ancient Greece, Italy and Spain (Osborne, 2005).
Yet, despite the promising early evidence for the influence of textiles in the Bronze Age eastern Mediterranean, the role of textiles in the formation of these Iron Age Mediterranean urban centres is largely unexplored. The focus of the PROCON project is on the significance of the production and consumption of textiles for the development of city-states (as clothing, elite regalia, trade and exchange items) and the implications of this for other aspects of the economy, such as the use of farm land, labour resources and the development of urban lifestyle. This aim is achieved by addressing the following research questions: how was this production and consumption organised; where did the various resources come from; what were the technologies used; and what was the level of organisation? Who was involved in textile production and consumption? What was the quality and quantity of textiles produced, and how did they change over time in response to urban consumer demands? In exploring these questions the project not only follows a functional approach, but also considers the value ascribed to these goods and the customs that came with them. The questions outlined above lead to the following objectives for the PROCON project: 1. To evaluate the availability and the degree of exploitation of the various resources for textile production; 2. To assess the technological and organisational parameters of textile production; 3.
To explore the consumption of textiles as clothing and utilitarian goods, and to trace the increased demand both for clothing, through changes in fashion and in wealth accumulation, and for sail cloth, with increased mobility; 4. To identify the modes, means and directions (through time and space) of the resource, of the technology and of textile consumption and exchange; 5. On the basis of the above, to provide a new reading of economic history for the period and area under consideration that sees textile production and consumption as a major economic factor during the urbanisation of Early Iron Age Mediterranean Europe. Using established and novel approaches to textile research, the project results aim to change the landscape of urbanisation research by providing new data sets demonstrating textile production and consumption as major economic and social factors. This project is unique in that it takes developments in a specialist research field (textile archaeology) and applies them to modelling the dynamics behind the broader phenomenon of urbanisation in Europe. In terms of scale, project PROCON is concerned with broad patterns and adopts a Mediterranean-wide rather than a regional perspective, along with recent scholarship on the 1st-millennium BCE Mediterranean (Vlassopoulos, 2007;Riva, 2010). In doing so, the project explores similarities and differences between the different regions as they followed their trajectories towards urbanisation. The economy of textile production is furthermore conceived as a network that stimulated the mobility of goods, people, ideas and technologies in the context of developing urbanisation. The project structure thus encompasses four research strands within the operational sequence (chaîne opératoire) of textile economy: Resources; Production; Product; and Consumption and Exchange (Fig. 1). The project is highly interdisciplinary and will draw on methods from the fields of archaeology, biology, geology, chemistry, art history and classics, examining archaeological textiles (Figs 2-5), textile tools (Fig. 6), palaeoenvironmental remains, iconographic and written sources. The planned research will result in a major step forward in our understanding of the economic and social role of textiles in ancient societies.
1,925
2013-10-24T00:00:00.000
[ "History", "Economics" ]
MicroRNA-deficient mouse embryonic stem cells acquire a functional interferon response When mammalian cells detect a viral infection, they initiate a type I interferon (IFNs) response as part of their innate immune system. This antiviral mechanism is conserved in virtually all cell types, except for embryonic stem cells (ESCs) and oocytes which are intrinsically incapable of producing IFNs. Despite the importance of the IFN response to fight viral infections, the mechanisms regulating this pathway during pluripotency are still unknown. Here we show that, in the absence of miRNAs, ESCs acquire an active IFN response. Proteomic analysis identified MAVS, a central component of the IFN pathway, to be actively silenced by miRNAs and responsible for suppressing IFN expression in ESCs. Furthermore, we show that knocking out a single miRNA, miR-673, restores the antiviral response in ESCs through MAVS regulation. Our findings suggest that the interaction between miR-673 and MAVS acts as a switch to suppress the antiviral IFN during pluripotency and present genetic approaches to enhance their antiviral immunity. Introduction Type I interferons (IFN) are crucial cytokines of the innate antiviral response. Although showing great variation, most mammalian cell types are capable of synthesizing type I IFNs in response to invading viruses and other pathogens. Once type I IFNs are secreted, they activate the JAK-STAT pathway and production of interferon-stimulated genes (ISGs) in both the infected and neighbouring cells to induce an antiviral state (Ivashkiv and Donlin, 2015). Two major signalling pathways are involved in IFN production in the context of viral infections. The dsRNA sensors RIG-I and MDA5 initiate a signalling cascade that signals through the central mitochondrial-associated factor MAVS, ultimately activating Ifnb1 transcription. The cGAS/STING pathway is activated upon detection of viral or other foreign DNA molecules and uses a distinct signalling pathway involving the endoplasmic reticulum associated STING protein (Chan and Gack, 2016). Despite its crucial function in fighting pathogens, pluripotent mammalian cells do not exhibit an IFN response. Both mouse and human embryonic stem cells (ESCs) (Wang et al., 2013;Chen et al., 2010) as well as embryonic carcinoma cells (Burke et al., 1978) fail to produce IFNs, suggesting that this function is acquired during differentiation. The rationale for silencing this response is not fully understood but it has been proposed that in their natural setting, ESCs are protected from viral infections by the trophoblast, which forms the outer layer of the blastocyst (Delorme-Axford et al., 2014). ESCs exhibit a mild response to exogenous IFNs, suggesting that during embryonic development, maternal IFN could have protective properties (Hong and Carmichael, 2013;Wang et al., 2014). In mouse ESCs, a Dicer-dependent RNA interference (RNAi) mechanism, reminiscent to that of plants and insects, is suggested to function as an alternative antiviral mechanism (Maillard et al., 2013). And in humans, ESCs intrinsically express high levels of a subgroup of ISGs in the absence of infection, bypassing the need for an antiviral IFN response (Wu et al., 2018;Wu et al., 2012). All these suggest that different antiviral pathways are employed depending on the differentiation status of the cell. 
Silencing of the IFN response during pluripotency may also be essential to avoid aberrant IFN production in response to retrotransposons and endogenous retroviral derived dsRNA, which are highly expressed during the early stages of embryonic development and oocytes (Ahmad et al., 2018;Grow et al., 2015;Macia et al., 2015;Peaston et al., 2004;Macfarlan et al., 2012). Furthermore, exposing cells to exogenous IFN induces differentiation and an anti-proliferative state, which would have catastrophic consequences during very early embryonic development (Borden et al., 1982;Hertzog et al., 1994). All these observations support a model in which cells gain the ability to produce IFNs during differentiation. One particular class of regulatory factors that are essential for the successful differentiation of ESCs are miRNAs (Greve et al., 2013). These type of small RNAs originate from long precursor RNA molecules, which undergo two consecutive processing steps, one in the nucleus by the Microprocessor complex, followed by a DICER-mediated processing in the cytoplasm (Treiber et al., 2018). The Microprocessor complex is composed of the dsRNA binding protein DGCR8 and the RNase III DROSHA which are both essential for mature miRNA production (Gregory et al., 2004;Lee et al., 2003). In addition, mammalian DICER is also essential for production of siRNAs (Bernstein et al., 2001). The genetic ablation of Dgcr8 or Dicer in mice blocks ESCs differentiation suggesting that miRNAs are an essential factor for this, as these are the common substrates for the two RNA processing factors (Wang et al., 2007;Kanellopoulou et al., 2005). In this study, we show that miRNAs are responsible for suppressing the IFN response during pluripotency, specifically to immunostimulatory RNAs. We found that miRNA-deficient ESCs acquire an IFN-proficient state, are able to synthesize IFN-b and mount a functional antiviral response. Our results show that miRNAs specifically downregulate MAVS (mitochondrial antiviral signalling protein), an essential and central protein in the IFN response pathway. In agreement, ESCs with increased MAVS expression or knock-out of the MAVS-regulating miRNA miR-673, resulted in an increased IFN production and antiviral response. Our results support a model where the MAVS-miR-673 eLife digest Living cells are under constant attack from disease-causing agents, such as viruses and bacteria. As a result, they have evolved various protective mechanisms to fight off these agents. One of the most important ways that an animal cell protects itself from infection is through the interferon response, which warns the cell of approaching viruses, prompting it to prepare to defend itself. Virtually all healthy cells have an active interferon response, except for stem cells, which have switched off this defensive mechanism, for unknown reasons. This makes stem cells more susceptible to infections. Stem cells are specialized cells that play an essential role in developing the early embryo. The two defining characteristics of these cells -their ability to divide indefinitely, and develop into all cell types -offers great therapeutic potential, as they can be used to 'replace' damaged cells and tissues. However, without an interferon response, stem cells are likely to become infected when moved into a new environment, counteracting their therapeutic benefits. Now, Witteveldt et al. 
investigate how stem cells turn off this viral defence mechanism, and whether turning it back on will affect their ability to divide and form new tissues. Using stem cells taken from the embryos of mice, Witteveldt et al. found that the interferon response is turned off by specific small molecules of RNA. These small RNA molecules block a protein in the pathway that recognizes viruses and activates a defence. Genetically engineering stem cells to be deficient in these small RNA molecules led to an increased resistance to viral infections. Importantly, modifying stem cells in this manner had no obvious impact on the characteristic traits that give stem cells their therapeutic potential. Temporarily increasing the interferon response of stem cells as they are moved into a new environment could potentially make stem cell treatments more effective. However, more work is needed to investigate whether the same approach can be applied to human cells, and determine what negative effects may be associated with turning on the interferon response. interaction acts as a switch to suppress the IFN response and consequently virus susceptibility during pluripotency. Results ESCs fail to express IFN-b in response to viral DNA/RNA There are two major pathways for sensing intracellular viral infections and consequent activation of the IFN response in cells. One senses dsRNA, usually originating from RNA viruses, with MAVS as a central factor, and the second senses dsDNA, from DNA-and retroviruses signalling through STING (McFadden et al., 2017). It has been shown that mouse ESCs do not produce type I IFNs in response to poly(I:C) transfection, a synthetic analogue of dsRNA classically used to mimic viral RNA replication intermediates (Wang et al., 2013). In contrast, it is still unknown how mouse ESCs respond to immunostimulatory DNA. To study this, two different mouse ESC cell lines (ESC1 and ESC2) were transfected with poly(I:C) and G 3 -YSD, an HIV-derived DNA that stimulates the cGAS/ STING pathway (Herzner et al., 2015). As controls, NIH3T3 fibroblasts and BV-2 microglial cells were included. As expected, the transfection of poly(I:C) did not result in Ifnb1 expression in both ESC lines ( Figure 1A). ESCs also failed to activate Ifnb1 expression upon G 3 -YSD transfection, suggesting that the cGAS/STING pathway was also inactive ( Figure 1B). Similarly, NIH3T3 cells, which have also been previously shown to have a defect in this specific pathway (Cheng et al., 2018), did not express Ifnb1 in response to G 3 -YSD ( Figure 1B). These same cell lines were infected with the (+) ssRNA virus TMEV (Theiler's Murine Encephalomyelitis Virus) and showed that ESCs are at least 30 times more sensitive than NIH3T3 and BV-2 cells, which correlates with the ability of these cell lines to induce Ifnb1mRNA expression ( Figure 1C). The ability of cells to express IFN in response to viruses or immunogenic nucleic acids is assumed to be acquired during differentiation. To test this model, we in vitro differentiated both ESC lines with retinoic acid and determined their ability to respond to poly(I:C). Briefly, embryoid bodies were generated by a hanging droplet method for 48 hr before being cultured in the presence of retinoic acid for 2 or 10 days. Samples from each of these time points were analysed for expression of pluripotency and differentiation markers. 
The pluripotency markers Nanog and Pou5f1 (Oct4) showed a rapid decrease in mRNA expression during differentiation in both the cell lines ( Figure 1-figure supplement 1A), whereas differentiation markers Neurog2, Gata6 and Gata4 showed a gradual increase (Figure 1-figure supplement 1B) confirming successful differentiation of the ESCs. Next, we compared the ability of ESCs (day 0) and retinoic-acid differentiated cells after 10 days (day 10) to express Ifnb1 mRNA in response to poly(I:C), and confirmed that differentiated cells acquired the ability to synthesize Ifnb1 to similar levels to the positive control cell line, BV-2 ( Figure 1D). Dicer-deficient ESCs acquire an active IFN response Given the relevance of RNAi as an antiviral mechanism in mouse ESCs (Maillard et al., 2013), we next asked if ESCs, in the absence of the central factor for RNAi, ICER, would be more susceptible to RNA viruses. Unexpectedly, Dicer -/-ESCs were more resistant to viruses compared to their wildtype counterparts (previously named ESC2) (Figure 2A, left). Similar results were obtained using the (-) ssRNA virus, Influenza A (IAV) (Figure 2A, right). Importantly, mammalian Dicer has a dual function, being essential for both siRNA and miRNA biogenesis. To determine whether these differences in viral susceptibility were due to the activity of Dicer on siRNA or miRNA production, we compared Dicer -/cells with ESCs lacking the essential nuclear factor for miRNA biogenesis, Dgcr8. The absence of Dgcr8 also decreased TMEV and IAV viral susceptibility, suggesting that miRNAs are responsible for suppressing the antiviral response in ESCs ( Figure 2A). Interestingly, Dgcr8 -/cells were more resistant to virus infection than Dicer -/cells, which supports a dual function for DICER by also acting as a direct antiviral factor targeting viral transcripts for degradation by RNAi. To rule out the possibility of morphological differences influencing viral susceptibility, we performed a virus binding and entry assay which showed no differences ( Figure 2-figure supplement 1). Even though ESCs lack an IFN response, we wondered whether the differential resistance to viral infections were the result of abnormal IFN activation due to the absence of miRNAs. To test this hypothesis, we transfected the dsRNA analogue, poly(I:C) and the immunogenic G 3 -YSD DNA in Dgcr8 or Dicer deficient mESCs, and quantified Ifnb1 expression by RT-qPCR and ELISA. ESCs lacking miRNAs (Dgcr8 -/or Dicer -/-) were able to respond to the dsRNA analogue, poly(I:C) and express Ifnb1 mRNA and protein in a dose dependent manner ( Figure 2B and observations, we blocked IFN signalling using the JAK1/2 inhibitor Ruxolitinib before infecting cells with TMEV. As a result we observed no, or a very mild increase in TMEV viral replication in wild-type ESCs, but a significant increase in viral replication in miRNA-deficient ESCs ( Figure 2C and ESCs were also stimulated with exogenous IFN-b and confirmed that mouse ESCs retain the ability to respond to external IFNs, and, importantly, that miRNA deficiency did not alter ISG expression levels, supporting the hypothesis that the miRNA-mediated silencing of the IFN pathway in ESCs occurs upstream of IFN production ( Figure 2D). To verify that the observed results are solely due to the absence of miRNAs, we rescued the knock-out cell lines by reintroducing Dgcr8 and Dicer and observed that these reverted to wild-type viral replication and susceptibility levels ( Figure 2E,F and Figure 2-figure supplement 2E). 
As a control, we confirmed rescue of miRNA production by Northern blot ( Figure 2E,F). miRNAs suppress MAVS expression in ESCs To understand where the IFN pathway is silenced in ESCs we blocked the interferon response at defined points in the pathway and measured viral susceptibility. The inhibitor BX795 blocks TBK1/ IKKe phosphorylation and consequently IRF3 transcriptional activity, whereas BMS345541 is an inhibitor of the catalytic subunits of IKK and thus blocks Nf-kB-driven transcription. Both transcription factors are essential for the expression of Ifnb1 and other pro-inflammatory cytokines and initiation of an antiviral response (Lawrence, 2009;Schafer et al., 1998). Both inhibitors increased viral susceptibility in wild-type cells lines, however, the effect was far greater in the knock-out cell lines ( Figure 3A,B and Figure 3-figure supplement 1A-C), suggesting that miRNAs regulate the interferon pathway upstream Ifnb1 transcription. We next aimed to identify the mechanism by which miRNAs silence IFN expression in ESCs, and analysed the proteomes of Dgcr8 -/and the rescued cell line by mass spectrometry. STRING analyses of the expression profiles revealed significant differences in a number of pathways, including ribosome structure/function, mitochondrial activity and the oxidative phosphorylation pathway, which were downregulated in the absence of miRNAs ( Figure 3C, for complete list see Figure 3-source data 1). Measurement of Rhodamine 123 uptake in mitochondria, as an indirect measure for oxidative phosphorylation activity (Scaduto and Grotyohann, 1999), confirmed lower oxidative phosphorylation activity in the absence of miRNAs (Dgcr8 -/and Dicer -/-) ( Figure 3-figure supplement 1D). A search for differentially expressed proteins involved in the IFN response did not reveal any significant changes except for the Mitochondrial antiviral-signalling protein (MAVS), which in contrast to many other mitochondria-related proteins, was upregulated in the absence of miRNAs. This protein has a central role in the RLR-induced (Rig-I-like receptors) IFN pathway, where activated MDA5 and RIG-I receptors translocate to the mitochondria and bind MAVS to ultimately induce Ifnb1 expression (Kawai et al., 2005). Western blot and qRT-PCR analysis confirmed that MAVS was the only factor consistently expressed to higher levels in both miRNA-deficient cell lines, Dgcr8 -/and Dicer -/-( Figure 3D, lanes 2 and 5, and MAVS acts as a switch for IFN expression To confirm the involvement of miRNAs on MAVS expression, a dual luciferase assay system was used where the 3'UTRs of Mavs, Mda5 and Rig-I were fused to a luciferase reporter gene to compare luciferase activity in wild-type and knock-out ESCs. Only the Mavs 3'UTR showed relatively higher luciferase expression levels in the knock-out lines when compared to the empty plasmid, suggesting that the 3'UTR of Mavs is strongly regulated by miRNAs in ESCs ( Figure 4A). For this reason, a miRNA-resistant form of Mavs, lacking its natural 3'UTR, was overexpressed in wild-type ESCs and infected with TMEV to test if cells regain viral resistance similar to miRNA deficient ESCs ( Figure 4B). A 15-fold decrease in TCID 50 and significant reduction in vRNA levels were found compared to wild-type ESCs ( Figure 4C). MAVS overexpressing cells also regained the ability to produce Ifnb1 after stimulation with poly(I:C) ( Figure 4D). All these experiments show that MAVS is a crucial target for the absence of the IFN response in ESCs. 
miR-673 is crucial to suppress antiviral immunity in ESCs We next aimed to identify the miRNA(s) responsible for the regulation of MAVS in ESCs and selected a number of miRNA candidates based on literature, prediction software and public miRNA expression databases for further investigations. Previous experimental evidence has shown that human MAVS is regulated by miR-125a, miR-125b and miR-22 (Hsu et al., 2017;Wan et al., 2016). However, only miR-125a-5p and miR-125b-5p have conserved binding sites in mouse MAVS. Two additional miRNAs, miR-185-5p and miR-673-5p, were selected based on their DICER and DGCR8dependent biosynthesis pathway, their high expression levels in mouse ESCs and number of predicted binding sites in the Mavs 3'UTR (Tang et al., 2006;Babiarz et al., 2008). We transfected Dgcr8 -/cells with mimics of these miRNAs and measured Mavs mRNA and protein levels by RT-qPCR and western blot, respectively. Results showed reductions in MAVS protein and mRNA levels for all tested miRNAs ( Figure 5A and Figure 5-figure supplement 1A). The infection of miRNAtransfected Dgcr8 -/cells with TMEV resulted in an increase in both susceptibility and viral replication for miR-125a-5p, miR-125b-5p and miR-673-5p, which correlated with the ability of these miRNAs to downregulate MAVS protein levels ( Figure 5B and Figure 5-figure supplement 1B). As an alternative approach, Dgcr8 +/+ cells were transfected with inhibitors to miRNAs miR-125a-5p, miR-125b-5p and miR-673-5p. Western blot analysis showed a clear increase in MAVS protein expression, especially for anti-miR-673-5p ( Figure 5C). Because miR-673-5p showed the largest effect on MAVS protein expression both when depleted and overexpressed, we hypothesize that miR-673 is a crucial miRNA involved on MAVS regulation. We further investigated the role of miR-673-5p in ESCs by creating stable knock-out cell lines for this miRNA by CRISPR/Cas9. Three cell lines were selected based on the genomic deletion and confirmed undetectable expression of miR-673-5p ( Figure 5-figure supplement 2A,B). The absence of miR-673-5p was enough to observe an increase in MAVS expression both at the mRNA and protein levels ( Figure 5D and Figure 5-figure supplement 2C). In addition, we measured miR-673 and MAVS expression levels in the mouse fibroblasts cell line, NIH3T3, which is proficient in producing IFN in response to dsRNA. Mouse fibroblasts had no detectable miR-673-5p, and MAVS protein expression was comparable to miRNA-deficient ESC ( Figure 5D and Figure 5-figure supplement 2B), highlighting the correlation of MAVS expression with the ability of cells to activate Ifnb1 expression in response to immunogenic RNA. Next, miR-673-deficient cell lines were tested for TMEV susceptibility, which showed a consistent decrease in virus replication, similar to that observed in the absence of all miRNAs (Dgcr8 -/-), suggesting this miRNA is essential in regulating the innate antiviral response in ESCs ( Figure 5E). To test the relevance of IFNs on the increased antiviral resistance of miR-673 -/cell lines, we compared their sensitivity to TMEV infections in the presence of the JAK1/2 inhibitor, Ruxolitinib. Whereas inhibition of IFN signalling did not significantly increase the accumulation of viral RNA in wild-type ESCs (Dgcr8 +/+ ), both miR-673-deficient and Dgcr8 -/-ESCs showed a significant increase in viral RNA (a) Transfection of miRNA mimics miR-125a-5p, miR-125b-5p, miR-185-5p and miR-673-5p in Dgcr8 -/cells followed by MAVS western blot. 
MAVS quantification normalized to Tubulin and relative to wild-type is shown at the top (b) Quantification of TMEV replication by qRT-PCR in the same cell lines as in (a) (n = 3) (c) Western blot analysis of MAVS expression in Dgcr8 +/+ cells transfected with antagomirs against miR-125a-5p, miR-125b-5p and miR-673-5p. MAVS quantification normalized to Tubulin and relative to wild-type is shown at the top (d) Figure 5 continued on next page ESC differentiation with retinoic acid, expression of miR-673-5p became silenced, confirming previous results obtained with alternative differentiation protocols (Knelangen et al., 2011;Zhao et al., 2014;Hadjimichael et al., 2016;Yang et al., 2016), and suggesting that the expression levels of this miRNA negatively correlate with the ability of cells to activate the IFN response ( Figure 5G). Collectively, these data show that the IFN response in mouse ESCs is silenced by the post-transcriptional control of Mavs expression by miR-673-5p. Discussion Several studies suggest that the pluripotent state of a cell is incompatible with an active IFN response (Guo et al., 2015). Both mouse and human stem cells fail to synthesize IFNs in response to dsRNA (Wang et al., 2013;Chen et al., 2010), implying that this characteristic is acquired during differentiation (D'Angelo et al., 2016). Embryonic carcinoma cells, which are still pluripotent, also fail to produce IFNs in response to viral RNA mimics (Burke et al., 1978). In agreement, reprogramming of somatic cells to iPSCs (induced pluripotent stem cells) leads to a loss of IFN response, suggesting the presence of regulatory mechanisms able to switch this antiviral pathway on or off between the differentiated and pluripotent states (Chen et al., 2012). Another feature of pluripotent cells is their attenuated response to exogenous type I IFNs. Mammalian pluripotent stem cells, iPSCs and embryonic carcinoma cells exhibit an attenuated production of ISGs upon type I IFN stimulation (Hong and Carmichael, 2013;Irudayam et al., 2015;Wang et al., 2014;Burke et al., 1978). Why these activities are supressed is still not understood, but it has been hypothesized that type I IFN stimulation could impair their self-renewal capacity, since these compounds are well-known antiproliferative agents and inducers of cell death (Bekisz et al., 2010). Indeed, type I IFNs are capable of inhibiting tumour cell division in vitro and are currently employed as an adjuvant to treat several types of cancers, acting as stimulants of the innate immune cellular response (Bracci et al., 2017). Mouse ESCs express low levels of the RNA sensors TLR3, MDA5 and RIG-I, which could explain their inability to respond to dsRNA although no functional studies support this model so far (Wang et al., 2013). Our data shows an alternative scenario in which MAVS is the key factor for controlling the IFN response. The overexpression of a miRNA-resistant form of MAVS in wild-type ESCs is enough to enable dsRNA-mediated IFN activation, suggesting that dsRNA sensing is not a limiting step in the IFN pathway in ESCs. Regulation of MAVS alone proves to be an efficient mechanism to block dsRNA induced IFN expression compared to suppressing individual dsRNA sensors. The observation that miRNAs only suppress RNA-mediated IFN activation, but not the DNAmediated pathway, leads us to speculate about the reasons for silencing this specific response during pluripotency. 
Embryonic stem cells, and also earlier stages of embryonic development are characterized by high expression levels of specific retrotransposons (non-LTR) and endogenous retroviruses (LTR), which are a hallmark of their pluripotent state. This is in contrast to most somatic cell types that silence their expression (Yin et al., 2018). These repetitive elements produce cytoplasmic RNA molecules as an intermediate for mobilisation, which can be accidentally recognised as immunogenic or non-self RNAs, as it has been previously shown for the human non-LTR retroelement Alu in the context of Aicardi-Goutires syndrome or for endogenous retroviruses (Ahmad et al., 2018;Chiappinelli et al., 2015;Roulois et al., 2015). Therefore, silencing the RNAmediated IFN response during pluripotency would act as a protective mechanism for aberrant IFN activation by transposon-derived transcripts. Cells that are incapable of activating the RNA-mediated IFN response have developed alternative antiviral defence pathways. The endonuclease DICER can act as an antiviral factor in mouse ESCs by generating antiviral siRNAs (Maillard et al., 2013). Detection of antiviral DICER activity is facilitated in the absence of a competent IFN response, such as in the case of pluripotent cells, but also in somatic cells where the type I IFN response has been genetically impaired (Maillard et al., 2016). These findings are supported by the observation that in IFN-competent cells, the RNA sensor LGP2 acts as an inhibitor of DICER cleavage activity on dsRNA (van der Veen et al., 2018). However, DICER activity has also been reported in other cell lines, independently of their IFN-proficiency capacity . Interestingly, when we disrupt Dicer in ESCs, which inherently lack an IFN response and would theoretically render these cells highly sensitive to viral infections, they become more resistant by acquiring an active IFN response. All these results support the presence of extensive cross-talk between the different antiviral strategies, and suggests that cells have developed mechanisms to compensate for the loss of a specific antiviral pathway. Our model shows that MAVS and miR-673 levels are the key factors regulating the IFN response to dsRNAs during pluripotency. Accordingly, overexpressing MAVS or knocking-out this single miRNA in ESCs is enough to enhance their antiviral response. Interestingly, this miRNA is only conserved in rodents, despite human ESCs also suppressing type I IFNs expression (Hong and Carmichael, 2013). This suggests that either other miRNAs regulate MAVS expression in human ESCs, or alternative mechanisms operate to silence IFN. Interestingly, human and mouse ESCs have been suggested to constitutively express a subset of ISGs to protect them from viruses (Wu et al., 2018). Our proteomics data suggest that, from all the ISGs detected, miRNAs did not significantly affect production of these antiviral factors, such as IFITM1, IFTIM2, IFITM3 amongst others. We have shown that engineering ESCs to acquire a functional IFN response significantly increases their antiviral immunity, highlighting the powerful antiviral effects of IFNs even during pluripotency. Previous findings also support a general role for DICER and miRNAs acting as negative regulators of the IFN response in human and mouse models outside pluripotency (Papadopoulou et al., 2012;Witteveldt et al., 2018). 
In agreement, an indirect approach to deplete cellular miRNAs, by overexpressing the viral protein VP55 from Vaccinia virus, showed that miRNAs are also relevant to control the expression of pro-inflammatory cytokines during chronic viral infections, but not in the acute antiviral response (Aguado et al., 2015). However, the concept of miRNAs acting as direct antiviral factors is still controversial. It is relevant to mention that some of the results leading to this conclusion have been primarily generated in the DICER1 -/- HEK293T human cell line (Bogerd et al., 2014; Tsai et al., 2018), which has an attenuated IFN response due to low PRR expression (Rice et al., 2014; Witteveldt et al., 2018). We have shown that overexpression of MAVS or silencing specific miRNAs in a transient or stable manner improves the antiviral response of ESCs. These findings are the basis to further study the conservation of the miRNA-mediated regulation of the IFN response in somatic cells and in the context of human pluripotency. All these investigations will provide a deeper understanding and tool set on how to enhance the innate immunity of ESCs and their differentiated progeny, an especially relevant aspect in clinical applications. Materials and methods Stocks of TMEV strain GDVII were grown on BHK-21 cells and frozen in aliquots at −80°C. Stocks of Influenza A virus strain PR8 (kindly provided by P. Digard, University of Edinburgh) were grown on MDCK cells in the absence of serum and in the presence of 2 µg/ml TPCK-treated trypsin and frozen in aliquots at −80°C. For TMEV infections, cells were infected for 1 hr with the required dilution, followed by replacement with fresh medium and incubation for the desired time. For the 50% Tissue Culture Infective Dose (TCID50) assays, seven serial dilutions of TMEV were prepared and at least six wells (in 96-well format) per dilution were infected and incubated for at least 24 hr before counting infected wells. TCID50 values were calculated using the Spearman and Kärber algorithm. Influenza A virus infections were performed by infecting cells in the absence of serum for 45 min with the addition of 2 µg/ml TPCK-treated trypsin. After replacement of the inoculum with fresh serum-containing medium, the cells were incubated for the desired period. Differentiation of mESCs To differentiate mESCs, they were first cultured as hanging droplets to induce embryoid body formation. For this, a single-cell suspension of 5 × 10^5 cells/ml was prepared in medium without LIF and 20 µl drops were pipetted on the inside of the lid of a 10 cm petri dish and hung upside-down. The petri dish was filled with PBS to prevent drying of the hanging drops and incubated at 37°C, 5% CO2 for 48 hr. The embryoid bodies were subsequently washed from the lids and transferred to petri dishes to further differentiate, all in the absence of LIF. After another incubation time of 48 hr, medium was removed and replaced with fresh medium containing 250 nM retinoic acid (Sigma-Aldrich) and incubated for 7 days while replacing the medium every 48 hr. After this incubation time, the embryoid bodies were collected and plated on normal gelatine-coated cell culture plates, which allowed the embryoid bodies to adhere to the plastic and the cells to migrate from the embryoid bodies. Again, the medium was refreshed every 48 hr for the cells to further differentiate. Northern blot for miRNAs Total RNA (15 µg) was loaded on a 10% TBE-UREA gel. After electrophoresis, the gel was stained with SYBR gold for visualization of equal loading.
Gel was transferred onto a positively charged Nylon membrane for 1 hr at 250 mA. After UV-crosslinking, the membrane was pre-hybridized for 4 hr at 40˚C in 1xSSC, 1%SDS (w/v) and 100 mg/ml single-stranded DNA (Sigma). Radioactively labelled probes corresponding to the highly expressed ESCs miRNAs miR-130-3 p, miR-293-3 p, and miR-294-3 p were synthesized using the mirVana miRNA Probe Construction Kit (Ambion) and hybridized overnight in 1xSSC, 1%SDS (w/v) and 100 mg/ml ssDNA. After hybridization, membranes were washed four times at 40˚C in 0.2xSSC and 0.2%SDS (w/v) for 30 min each. Blots were analysed using a PhosphorImager (Molecular Dynamics) and ImageQuant TL software for quantification. Oligonucleotides used are listed in Supplementary file 1. Transfections of poly(I:C), DNA, miRNA mimics and Antagomirs To activate the IFN response, cells were transfected with either the dsRNA analogue poly(I:C) (Invivogen) or the Y-shaped-DNA cGAS agonist (G3-YSD, Invivogen) using Lipofectamine 2000 (Thermo-Fisher). Transfections were performed in 24-well format, with cells approximately 80% confluent, using different concentrations of poly(I:C), from 0,5 to 2,5 mg per well (as indicated in the figures) or 0.5 mg of G3-YSD. Cells were incubated for approximately 16 hr for poly(I:C)-and 8 hr for DNAtransfections before harvest and further processing. IFN-b expression was measured using a quantitative ELISA kit (Mouse IFN-b, Quantikine, R and D systems) according to manufacturer's instructions. Cells were transfected with 2.5 mg/ml poly(I:C), incubated for 16 hr after which supernatant was collected and assayed for IFN-b. To activate ESCs with exogenous IFN-b (R and D systems), cells were incubated with 10.000 U/ml of IFN-b for 4 hr, followed by RNA extraction and quantitative RT-PCR. For the miRNA mimics (miScript, Qiagen) a final concentration of 1 mM was transfected into cells using Dharmafect (Dharmacon), incubated for the desired period and further processed. The same procedure was followed for the antagomirs (Dharmacon), but at a concentration of 100 nM. All experiments were performed in 24-well format, with cells at approximately 80% confluency. Quantitative RT-PCR Total RNA from cells was isolated using Tri reagent (Sigma-Aldrich) according to the manufacturer's instructions. 0.5-1 mg RNA was subsequently reverse transcribed using M-MLV (Promega) and random hexamers, and used for quantitative PCR in a StepOnePlus real-time PCR machine (Thermo-Fisher) using GoTaq master mix (Promega). Data was analysed using the StepOne software package. Oligonucleotides used are listed in Supplementary file 1. Luciferase assay The 3'UTRs from Mda5, Rig-I and Mavs were amplified from genomic DNA based on the annotation from UTRdb (utrdb.ba.itb.cnr.it) using primers containing restriction sites. The fragments were cloned in the psiCHECK-2 vector (Promega) at the 3' end of the hRluc gene. Cells in 24-well format were transfected with 250 ng plasmid using Lipofectamine 2000 and incubated for 24 hr. Cells were subsequently lysed and assayed using the Dual-Glo Luciferase assay system (Promega). Luminescence was measured in a Varioskan flash (ThermoFisher) platereader. Proteomics For the total proteome comparison, 6 replicates of the Dgcr8 -/and Dgcr8 resc cell lines were prepared by lysing cells in Lysis buffer (50 mM TRIS-HCl, pH 7.4, 1% triton X-100, 0.5% Na-deoxycholate, 0.1% SDS, 150 mM NaCl, protease inhibitor cocktail (Roche), 5 mM NaF and 0.2 mM Sodium orthovanadate) at 4˚C. 
Samples were subsequently sonicated 4 Â 10 s, at 2m amplitude, reduced by boiling with 10 mM DTT and centrifuged. The samples were further processed by Filter-aided sample preparation (FASP) by mixing each sample with 200 ml UA (8M Urea, 0.1 M Tris/HCl pH 8.5) in a Vivacon 500 filter column (30 kDa cut off, Sartorius VN01H22), centrifuged at 14.000 x g and washed twice with 200 ml UA. To alkylate the sample, 100 ml 50 mM iodoacetamide in UA was applied to the columns and incubated in the dark for 30 min, spun, followed by two washes with UA and another two washes with 50 mM ammonium bicarbonate. The samples were trypsinized on the column by the addition of 4 mg trypsin (ThermoFisher) in 40 ml 50 mM ammonium bicarbonate to the filter. Samples were incubated overnight in a wet chamber at 37˚C and acidified by the addition of 5 ml 10% trifluoroacetic acid (TFA). The pH was checked by spotting onto pH paper, and peptide concentration estimated using a NanoDrop. C18 Stage tips were activated using 20 ml of methanol, equilibrated with 100 ml 0.1% TFA) and loaded with 10 mg peptide solution. After washing with 100 uL 0.1% TFA, the bound peptides were eluted into a Protein LoBind 1.5 mL tube (Eppendorf) with 20 ml 80% acetonitrile, 0.1% TFA and concentrated to less than 4 ml in a vacuum concentrator. The final volume was adjusted to 6 ml with 0.1% TFA. Five mg of peptides were injected onto a C18 packed emitter and eluted over a gradient of 2-80% ACN in 120 min, with 0.1% TFA throughout on a Dionex RSLnano. Eluting peptides were ionised at +2 kV before data-dependent analysis on a Thermo Q-Exactive Plus. MS1 was acquired with mz range 300-1650 and resolution 70,000, and top 12 ions were selected for fragmentation with normalised collision energy of 26, and an exclusion window of 30 s. MS2 were collected with resolution 17,500. The AGC targets for MS1 and MS2 were 3e6 and 5e4 respectively, and all spectra were acquired with one microscan and without lockmass. Finally, the data were analysed using MaxQuant (v 1.5.7.4) in conjunction with uniprot fasta database 2017_02, with match between runs (MS/MS not required), LFQ with one peptide required. Average expression levels were calculated for each protein and significant differences identified using a two tailed t-test assuming equal variance (homoscedasticity) with a p-value lower than 0.05. Stable cell lines overexpressing DGCR8, Dicer and MAVS Plasmids containing the sequence of mouse DICER (pCAGEN-SBP-DICER1, Addgene), MAVS (GEhealthcare, MMM1013-202764911) and DGCR8 (Macias et al., 2012) were used to amplify the open reading frame using specific primers containing restriction sites (Supplementary file 1). The amplified and digested fragments were ligated in pLenti-GIII-EF1a for MAVS and pEF1a-IRES-dsRED-Express2 for DGCR8 and DICER. Verified plasmids containing the genes of interest were transfected in mESCs using Lipofectamine 2000 and selected with the appropriate antibiotic. After several weeks of selection, colonies were isolated, expanded and tested for expression by qRT-PCR and Western blot. Mitochondrial activity The mitochondria specific dye Rhodamine 123 (Sigma-Aldrich) was used to measure mitochondrial activity. Suspended cells were incubated with Rhodamine 123 at 37˚C and samples were taken at various intervals, washed three times with PBS at 4˚C and the fluorescence measured in a VarioSkan flash (ThermoFisher) plate reader (excitation 508, emission 535). 
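As an illustration of the differential-expression step described in the proteomics section above (a two-tailed t-test on LFQ intensities assuming equal variances, with significance at p < 0.05), the following is a minimal Python sketch. The array layout, variable names, and group labels are assumptions; the original analysis was carried out on MaxQuant LFQ output.

```python
import numpy as np
from scipy import stats

def differential_proteins(lfq_a, lfq_b, alpha=0.05):
    """Two-tailed t-test per protein, assuming equal variances (homoscedasticity).

    lfq_a, lfq_b: (n_proteins, n_replicates) arrays of LFQ intensities for the
    two conditions (e.g. Dgcr8-null vs. rescued cells, 6 replicates each).
    Returns a boolean mask of proteins with p < alpha and the p-values.
    """
    t_stat, p_vals = stats.ttest_ind(lfq_a, lfq_b, axis=1, equal_var=True)
    return p_vals < alpha, p_vals

# Hypothetical usage with 6 replicates per condition:
# hits, p = differential_proteins(lfq_ko, lfq_rescue)
```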
Inhibitors Cells were pre-incubated with the inhibitor BX795, which blocks the phosphorylation of the kinases TBK1 and IKKε, and consequently IRF3 activation and IFN-β production (10 µM, Synkinase), or the inhibitor BMS345541, which targets IκBα, IKKα and IKKβ and consequently NF-κB signalling (10 µM, Cayman Chemical), for 45 min before infection with TMEV. At 24 hr post-infection in the presence of the inhibitor, infected wells were scored and the TCID50 calculated. For Ruxolitinib (Cell Guidance Systems), cells were pre-incubated for 45 min with 50 µM Ruxolitinib, infected with TMEV and incubated for 16 hr, followed by extensive washing with PBS, RNA extraction and analysis by quantitative RT-PCR. CRISPR/Cas9 targeting of mmu-miR-673 To create a cell line lacking mmu-miR-673-5p, the Alt-R CRISPR-Cas9 System (IDT) was used. Two different crRNAs were designed to target sequences within the pri-miRNA hairpin to induce structural changes disrupting processing by the Microprocessor and DICER. Cas9 protein and tracrRNAs were transfected with the Neon Transfection System, followed by cell sorting to create single-cell clones. Genomic DNA was purified and screened by PCR followed by restriction site disruption analyses for the pri-miRNA sequence. Genomic DNA of the pri-miRNA sequence of candidates was amplified using primers in Supplementary file 1 and cloned into the pGEM-T Easy vector for sequencing. miRNA qRT-PCR Total RNA (100 ng) was used to quantify mmu-miR-673-5p levels. RNA was first converted to cDNA using the miRCURY LNA RT kit (Qiagen). cDNA was diluted 1/25 for RT-qPCR using the miRCURY LNA SYBR Green kit and amplified using mmu-miR-673-5p specific primers (Qiagen) and U6 as a loading control. Quantitative PCR was carried out on a Roche LC480 light cycler and analysed using the second derivative method. Data availability All processed mass spectrometry data are provided as Figure 3-source data 1, including LFQ intensity values for each protein detected in each of the samples. All raw data are available from the corresponding author upon request.
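The miRNA qRT-PCR section above reports quantification of miR-673-5p against a U6 loading control but does not spell out the relative-quantification arithmetic. The sketch below therefore shows only the widely used 2^(-ΔΔCt) approach under that assumption; it is not necessarily the exact calculation used in the paper, and the example values are made up.

```python
def relative_expression(cq_target, cq_ref, cq_target_ctrl, cq_ref_ctrl):
    """Relative miRNA level by the common 2^(-ddCt) method.

    cq_target / cq_ref: Cq values of miR-673-5p and U6 in the sample of interest.
    cq_target_ctrl / cq_ref_ctrl: the same quantities in the calibrator sample.
    """
    d_ct_sample = cq_target - cq_ref              # normalize to U6 in the sample
    d_ct_control = cq_target_ctrl - cq_ref_ctrl   # normalize to U6 in the calibrator
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical usage (illustrative Cq values only):
# fold_change = relative_expression(24.1, 18.0, 27.5, 18.2)
```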
8,635.2
2019-04-23T00:00:00.000
[ "Biology", "Medicine" ]
The Role of Household Consumers in Adopting Renewable Energy Technologies in Kenya In transition to a low-carbon economy, the adoption of renewable energy (RE) technologies by energy investors, power utilities and energy consumers is critical. In developing countries like Kenya with a high rate of urbanization, this transition requires urban and rural residents’ proactive responses to using renewable energy sources. In this regard, a better understanding of residents’ perceptions about renewable energy investment, RE sources availability, climate change, environmental conservation and other factors can lead to more efficient and sustainable implementation of renewable energy policies. This study investigates the role Kenya’s household energy consumers in urban and rural areas can play in adopting renewable energy technologies. To achieve this, a questionnaire survey was administered among 250 household consumers in Nairobi County, Makueni County, and Uasin Gishu County. Our survey analysis shows that about 84% of the respondents were interested in adopting renewable energy for their entire energy consumption mostly because of solving frequent power outages and high energy cost from the grid system. This perception did not have any correlations with income levels or any other socio-economic factors we identified. Furthermore, about 72% of the respondents showed their interests in producing and selling renewable energy to the national or local grids if government subsidies were readily available. Rural residents showed strong interests in adopting renewable energy technologies, especially solar PV solutions. However, the main impediment to their investment in renewable energy was the high cost of equipment (49%) and the intermittent nature of renewable energy (27%) resources. Introduction Residential energy consumption reduction can play a significant role in mitigating climate change [1]. Residential emissions particularly from urban areas, account for 30%-40% of the global greenhouse gases (GHGs) emissions [2,3]. Since 2012, Kenya's residential power consumption has grown by 28%. This growth occurred mainly as a result of urbanization and improved electricity access in the rural areas [4]. Currently, the residential sector accounts for 31% of the total electricity consumption [5]. In 2018, hoping to dramatically change this energy consumption outlook, the Kenyan government announced that it would supply 100% of its energy from renewable resources. Despite this noble attempt, the question remains as to the extent to which household energy consumers are willing to adopt renewable energy technologies in developing countries like Kenya that has suffered from chronic poverty conditions but is endowed with a high renewable energy generation potential. Kenya's renewable energy market was established as early as the 1970s mostly by foreign investments. The country soon became known as a "donor hub" for solar photovoltaic (PV) facilities [6]. Currently, however, only about 1.2% of Kenyan households have installed home solar systems [4]. Some communities have undertaken community-level hydro projects while other individuals have invested in wind power generation. In an international context, studies found that consumers' decisions to adopt renewable energy technologies were influenced by motivational, contextual, and habitual factors [7]. 
Some studies emphasized that residents adopted renewable energy technologies because of prospects for economic benefits or social pressure from their peers and neighbors [7][8][9]. Others suggested that environmental motivations induced people to adopt renewable energy [9]. Some research found a strong link among household environmental attitudes, energy consumption and investment patterns [10]. Government incentives would also induce the adoption of renewable energy as the cost appears to be a strong barrier to adoption [7,[10][11][12][13]. Although several studies examined residential consumers' perceptions on renewable energy, these were mainly carried out in developed countries. This study attempts to investigate regional contexts of residential consumers' decision-making for adopting renewable energy technologies in Kenya. Previous studies [14][15][16] demonstrated that consumer perceptions about renewable energy often vary by country. Even within a country studies [17,18] showed that those in rural areas are more likely to invest in renewable energy than those in urban areas. Other studies [19,20] noted that different lifestyles affect technological choices. In order to grow the building integrated photovoltaics (BIPV) market leading to economic and social progress, education of urban residential consumers' is crucial [19]. Despite these studies, scant attention has been paid to residential consumers' attitudes and willingness to adopt renewable energy technologies in Kenya. In addition, we know little about motivating factors and challenges that Kenya's household consumers face in investing in renewable energy technologies. Study Areas This study targets household electricity consumers in both urban and rural areas of Kenya. Considering climatic conditions, urbanization level, household power demand, economic activities, and renewable energy potential, we selected Nairobi County (urban), Makueni, and Uasin Gishu counties (both largely rural) ( Figure 1). These counties have differences in electricity access rates, population distribution, electricity usage, and climatic conditions. The government has focused on renewable energy projects for rural areas [20]. However, the grid extension has progressed relatively slowly in rural areas. The two rural counties of Makueni (south-east) and Uasin Gishu (north-west) are located on the opposite sides of the capital city (Nairobi) with different climatic conditions. The eastern part of the capital city generally receives less rainfall than the western part does. The three counties, therefore, are a good representation of urban and rural areas in Kenya. Nairobi County is the most populous county in Kenya with a population of 3,138,369, according to the 2009 census. Kenya's capital is located here. It has a total surface area of 697 km 2 and is classified as 100% urban. It receives an average annual precipitation of 926 mm with annual average maximum and minimum daily temperatures of 25.3 • C and 12.6 • C, respectively [21]. Currently, Nairobi County has the highest power consumption (45%) and is projected to remain so in the next 20 years. Nairobi county hosting the capital of Kenya has benefitted from a World Bank funded project of public/street lighting program [22]. In addition, it was the main beneficiary of slum electrification project [23]. Demographically, Nairobi residents can be clearly classified as low, middle-high-income groups by residential areas [24]. 
The main economic activities are manufacturing, tourism, commercial, and financial services. The average annual photovoltaic (PV) electricity output is 1530 kWh/kWp. The average annual wind power density ranges from poor to marginal (0-165 W/m²), although some spots are very good (425-615 W/m²) [25,26]. The County has no known hydro power generation potential [27]. Makueni County is one of the semi-arid counties located in the eastern part of Kenya about 130 km from Nairobi [28]. It has a total population of 884,527, according to the 2009 census [24]. It has a total surface area of 6806 km² with urban areas covering about 1% of the total land area. It receives an average annual precipitation of 596 mm with annual average maximum and minimum temperatures of 28.2 °C and 16.8 °C, respectively [29]. To increase electricity access in rural areas, in 2012, the government set up a 13.5 kWp solar plant, battery storage, and canopy at the Kitonyoni village trading center. This solar project supplies electricity to about 3,000 residents [28]. The main economic activities here are agriculture (especially animal husbandry) and commerce. The average annual photovoltaic electricity output is 1573 kWh/kWp [28]. The average annual wind power density ranges from poor to marginal (0-165 W/m²), although some spots are classified as good (275-425 W/m²) [25,26]. Given the arid and semi-arid nature of the County, it has little hydro power generation potential [27]. Uasin Gishu County is located on a plateau in the western part of Kenya about 310 km from Nairobi [24]. It has a total population of 818,757, which is evenly distributed across the County, according to the 2009 census. It has a total surface area of 3351 km² with urban areas covering about 6% of the County. It receives an average annual precipitation of 1100 mm with annual average maximum and minimum temperatures of 23.3 °C and 11.3 °C, respectively [30]. The average annual photovoltaic electricity output is 1793 kWh/kWp [28], while the average annual wind power density ranges from poor to marginal (0-165 W/m²) with some high density spots (275-425 W/m²) [25,26]. This County also has potential for small hydroelectric generation and is one of the main catchment areas of Lake Victoria [27,31]. It benefited from a World Bank-funded public/street lighting program [22]. 
Data Collection The primary data was collected through the questionnaire survey that was administered between October and November 2018. Prior to this, a preliminary survey was conducted to make sure that the respondents could understand all the questions. A revised multiple choice questionnaire was then administered to 250 household heads through random sampling in the three counties of Nairobi, Makueni, and Uasin Gishu. We targeted 50 household heads in each county evenly distributed to cover different parts of the country according to the local conditions. Considering Nairobi County is very diverse, we obtained cooperation from 100 residents from both low-income households (Kibera and Mukuru kwa Reuben slums) and middle-high-income households (Roysambu, Karen and Westlands estates). In addition, we had 50 respondents from the Nairobi County city center. This ensured that the survey captured a representative sample of all income levels and diversities in the County. Since most household heads were either working or had some other errands during weekdays, the questionnaire was administered mainly during weekends except for shops and the Nairobi County city center respondents. The questionnaire was administered to adults only. The questionnaire was formulated after reviewing similar studies on perceptions of renewable energy [7,[32][33][34], renewable energy acceptance and policies [35][36][37], and specific renewable energy technology studies [38][39][40]. In addition, an extensive review of government publications on rural and urban electrification projects [7,24], energy generation and transmission [41], energy planning [5], and reports on energy generation and supply [42,43] was carried out. The questionnaire was divided into six sections. The first section attempted to find out the socio-demographic characteristics of the respondents, including the current source of electricity. 
The second section focused on respondents' concerns about environmental conservation, air pollution associated with diesel power generation plants, and the types of power they used. The third section investigated their willingness to accept 100% of their power supply from renewable energy sources, and their perceptions about government support for this purpose. The fourth section attempted to find out what renewable energy technologies the respondents would be willing to adopt and invest in. The fifth section looked into the motivations to invest in renewable energy technologies. The sixth section assessed the challenges residents faced in installing renewable energy technologies. The collected data were coded and entered into Microsoft Excel 2016. The data were analyzed using the Microsoft Excel Analysis ToolPak and summarized using both descriptive and inferential statistics. Socio-Demographic Characteristics The first part of the questionnaire on socio-demographic characteristics identified age, gender, household size, educational level, and monthly income (Table 1). In addition, we asked the respondents to state their current electricity power source. Here we found that about 61% of the respondents were males. Three-quarters of the respondents fell within an age bracket of 20-39 years old. Only 3% of our respondents had no formal education and 89% had at least attained high school education. About 77% of those with postgraduate qualifications were from the middle and high-income category in Nairobi County. Those in Uasin Gishu, a rural county, tended to be less educated. Regarding household size, 70% had four or fewer persons [21]. Those who had more than five persons tended to be in Uasin Gishu County (39%). About 63% of the respondents earned a monthly income of between KES 10,000 and 50,000 (US$1 = KES 101.2986 on 1 January 2018). About 73% of those who earned less than KES 20,000 per month lived in the rural counties of Makueni (22%) and Uasin Gishu (15%) or in the low-income areas of Nairobi County (36%). On the question of their current electricity power source, about 74% of the respondents obtained their electricity solely from the national grid, while 16% did so from the combined sources of the national grid and solar PV. About 7% had electricity from a solar mini-grid, while 3% had all their electricity supplies from household solar PV with battery. All the respondents in Nairobi County were connected to the national grid. The majority of those who got power supply from both the national grid and solar PV were from the rural counties of Uasin Gishu (48%) and Makueni (32%). All those who received power from a solar mini-grid were from Makueni County. Three-quarters of the households who had all their electricity from household solar PV with battery were from Makueni County, while the rest were from Uasin Gishu County. Concern on Environment and Power Generation The second section of our survey focused on respondents' concerns about environmental conservation, renewable energy sources and other power sources the respondents used. We asked the respondents about the extent to which they cared about (1) environmental conservation, (2) air pollution associated with diesel power generation, and (3) the type of power sources they used (renewable or non-renewable sources). The results show that about 98% of the respondents either strongly cared or cared about environmental conservation (Figure 2). 
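The coding and summary workflow described above was carried out in Microsoft Excel 2016 with the Analysis ToolPak; for readers who prefer a scripted route, the pandas sketch below shows the same kind of descriptive summary. It is a minimal illustration only: the column names and the tiny inline sample are hypothetical placeholders, not the actual survey data.

```python
import pandas as pd

# Hypothetical coded survey records; the real study used 250 responses coded in Excel.
df = pd.DataFrame({
    "county": ["Nairobi", "Nairobi", "Makueni", "Uasin Gishu", "Makueni"],
    "gender": ["M", "F", "M", "M", "F"],
    "income_kes": [45000, 18000, 12000, 25000, 9000],
    "power_source": ["grid", "grid", "mini-grid", "grid+solar", "solar+battery"],
})

# Share of respondents by current electricity source (cf. the 74% grid-only figure).
source_share = df["power_source"].value_counts(normalize=True) * 100
print(source_share.round(1))

# Cross-tabulation of power source by county, as a percentage within each county.
by_county = pd.crosstab(df["county"], df["power_source"], normalize="index") * 100
print(by_county.round(1))

# Simple monthly-income banding (KES), mirroring the brackets reported in Table 1.
df["income_band"] = pd.cut(df["income_kes"], bins=[0, 20000, 50000, 10**9],
                           labels=["<20k", "20k-50k", ">50k"])
print(df["income_band"].value_counts(normalize=True).round(2))
```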
Similarly, about 96% either strongly cared or cared about air pollution that was associated with diesel power generation. About 84% of the respondents strongly cared or cared whether the energy they used was generated from renewable sources or not. Previous studies [44][45][46] similarly showed high environmental concerns among household consumers. These consumers depicted other pro-environmental behaviors like adopting green electricity and energy conservation for air pollution control. Here we also tried to find if there are regional variations in response to these three factors. An analysis of variance on environmental conservation (p-value = 7.93 × 10 −11 ), air pollution associated with diesel power generation (p-value = 1.75 × 10 −8 ), and power generation source (p-value = 3.34 × 10 −8 ) showed significant statistical differences among the three study areas. Those who strongly cared about environmental conservation were mostly from Makueni County (72%), low-income category (56%), middle and high-income category (54%), and Nairobi County city center (50%). In Uasin Gishu County 84% cared about environmental conservation. The respondents who strongly cared about diesel power generated air pollution were from Makueni County (64%), Nairobi County city center (56%), middle-high-income category (52%) and low-income category (50%). About 94% of those in Uasin Gishu County stated that they cared about it. Those who strongly cared about power generation sources were from Makueni County (54%), low-income category (48%), middle-high-income category (44%), and Nairobi County city center (44%). In Uasin Gishu County 68% of the respondents cared about it and 30% said they were not sure. These results show that the respondents in Makueni County strongly cared more about the three environmental issues compared to the other two counties. This can be understood from their geographical location in arid and semi-arid areas where the residents have experienced adverse environmental impacts like drought. In addition, their livelihoods (agriculture and livestock keeping) are directly connected to the environment [21]. Although the respondents in Uasin Gishu County also depended on farming, this County received higher rainfall with more forest cover [30]. 
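The analyses of variance reported above were run in the Excel Analysis ToolPak; the sketch below shows how an equivalent one-way ANOVA across the three study areas could be set up in Python. The Likert coding (1-5) and the toy response vectors are assumptions for illustration only, not the survey responses.

```python
from scipy import stats

# Hypothetical Likert-coded answers (1 = do not care at all ... 5 = strongly care)
# to the environmental-conservation question, grouped by study area.
nairobi     = [5, 4, 4, 3, 5, 4, 2, 4]
makueni     = [5, 5, 5, 4, 5, 5, 4, 5]
uasin_gishu = [4, 4, 4, 3, 4, 4, 4, 3]

# One-way ANOVA: does mean concern differ across the three counties?
f_stat, p_value = stats.f_oneway(nairobi, makueni, uasin_gishu)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")

# A p-value below 0.05 would indicate statistically significant differences
# among the study areas, as reported for all three environmental questions.
```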
Regarding responses from different regions, we additionally conducted a multiple regression analysis to find the connection between the three identified environmental problems and the respondents' socio-demographic characteristics. The analysis result shows that education (p-value = 2.42 × 10 −2 ) and income (p-value = 1.8 × 10 −3 ) significantly affected their perceptions about air pollution from diesel power. However, the two demographic characteristics of education and income had no significant effect on their perceptions about environmental conservation and power generation sources. Interestingly, the household size had a significant impact on perceptions of environmental conservation (p-value = 2.19 × 10 −2 ) and power generation source (p-value = 2.85 × 10 −2 ). However, age and gender had no significant effect on the respondents' perceptions. We found that the households with less than three persons and those with more than six persons showed their strong concerns about environmental conservation and energy sources whereas the households with three to five persons cared less about the environment and energy issues. Power Supply from Renewable Energy Sources and Government Support The third part of the questionnaire aimed to assess the extent to which consumers perceived the importance of adopting renewable energy. We tried to find out if household energy consumers wanted to receive all their domestic electricity needs from renewable energy sources. We also sought to find whether these consumers would like to sell energy they generated to the national or local grid. Additionally, we wanted to know how the respondents would respond to government subsidies for household customers' renewable energy adoption. 
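The multiple regression of perceptions on socio-demographic characteristics described above can likewise be scripted. The statsmodels sketch below is a hedged illustration with made-up records, using ordinary least squares on a Likert-coded outcome with categorical predictors; the variable names and model specification are assumptions, not the authors' exact Excel setup.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical coded records: the outcome is a 1-5 Likert score for concern about
# air pollution from diesel generation; predictors mirror the survey demographics.
df = pd.DataFrame({
    "concern":        [5, 4, 3, 5, 2, 4, 5, 3, 4, 5],
    "education":      ["secondary", "secondary", "primary", "tertiary", "primary",
                       "tertiary", "tertiary", "secondary", "tertiary", "tertiary"],
    "income_kes":     [9000, 15000, 22000, 60000, 8000, 45000, 90000, 18000, 30000, 75000],
    "household_size": [6, 4, 3, 2, 7, 4, 2, 5, 3, 2],
    "age":            [34, 28, 45, 39, 52, 31, 41, 26, 37, 48],
    "gender":         ["F", "M", "M", "F", "M", "F", "M", "F", "M", "F"],
})

# Ordinary least squares with categorical education and gender terms.
model = smf.ols(
    "concern ~ C(education) + income_kes + household_size + age + C(gender)",
    data=df,
).fit()

# Per-predictor p-values, analogous to the significance levels reported in the text.
print(model.pvalues.round(4))
```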
The result shows that about 84% of the respondents either strongly agreed or agreed that they wanted to receive all their electricity demand from renewable sources (Figure 3). An analysis of variance (p-value = 2.48 × 10 −4 ) indicated regional variations, in which we found that 92% of the respondents in Makueni and 100% in Uasin Gishu demonstrated their positive responses. In Nairobi County, middle- and high-income respondents (86%) showed more positive response compared to those in low-income ones (66%) and in the city center (76%). This is probably because among the surveyed households in Makueni County, about 12% had all their electricity demand from household solar PV with battery while about 34% had from a solar mini-grid. In Uasin Gishu County, about 4% obtained electricity from household solar PV while 44% from both the national grid and solar PV. This finding largely corresponds with an Australian study [36], in which home ownership was found to be an important factor for households to adopt renewable energy sources. A multiple regression analysis of the respondents' socio-demographic characteristics of age, gender, household size, education, and income levels did not show any significant effect on their willingness to obtain all their power demand from renewable energy sources. Regarding the second question about respondents' interests in selling energy to grids, about 72% of them either strongly agreed or agreed to this idea while about 12% disagreed. An analysis of variance (p-value = 2.53 × 10 −7 ) of their responses showed regional and income-level differences. All the respondents in Uasin Gishu County demonstrated their interests in selling power to grids. In Makueni County, about 72% showed their interests. In Nairobi County, about 74% of those in the middle-high-income level showed high interests while about 64% of those in the low-income level answered positively. 
A multiple regression analysis of the respondents' socio-demographic characteristics of age, gender, household size, education, and income levels did not show any significant effect on their interest to sell energy to the grids. These findings support previous similar studies in Germany, France, Italy, and Australia [47,48]. Regarding the importance of government incentives for household consumers to invest in renewable energy, about 68% either strongly agreed or agreed while about 19% either disagreed or strongly disagreed. An analysis of variance (p-value = 1.99 × 10 −4 ) indicated regional variations. In the rural counties of Uasin Gishu and Makueni, 94% and 85% of the respondents showed their interest in government support, respectively. In Nairobi County, 56% of the middle-high-income respondents and 65% of low-income people wanted government support. In the city center, 50% showed interest. Some facts can explain the reasons behind these differences. For example, in the past, government support for renewable energy projects (especially off-grid projects) has been concentrated in rural areas. The multiple regression analysis of these responses in connection with respondents' socio-demographic characteristics indicates that education (p-value = 1.62 × 10 −2 ) significantly affected their perceptions about government support. Renewable Energy Technology Choices The fourth section of the questionnaire asked the respondents about what renewable energy technologies they would like to invest in. The question in this particular section targeted only those households (74%) that obtained all their electricity needs from the national grid. The result shows that about 85% would invest in solar PV, while only 2% and 1% showed interests in wind and small hydro, respectively (Figure 4). About 6% would invest in a combination of solar PV, wind, and small hydro. 
The rest had no interest in any form of renewable energy technologies. Here we found different responses by regions. In terms of investment in solar PV, the respondents in Makueni (94%) and Uasin Gishu (90%) wanted to invest in solar PV systems. In Nairobi County, a higher percentage of the middle-high-income group (86%) showed interests in solar compared to those in the low-income group (80%) and in the city center (72%). 
Those respondents who showed interests in wind power were from Makueni County and the Nairobi County city center, while all those who wanted to invest in small hydro were from Uasin Gishu County. The tendency of residents to show interest in solar energy can be found in other developing countries like Qatar [33] and Yemen [15]. We found that respondents' interests in investing in renewable energy were also influenced by financial and climatic factors. In Kenya, solar PV is a relatively easy and affordable technology compared to small wind and hydro technologies. On the other hand, only limited areas in the western, central, and eastern parts of the country have potential for small and large hydro generation [31]. This explains why all respondents who showed interests in small hydro developments were from Uasin Gishu County. Currently, most of the households who have invested in small hydro have done so through community projects, which are relatively larger [31]. Motivation to Invest in Renewable Energy Sources The fifth section of the questionnaire attempted to understand residents' motivations to invest in renewable energy technologies. The survey results revealed that the respondents were largely motivated to prevent electricity supply problems, such as frequent power outages (42%), high power cost (37%), and lack of connection to the national grid (11%) (Figure 5). Additionally, environmental concerns (6%) and low cost of renewable energy equipment (4%) motivated some respondents. A previous study [6] identified these three issues as the main challenges in Kenya's energy sector. Within a regional context, we found that frequent power outages were an important factor for the respondents in the Nairobi County city center (48%), while the high cost of power was more problematic for the low-income respondents (52%) in Nairobi County. For the respondents in Makueni County, lack of connection to the national grid was the main motivator (30%). Interestingly, environmental concerns were not considered as a motivation factor by the respondents of Makueni County, Uasin Gishu County, and Nairobi County's low-income group, even though 98% of the respondents indicated that they strongly cared or cared about environmental conservation. Previous studies [44][45][46] similarly found that pro-environmental concerns did not always translate into actual actions. The regression analysis of respondents' socio-demographic characteristics on motivations indicates that income levels (p-value = 1.8 × 10 −3 ) significantly affected their motivation. Low-income respondents were motivated by frequent power outages and high energy cost. On the other hand, for the middle-high-income respondents, frequent power outages, high energy cost, and environmental concerns were equally worrisome. Challenges Facing Adoption of Renewable Energy Technologies The sixth section assessed the challenges the respondents faced in case they decided to install renewable energy technologies. 
About 46% said that the high cost of equipment was the main challenge (Figure 6). This was followed by the intermittent nature of renewable energy sources (31%) and the lack of qualified personnel to install the systems (16%). Previous studies on Kenya's renewable energy sector similarly found that high equipment cost was one of the major challenges for Kenyan consumers [6]. In terms of regional variations, 58% of those in the middle-high-income group found this important, whereas 54% of those in the city center and 52% of those in the low-income groups showed their concerns over the cost. On the contrary, the cost was less important in Makueni (38%) and Uasin Gishu (30%). In these two rural counties, the intermittent nature of renewable energy was challenging, whereas less than 30% of the respondents in all groups of Nairobi County indicated this as their challenge. The lack of skilled persons who can install solar PV was pronounced in Uasin Gishu County (36%) and Makueni County (18%). In Nairobi County, this was a challenge for a certain number of the middle-high-income group (14%). Conclusions This paper examined Kenya's household residents' perceptions about renewable energy supplies. We also investigated factors that motivated and hindered the adoption of renewable energy technologies in three different regions of Kenya: Nairobi (urban county), Makueni, and Uasin Gishu (rural counties). Even though about 98% of the respondents showed their concerns about environmental conservation, this reason alone did not appear to motivate the respondents to invest in renewable energy. Instead, the respondents mainly wanted to secure a steady energy supply as they had often experienced power outages. The high and fluctuating energy cost from the grid system also motivated them to invest in renewable energy. 
In rural areas, the respondents were largely motivated by the lack of connection to the national grid. These rural residents expressed a need for government support in adopting renewable energy technologies. What hindered the respondents most in adopting renewable energy technologies were the high cost of equipment and the intermittent nature of renewable energy resources. The latter reason was particularly prevalent among rural residents. About 96% of the rural respondents and 76% of the urban respondents preferred to have all their power supplies from renewable energy sources. In addition, about 84% (86% rural and 63% urban) expressed their wish to sell electricity they generate to the national grid. The main renewable energy technology that the respondents preferred to invest in was solar PV (85%). Overall, this paper showed that Kenyan urban and rural residential consumers were highly interested in renewable energy. Kenya's national policy to obtain 100% of its electricity supply from renewables can be further expedited with a better understanding of regional differences in household consumers' needs. As past studies showed the importance of understanding regional differences in consumers' perceptions about adopting renewable energy, this study can shed light on this topic through a case study in Kenya. This paper also demonstrated that some of our findings corresponded with past studies from Europe and Australia. Within a policy improvement context, the findings of this paper can better inform Kenya's development partners like the US, EU, China, and Japan. These countries provide funding for major energy projects in Kenya.
9,172.8
2019-08-13T00:00:00.000
[ "Environmental Science", "Economics" ]
The first record of Fistulina hepatica (Schaeff.) With. on Castanea sativa Mill. in Poland This paper discusses details of the locality of Fistulina hepatica recorded on Castanea sativa, a new host species in Poland. Since 2014, F. hepatica has been featured on the list of species under partial protection, and has been marked as "R" (rare species) on the "Red list of the macrofungi in Poland". A new locality of F. hepatica has been found in Warsaw, in the Mokotów neighborhood, on the premises of the Central Clinical Hospital of Ministry of the Interior and Administration. Two basidiomata of F. hepatica were discovered at the base of a declining sweet chestnut tree. In 2004, the species was included in the list of fungi under strict protection [8]; since 2014, F. hepatica has been on the list of species under partial protection [9]. It is also featured on the "Red list of the macrofungi in Poland", marked as "R" for "rare species" [10]. An infection with F. hepatica is possible through various forms of mechanical damage to the tree trunk. The process of decomposition may last for several dozen years without causing serious damage to the tree structure. With time, the timber breaks and falls apart into prismatic cubes. This results in trunk hollowness at the base of infected trees. Rot develops in the central part of the trunk, i.e., in the tree butt, sometimes reaching a height of up to 6 m [6,11,12]. Basidiomata occur on the tree trunks, roots (sometimes seemingly on the ground), and also inside tree hollows and on tree stumps [12]. Fistulina hepatica is not a very common species in Poland [2]. According to Szczepkowski [13], there are 46 localities of this fungus in central-eastern Poland, most of which are located in various types of conservation areas. First occurrences of F. hepatica in Warsaw were recorded towards the end of the nineteenth century. The localities situated within the present city limits of Warsaw, where basidiomata of the beefsteak fungus have been discovered on oak trees, include the following neighborhoods: Bielany -"Las Bielański" Reserve, Wawer -"Las im. Króla Jana Sobieskiego" Reserve, Bemowo -forest park, Pyry, Park Łazienkowski, "Skarpa Ursynowska" Reserve, "Las Natoliński" Reserve, "Dęby Młocińskie" Nature-Landscape Complex, and Mokotów [13][14][15]. In the "Las Natoliński" Reserve, 134 basidiomata of F. hepatica were recorded on 83 oak trees. At present, the most numerous localities of F. hepatica are in Warsaw and in central-eastern Poland [12]. A comparatively high number of infected trees (from five to 20) have been discovered in the following nature reserves: "Las Bielański", "Las im. Króla J. Sobieskiego", "Jabłonna", and "Chojnów" [13]. In Poland, the genus chestnut (Castanea Mill.) from the family Fagaceae is represented almost exclusively by the sweet chestnut (Castanea sativa Mill.). 
The species originates from Southern Europe, but its exact natural range is presently difficult and probably even impossible to establish, as it has been cultivated as a useful species since ancient times. Currently, its geographical range mainly encompasses Italy, southern Austria, former Yugoslavian countries, Albania, southwestern Hungary, western Bulgaria, Greece, Turkey, and Asia Minor. It is probable that it was first introduced beyond its natural range, for example in England, by the Romans. In Poland, it was planted for the first time in the Royal Botanical Garden of King John II Casimir in 1651 [16]. The popularization of edible chestnuts across Europe, from ancient cultivation until the present, has been discussed by Conedera et al. [17]. As a southern, thermophilic species, C. sativa has found the best conditions for development in western Poland, mainly in Western and Eastern Pomerania, Lubusz Land, and Lower Silesia. It also occurs in Mazovia in central Poland, but only as a rare species. A study of the detailed distribution of the species in Poland, complete with a map, a list of localities, and an evaluation of the degree of acclimatization, was compiled by Browicz [18]; at the time, eastern Poland, and especially northeastern Poland, were beyond the range of the species. In Poland, C. sativa Mill. is grown as an ornamental species because of its decorative leaves and spectacular blossoms that appear during the flowering period. In Southern Europe, C. sativa has considerable social, economic, and landscape importance. In Spain, it was concluded that F. hepatica was one of the most important fungal species lowering timber quality [11]. Furthermore, in Greece, F. hepatica is a fungus species often occurring on old, living C. sativa trees [21]. In Portugal, meanwhile, some individual occurrences of F. hepatica on sweet chestnut trees were recorded [22]. According to Kotlaba [3], in Czechoslovakia, there were 310 localities of F. hepatica with an identified host plant; in 301 of them (97%), the genus Quercus was identified as the host plant, while C. sativa was identified to fulfill that role in only nine localities. The aim of this work is to present the first locality of F. hepatica on C. sativa in Poland. Material and methods The following field activities have been performed: ■ measuring the trunk diameter (at the height of 1.3 m) and the height of the tree (C. sativa) with basidiomata of F. hepatica; ■ specifying the number of basidiomata on the tree; ■ measuring the heights at which the basidiomata occurred; ■ establishing the geographical directions that the basidiomata were facing. The fungal names were given according to MycoBank [1]. Results The new locality of F. hepatica is situated in Warsaw, in the neighborhood of Mokotów, on the premises of the Central Clinical Hospital of Ministry of the Interior and Administration, at the address Wołoska 137. The fungus was found on September 20, 2017, at the base of a declining sweet chestnut tree (C. sativa) located on a small lawn (N 52°11'56.4" / E 020°59'55.9") (Fig. 1). There were individual withered leaves on the side branches of the tree, as well as characteristic cupules thickly covered with sharp spikes. There was a pavement at a distance of 0.5 m from the tree. The host was ca. 7 m high, with a circumference of 103 cm. 
On the east side, along almost the whole length of the tree trunk (from the height of 20 cm up to the top of the bole), there was an open wound 27 cm wide at the widest place (Fig. 2). The discovered basidiomata were situated directly below the wound, at a height of 3-5 cm above the ground (Fig. 3). They were semicircular and laterally attached to the trunk. The measurements of the first basidioma (length, width, thickness) were 8 cm × 7.5 cm × 3 cm, and the measurements of the second basidioma were 6.5 cm × 6.5 cm × 2.5 cm (leg./det. J. Piętka). On September 27, 2017, it was discovered that the basidiomata had been torn away from the tree trunk and were lying on the lawn in the proximity of the tree, partly dried out. Two other taxa of fungi were identified on the tree: the species Peniophora quercina (Pers.) Cooke on the declining branches, and basidiomata of a fungus representing the genus Stereum on the boughs and on the upper part of the trunk. Discussion In Poland, during severe winters, sweet chestnut trees are damaged to various degrees by subzero temperatures, but are rarely killed by them. The species is characterized by a high resprouting ability; it quickly contains the damage and regrows from the trunk. Traces of damage caused by frost (i.e., frost cracks) are observed in individuals that develop a single, clear-cut trunk [18]. The frequency of occurrence of fungal species in particular habitats may change depending on the observed climate change [23]. Climate change is not generally considered a direct threat to the majority of microfungi, which may encounter only limited obstacles against gradual expansion to the north as the climate becomes warmer. However, in the case of fungal species associated with specific host plants, their fate will depend on the reaction of those plants to the climate change [24]. Wojewoda and Karasiński [25] have stated that many European fungi have become more widespread in Poland over the last 50 years. On the Polish checklist of vascular plant species of 2002, C. sativa has the status of a cultivated species [26]. However, according to some studies, it is a locally domesticated species, spreading spontaneously as a kenophyte [27]. The growing degree of acclimatization observed in recent years, reflected in the local domestication of many tree species, is probably connected with global warming [28]. Trees weakened by urban stress are more easily infected by parasitic fungi. Such fungi penetrate their hosts via wounded sites, i.e., places where tissues have been bared, for instance, in the course of plant care procedures, where roots have been damaged during ground works, or due to other forms of damage frequent in urban trees, e.g., frost cracks [29]. In light of the above data, more occurrences of F. hepatica on C. sativa are expected to be reported in the future.
2,219
2018-06-29T00:00:00.000
[ "Environmental Science", "Biology" ]
Packaging Beautification Design Based on Visual Image and Personalized Pattern Matching — Visual image technology is widely used in the field of product art design, enriching the visual beautification design effect of products. To improve the design effect of product packaging, a personalized packaging pattern matching technology is proposed based on computer vision image technology. Firstly, based on user needs, a pattern feature extraction technology is proposed, which uses the total variation model and GrabCut model to smooth and segment the image. Secondly, an improved style transfer generative adversarial network model is proposed for transfer training between feature elements and targets. Considering the problem of insufficient detail preservation in traditional transfer models, attention layers are incorporated into the transfer model for improvement. In the pattern feature extraction experiment, the proposed model had the best pixel accuracy in Image 1. In the pattern matching experiment, the proposed model had the lowest mapping loss in both pattern combinations, with a value of 0.135 in the Zhuang brocade pattern and 0.236 in the blue and white porcelain pattern, which was superior to other models. Comparing the effect of different model pattern combinations, in the blue and white porcelain pattern combination, the proposed model had an optimal peak signal-to-noise ratio of 32.32, which was superior to other models. The proposed model has excellent application effects in packaging design beautification. The research content will provide critical technical references for e-commerce product packaging design and intelligent image processing. I. INTRODUCTION Visual image technology is a technique that processes and analyzes images through computers. This includes image recognition, image processing, and image generation. Visual image technology is widely applied in multiple fields, such as medical image analysis, security monitoring, intelligent transportation, etc. 
[1]. In product packaging design, visual image technology can be used to enhance the design effect of packaging, making it more attractive to consumers [2]. Pattern packaging design refers to the use of various pattern elements and styles on the outer packaging of products to attract consumer attention. However, traditional pattern packaging design has shortcomings, such as the inability to meet the personalized needs of different users and a lack of innovation in pattern elements, which cannot convey the connotation that the product needs to express [3]. Therefore, a packaging beautification design method based on the combination of visual images and personalized patterns is proposed. Images are processed through pattern feature extraction technology to achieve personalized matching of pattern elements and styles. The innovation of the research lies in the emphasis on considering the impact of different pattern elements on packaging design, proposing a multi-model fusion pattern feature extraction technology to effectively extract pattern features and preserve details. Secondly, an improved transfer model is introduced for pattern matching training, achieving optimization of pattern packaging design. This technology has important application value in the field of packaging. While meeting the requirements of packaging beautification design, it improves the detail retention ability of traditional transfer models. The research will drive the development of the e-commerce industry and provide new methods and ideas for the beautification design of product packaging. The research content is composed of six sections. Section I and Section II introduce the application of relevant visual images and the latest cutting-edge technologies, and discuss and analyze the application of visual image technology in fields such as image segmentation and image matching. Section III analyzes the characteristics of packaging design and proposes a feature extraction model and a pattern matching model to achieve personalized design of packaging. Section IV applies the mentioned technology to specific scenarios and assesses the performance of the proposed packaging beautification design technology in practical scenarios. Section V delves into the discussion, and finally, Section VI concludes the paper. II. RELATED WORKS Computer vision image processing is a technique that utilizes computers to process, analyze, and understand images. It is widely applied in fields such as facial recognition, image processing, and object recognition, and researchers all over the world have conducted relevant research on this. The study by Penumuru et al. aimed to propose a universal method for automatic material recognition using machine vision and machine learning techniques to enhance the cognitive abilities of material processing equipment such as robots deployed in machine tools and Industry 4.0. The study selected four common materials and prepared and processed their surface datasets. By extracting the red, green, and blue components of the three primary color model as features and applying support vector machines and other classification algorithms, the proposed method was verified to recognize different material groups [4]. The results indicated that the proposed method could be implemented in a manufacturing environment without significant modifications. Secondly, the research of Uthayakumar et al. 
focused on computer vision-based applications in wireless sensor networks. Research results showed that visual sensors generate a large amount of multimedia data, while image transmission consumes considerable computing resources. To address this issue, the study proposed an image compression model using neighborhood related sequences. This algorithm performed bit reduction operations and further compressed the image through a codec. The proposed NCS algorithm improved the compression performance of sensor nodes and reduced energy utilization while maintaining high fidelity. Through experimental evaluation on test images, the results showed a better compromise between compression efficiency and reconstructed image quality [5]. Finally, Huang et al.'s research aimed to raise the real-time performance of image segmentation. The study introduced a fruit fly optimization model into image segmentation and obtained a fused image processing technique. By using optimization strategies to search for the optimal segmentation threshold, the model could converge faster and consume less time without sacrificing segmentation accuracy. The research results indicated that this method significantly reduced segmentation time while keeping the segmentation effect basically unchanged [6]. With the development of visual image technology, it has important applications in fields such as image design, segmentation, and matching. Agarwal et al. found that, with the development of image editing tools, image forgery activities are on the rise. To protect the authenticity of images, a deep learning-based technology for detecting copy-move image forgery was proposed. This technology involved processes such as segmentation, feature extraction, dense depth reconstruction, and ultimately identifying tampered areas. Finally, the technology was applied to specific scenarios, where it showed good image visual processing effects [7]. Li et al. 
found that effective image segmentation in image design faced challenges, and proposed a convolutional neural network that combines an attention mechanism (AM). The network structure consists of a basic feature layer and an attention module, which is utilized to capture global information and enhance features. The experimental outcomes showed that this method was superior to other existing mainstream image processing methods and had fewer parameters, improving the application of visual technology in related fields [8]. Chen et al.'s research focused on the importance of image matching in fields such as augmented reality, synchronous localization, and visual design. The study improved the accuracy of feature matching in visual design by incorporating instance-aware semantic segmentation into visual feature matching for corner detection and rotation. The research used pixel-level object segmentation and semantic information constraints to perform feature matching on adjacent images. The research findings indicated that this method improved the accuracy of feature matching and met the requirements of visual design [9]. Hu et al.'s research was dedicated to the study of image segmentation techniques. Accordingly, a parallel deep learning algorithm with a mixed AM was proposed to enhance the effectiveness of pattern design work. This algorithm extracted pattern feature information from preprocessed images and input the images into a mixed AM and a densely connected convolutional network module. The mixed AM consists of a spatial AM and a channel AM. The experimental outcomes indicated that this technology can significantly improve the image processing efficiency of design work, while also improving the processing effect of image data [10]. In summary, computer vision image technology has important applications in many fields. With advanced visual image and machine learning techniques, problems such as image editing, segmentation, and matching in image design can be effectively solved. However, there are relatively few applications of visual image technology in the field of product appearance. In this regard, applying visual image technology to the packaging beautification design process provides relevant technical guidance for product packaging design and beautification. III. CONSTRUCTION OF PACKAGING BEAUTIFICATION MODEL BASED ON PERSONALIZED PATTERN MATCHING This section mainly focuses on the research of product packaging beautification design, proposing a pattern feature extraction model and a personalized pattern matching model for product packaging design, and constructing the relevant models separately. A. Extraction of Personalized Packaging Pattern Features In recent years, with the continuous improvement of people's quality of life and consumption ability, pursuing personalized consumption has become a social development trend. Personalized product packaging design can not only impress people, but also enhance the competitiveness of the product with personalized patterns. Therefore, in response to the growing demand for personalized packaging appearance, a personalized packaging pattern matching technology based on visual image technology is proposed [11]. To meet user needs, it is necessary to fully consider pattern design elements. Taking blue and white porcelain products as a case study, in the design of packaging patterns for blue and white porcelain, it is necessary to extract target features based on consumer needs and product attributes [12]. The process of product feature extraction technology is shown in Fig. 1. 
According to the technical process in Fig. 1, feature extraction covers the pattern, color, and weave of the design. To meet the personalized design requirements of product packaging, these features must be processed accordingly: if the extracted pattern features contain a large amount of weave texture, the personalized information of the pattern itself becomes harder to process. Therefore, a Relative Total Variation (RTV) model is adopted to optimize feature extraction. The RTV model smooths the image texture and highlights the main feature details that are needed [13]. In the smoothing process, any point of the product feature image is denoted $P$, and the relative total variation at $P$ is computed as in Eq. (1):

$RTV(P) = \sum_{q \in N_P} w_{pq}\,| I_P - I_q |$   (1)

In Eq. (1), $N_P$ is the set of points adjacent to $P$, $w_{pq}$ is the weight between point $P$ and its neighbor $q$, $I_P$ and $I_q$ are the grayscale values of $P$ and $q$, and $\lambda_r$ is the parameter that controls the degree of smoothness when this measure is minimized.

After smoothing, the image must also be segmented in order to better separate the different background features [14]. GrabCut is used as the segmentation technique to perform local segmentation on the target image; the specific process is shown in Fig. 2. In the target feature map, a bounding rectangle is defined for each target: the region outside the rectangle is set as the background $T_B$, and the region inside the rectangle is used as the foreground $T_F$. This is expressed in Eq. (2):

$T_B \cap T_F = \varnothing$   (2)

In Eq. (2), $\varnothing$ represents the empty set. Every pixel in $T_B$ is initialized with the label $\alpha_n = 0$ (background pixel), and every pixel in $T_F$ with the label $\alpha_n = 1$ (possible target pixel). The foreground and background regions are each clustered into $K$ classes, and a Gaussian Mixture Model (GMM) is constructed for the foreground and the background. The three color components of a target pixel $n$ are assigned to the GMM component that fits them best, as shown in Eq. (3):

$k_n = \arg\min_{k_n} D_n(\alpha_n, k_n, \theta, z_n)$   (3)

In Eq. (3), $D_n$ represents the Gaussian component term, $\theta$ represents the parameters of the GMM, and $z_n$ is the image data of pixel $n$. For the given image data $Z$, the GMM parameters are trained as shown in Eq. (4):

$\min_{\theta} U(\alpha, k, \theta, z)$   (4)

In Eq. (4), $U(\cdot)$ represents the GMM parameter-learning term. The max-flow/min-cut strategy is then used to segment the pixels by minimizing the energy, as shown in Eq. (5):

$\min_{\alpha} \min_{k} E(\alpha, k, \theta, z)$   (5)

In Eq. (5), $E$ represents the energy value. Eq. (3) and Eq. (5) are repeated until convergence, which yields the image segmentation result.
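The GrabCut iteration in Eqs. (2)-(5) can be exercised directly through OpenCV, which performs the alternating GMM fitting and min-cut steps internally. The sketch below is a minimal illustration under that assumption; the rectangle coordinates, file name, and iteration count are placeholders rather than values from the study.

```python
# Minimal sketch of the GrabCut-based local segmentation step (cf. Eqs. (2)-(5)).
import cv2
import numpy as np

def grabcut_foreground(image_bgr, rect, n_iter=5):
    """Segment the region inside rect = (x, y, w, h) as foreground T_F."""
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)   # filled in by cv2.grabCut
    bgd_model = np.zeros((1, 65), dtype=np.float64)        # background GMM parameters (theta)
    fgd_model = np.zeros((1, 65), dtype=np.float64)        # foreground GMM parameters (theta)
    # Alternating GMM fitting (Eqs. (3)-(4)) and min-cut energy minimization (Eq. (5))
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model, n_iter, cv2.GC_INIT_WITH_RECT)
    # Keep definite and probable foreground labels (alpha_n = 1)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    return image_bgr * fg[:, :, None]

# Example usage (hypothetical file name and rectangle):
# img = cv2.imread("pattern.jpg")
# segmented = grabcut_foreground(img, rect=(50, 50, 400, 300))
```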
Considering the issue of color differences in feature extraction, the Otsu segmentation method (OTSU) is adopted to handle the differences between extracted features. The idea of OTSU is to segment a single feature: the target is divided into foreground and background through its grayscale distribution, and a black-and-white division of the color gamut is obtained by searching the gray levels for the OTSU threshold [15]. OTSU image segmentation is shown in Fig. 3. The grayscale image is defined as $F$, an $M \times N$ matrix with pixel values in $(0, 255)$, and $n_i$ is the number of pixels at gray level $i$. The probability of gray level $i$ is given in Eq. (6):

$p_i = n_i / (M \times N)$   (6)

In image segmentation, the foreground/background segmentation threshold is set to $k_l$. According to this threshold, the pixels split into those not greater than $k_l$ and those greater than $k_l$, with class proportions as in Eq. (7):

$\omega_0(k_l) = \sum_{i=0}^{k_l} p_i, \quad \omega_1(k_l) = \sum_{i=k_l+1}^{255} p_i$   (7)

For Eq. (7), the relationship in Eq. (8) holds:

$\omega_0(k_l) + \omega_1(k_l) = 1$   (8)

The between-class variance is then used to evaluate the segmentation, as shown in Eq. (9):

$\sigma^2(k_l) = \omega_0(k_l)\,(\mu_0(k_l) - \mu)^2 + \omega_1(k_l)\,(\mu_1(k_l) - \mu)^2$   (9)

In Eq. (9), $\mu_0$ and $\mu_1$ are the mean gray levels of the two classes, $\mu$ is the mean gray level of the whole image, and $\sigma^2$ is the between-class variance. Rearranging the variance expression gives Eq. (10):

$\sigma^2(k_l) = \omega_0(k_l)\,\omega_1(k_l)\,(\mu_0(k_l) - \mu_1(k_l))^2$   (10)

All gray levels are traversed to obtain the threshold $k_l$ that maximizes the between-class variance, and the image is then re-segmented by binarization. The result is shown in Eq. (11):

$g(x, y) = 255$ if $f(x, y) \ge k_l$, and $g(x, y) = 0$ otherwise.   (11)

The extraction of pattern features is the key to packaging beautification design: the main pattern elements, including patterns, colors, and weave, must be extracted from the target to provide the basic elements for the subsequent packaging beautification design.

B. Construction of a Personalized Pattern Transfer Model Based on Packaging Beautification

In product packaging beautification design, the extracted style features must be combined with the target product so that the result both preserves the product's style and meets the visual aesthetic design requirements. To match the elements in the pattern to the target product effectively, a personalized pattern transfer model based on an improved GANILLA (Generative Adversarial Networks for Image-to-Illustration Translation) is proposed; by fusing the features of patterns of different styles with the target, personalized packaging matching is achieved [16]. The GANILLA model was proposed by Samet Hicsonmez et al. in 2020 as an image style transfer learning model. In its feature processing, the content details of the original image are preserved as much as possible while different style features are transferred. The structural framework of the GANILLA model is shown in Fig. 4.

As shown in Fig. 4, the GANILLA model uses convolutional layers for downsampling and transposed convolutional layers for upsampling, with concatenated residual layers in between; the transposed convolution can be replaced by an upsampling operator. Given the pattern element features, the extracted elements are combined with different styles to generate new design patterns [17].

The pattern features are the source data, and the product packaging data is the target data. The GANILLA generator samples feature maps through skip connections; to better preserve the transferred pattern features, upsampling and skip connections are used to merge high-level and low-level features, improving the quality of the composed image [18]. The distance between the synthesized image and the real image is defined as $L_1$, where $N$ is the number of samples, $y$ denotes the predicted pixels, and $x$ the pixels of the real image. The comparison between the synthesized and real images is shown in Eq. (12):

$L_1 = \frac{1}{N\,W\,H\,C} \sum | y - x |$   (12)

In Eq. (12), $W$, $H$, and $C$ respectively represent the width, height, and number of channels of the image. The discriminator adversarial loss is then calculated, as shown in Eq. (13):

$L_{adv}(G, D) = E_x[\log D(x)] + E_{\tilde{x}}[\log(1 - D(G(\tilde{x})))]$   (13)

In Eq. (13), $\tilde{x}$ is the input source image, $G$ is the generator that produces the output target, $D$ is the discriminator, and $E$ denotes the expected loss value. The joint loss is then calculated, as shown in Eq. (14):

$L = \lambda_1 L_1 + \lambda_2 L_{adv}$   (14)

In Eq. (14), $\lambda_1$ and $\lambda_2$ are both loss optimization parameters.
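The loss terms in Eqs. (12)-(14) can be written compactly in PyTorch. The following is a minimal sketch, assuming a generator G and a discriminator D that outputs probabilities in (0, 1); the weights lambda1 and lambda2 are illustrative defaults, not values reported in the paper.

```python
# Minimal PyTorch sketch of the reconstruction, adversarial, and joint losses
# in Eqs. (12)-(14); G, D, and the lambda weights are assumed placeholders.
import torch

def l1_reconstruction_loss(fake, real):
    # Eq. (12): mean absolute pixel difference over N x W x H x C
    return torch.mean(torch.abs(fake - real))

def adversarial_losses(D, real, fake):
    # Eq. (13): standard GAN discriminator loss; the generator uses the
    # non-saturating counterpart. D is assumed to output probabilities in (0, 1).
    d_loss = -(torch.log(D(real) + 1e-8).mean()
               + torch.log(1.0 - D(fake.detach()) + 1e-8).mean())
    g_loss = -torch.log(D(fake) + 1e-8).mean()
    return d_loss, g_loss

def joint_generator_loss(D, real, fake, lambda1=10.0, lambda2=1.0):
    # Eq. (14): weighted sum of the reconstruction and adversarial terms
    _, g_adv = adversarial_losses(D, real, fake)
    return lambda1 * l1_reconstruction_loss(fake, real) + lambda2 * g_adv
```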
In the actual pattern transfer, although the GANILLA model adapts well to pattern texture features, it still falls short in handling individual feature details, for example losing detail when texture features are transformed. Improvements are therefore made in two respects: feature analysis and model performance. For the analysis of pattern features, squeeze-and-excitation (SE) attention modules are added to the residual block layers to increase the model's attention to key positions in the image; the added attention modules improve the acquisition of useful features while suppressing useless ones [19]. The structure of the SE block module is shown in Fig. 5. In terms of model performance, since adding attention modules increases the parameter count and computational complexity, a parameter-compressed residual block is introduced to reduce the network parameters and the floating-point computation of the model.

The generative adversarial loss of the improved GANILLA model involves two generators and two discriminators. First, the adversarial loss is applied to the mapping networks, and the target mapping relationship is expressed as Eq. (15):

$G^{*}, F^{*} = \arg\min_{G, F} \max_{D_X, D_Y} L(G, F, D_X, D_Y)$   (15)

In Eq. (15), $G$ and $F$ are the two mapping generators, and $D_X$ and $D_Y$ are the discriminators corresponding to the $X$ and $Y$ domains. The image $G(X)$ generated by $G$ continuously approaches $Y$, enhancing the similarity between the two sides of the mapping, while $D_Y$ distinguishes the real targets $y$ from the generated images $G(X)$. At the same time, the minimization over the mapping $G$ counters the discriminator; through adversarial training, the mapping relationship between $G$ and $D$ is learned [20]. Finally, the loss functions are combined, and the loss optimization parameter $\lambda$ is introduced to weight the pixel-level consistency with the real image $x$. The target loss is shown in Eq. (16):

$L(G, F, D_X, D_Y) = L_{GAN}(G, D_Y, X, Y) + L_{GAN}(F, D_X, Y, X) + \lambda L_{cyc}(G, F)$   (16)

In Eq. (16), the larger the value of the loss optimization parameter $\lambda$, the more tightly each generator-discriminator pair is coupled.

Through the above techniques, the extracted personalized pattern elements can be style-transferred to achieve the personalized design of packaging patterns.
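The SE attention module added to the residual layers follows the standard squeeze-and-excitation design. A minimal PyTorch sketch is given below; the reduction ratio of 16 is a common default and is assumed here, since the paper does not state it.

```python
# Minimal sketch of an SE attention block inside a residual block, as described
# in the text; channel counts and the reduction ratio are illustrative.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: global average pooling
        self.fc = nn.Sequential(                   # excitation: channel re-weighting
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)                # (B, C) channel descriptor
        w = self.fc(w).view(b, c, 1, 1)            # per-channel attention weights
        return x * w                               # re-scale the feature maps

class SEResidualBlock(nn.Module):
    """Residual block with an SE attention module attached to its body."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.se = SEBlock(channels)

    def forward(self, x):
        return x + self.se(self.body(x))
```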
IV. ALGORITHM MODEL SIMULATION TESTING

This section reports performance tests of the two proposed models to evaluate their practical application. The main evaluation indicators are pixel accuracy (PA), signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and the loss.

A. Experimental Analysis of Pattern Feature Extraction

To evaluate the proposed packaging beautification design model, experimental testing was conducted on a Windows 10 64-bit platform. The processor was a 64-core Xeon processor, the graphics card was an NVIDIA RTX 4060 Ti, and the memory was 64 GB. The experimental data were sourced from an integrated packaging graphic design website and comprised 15,564 image feature items, covering pattern, color, weave, and other features. The initialization parameters of the experimental model are listed in Table I. PA and SNR were used as evaluation indicators. Twelve patterns were selected for feature extraction, and some pattern samples are shown in Fig. 6.

In actual feature extraction, the settings of $\lambda_r$ and $\sigma$ directly affect how image detail and texture are processed. Therefore, feature extraction under different parameters was compared, as shown in Fig. 7. Fig. 7(a) and Fig. 7(b) show the comparison results for $\lambda_r$ and $\sigma$, respectively. When $\lambda_r$ was set to 0.01, the model's training image loss was the lowest, at 0.012. Comparing the $\sigma$ settings, when $\sigma$ was set to 2, the training loss was the lowest and convergence was reached fastest. Therefore, in the subsequent experiments, $\lambda_r$ was set to 0.01 and $\sigma$ to 2.

Image 1 and Image 2 were selected for feature extraction testing, and the test results are shown in Fig. 8. Fig. 8(a) shows the feature extraction results for Image 1. When the number of image elements was 120, the PA of the proposed model was the highest, at 0.903. In comparison, both OTSU and GMM showed a drop in feature extraction performance once the number of pattern elements exceeded 75; at 120 image elements, the PA of OTSU and GMM was 0.689 and 0.403, respectively. Fig. 8(b) shows the feature extraction results for Image 2. The proposed model reached its highest PA of 0.909 when the number of pattern elements was 120, while the PA of the GMM model gradually decreased once the number of pattern elements exceeded 100; the highest PA of OTSU and GMM was 0.786 and 0.526, respectively.

Finally, the SNR was used to reflect the quality of feature extraction for the different models; the clustering feature extraction results are shown in Fig. 9. Fig. 9(a) shows the feature extraction quality results for Image 1. As the number of iterations increased, the image quality of all three models continued to improve. The best-performing model was the proposed one, with an optimal SNR of 23.56 at convergence, followed by OTSU with 20.65 and GMM with 17.56. Fig. 9(b) shows the feature extraction quality results for Image 2. Before 40 iterations, the GMM model extracted pattern features better than the OTSU model; as training proceeded, the OTSU model retained more black-and-white detail and surpassed GMM. Overall, the proposed model had the best feature extraction performance, followed by OTSU and then GMM; their optimal SNRs, from high to low, were 25.65, 22.86, and 19.98.
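The evaluation indicators used in this section (PA, SNR, and, later, PSNR) are not written out explicitly in the text; the sketch below assumes their common definitions.

```python
# Minimal NumPy sketch of the evaluation indicators, assuming standard definitions.
import numpy as np

def pixel_accuracy(pred_mask, gt_mask):
    """PA: fraction of pixels whose predicted label matches the ground truth."""
    return np.mean(pred_mask == gt_mask)

def snr(reference, test):
    """SNR in dB: signal power relative to the power of the reconstruction error."""
    ref = reference.astype(np.float64)
    noise = ref - test.astype(np.float64)
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2))

def psnr(reference, test, peak=255.0):
    """PSNR in dB for 8-bit images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```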
B. Experimental Analysis of Personalized Packaging Pattern Matching

In the personalized packaging matching experiment, the selected pattern features were used as experimental data, and the proposed improved GANILLA model was used as the pattern matching model. Meanwhile, the Cycle-Consistent Generative Adversarial Network (CycleGAN) and the original GANILLA were introduced as experimental benchmarks. In the parameter settings, the batch size was 1, the optimization algorithm was Adam, and the initial step size was 0.0002; the experiments were run on the PyTorch platform. The mapping loss (Loss) and PSNR were used to reflect the quality of the images reconstructed by the models. Two styles, the Zhuang brocade pattern and the blue and white porcelain pattern, were selected for packaging matching.

Fig. 10 shows the Loss results of the different models. Fig. 10(a) shows the packaging matching test results under the Zhuang brocade pattern. In the early stage of testing, both the CycleGAN model and the GANILLA model fluctuated significantly; on the whole, it is likely that these two models could not accurately recognize the colors of the pattern during early training, which reduced the image transfer quality. In comparison, the proposed model fluctuated less during training and had lower Losses. When GANILLA, CycleGAN, and the improved GANILLA converged, their Losses were 0.542, 0.512, and 0.135, respectively. Fig. 10(b) shows the packaging matching test results under the blue and white porcelain pattern. Because the blue and white porcelain pattern contains more feature elements, it further tested the models' ability to recognize features. Overall, the proposed improved GANILLA model performed best, with a Loss of 0.236 at convergence, while the CycleGAN and GANILLA models had Losses of 0.956 and 12.35 at convergence, respectively. Finally, the PSNR was used to reflect the quality of pattern matching, as shown in Fig. 11.
V. DISCUSSION

In recent years, with the rapid development of the e-commerce industry, product packaging has played an increasingly important role in attracting consumer attention and increasing sales. The diversification and personalization of packaging design have become important strategies in brand competition. Packaging beautification design based on the combination of visual images and personalized patterns is an innovative design method proposed to meet this demand, and it is discussed in depth here.

Visual image technology is an interdisciplinary technology that uses computer vision and image processing techniques to analyze, process, and apply images. It mainly includes image acquisition, processing, analysis, and application. In the field of design, visual image technology has been widely applied in product, web, and advertising design, among other areas. By processing and analyzing images, functions such as image enhancement, restoration, segmentation, and detection can be achieved, thereby improving design effectiveness and user experience. Packaging beautification design refers to designing and adjusting the appearance, pattern, color, form, and other aspects of product packaging to meet the aesthetic needs of consumers and the brand image, attract the attention of target consumers, and enhance the market competitiveness of the product. It covers various elements, such as the selection and design of packaging patterns, color matching, material selection, and the position and size of patterns. By cleverly utilizing these design elements, product packaging can be made more attractive and unique, and can effectively convey the brand's value and characteristics.

This study proposed a visual image-based packaging beautification design technology, which used image segmentation and image transfer techniques to achieve personalized and efficient development of packaging images. In the pattern feature extraction experiment, comparing feature extraction under different parameters showed that the proposed model achieved the best training loss and convergence when $\lambda_r$ was set to 0.01 and $\sigma$ to 2. Meanwhile, compared with the traditional OTSU and GMM models, the proposed model performed better in PA and SNR and had higher feature extraction quality. This indicates that visual image technology has significant advantages over similar techniques in processing packaging image data, laying the foundation for subsequent packaging beautification design. In the personalized packaging pattern matching experiment, comparing the proposed improved GANILLA model with the CycleGAN and GANILLA models showed that the improved GANILLA model achieved better Loss and PSNR results in the packaging matching of Zhuang brocade and blue and white porcelain patterns. This means that the proposed model can transfer the colors and features of patterns more accurately during pattern matching, improving the quality and effect of the matching.
It can be seen that, by using visual image technology, image features can be extracted and analyzed accurately, providing a scientific basis and guidance for packaging beautification design. The proposed technology also has significant advantages compared to similar techniques: in the experiments on pattern feature extraction and pattern matching, it showed excellent results. Therefore, the proposed technology can make packaging design more creative and personalized, improving the attractiveness and competitiveness of packaging.

VI. CONCLUSION

Product packaging design is one of the important means of showcasing product functions and concepts, and its effectiveness has a significant impact on product competitiveness. Traditional packaging design faces problems such as long design cycles and uniform, repetitive designs. An intelligent packaging pattern beautification technology was therefore proposed. Firstly, based on product positioning, a pattern feature extraction method was proposed that preserves the main features of pattern elements through pattern smoothing, segmentation, and binarization. Secondly, a pattern matching technique was proposed that uses the GANILLA model to train on the features and transfer the pattern features; at the same time, an attention mechanism was introduced to improve the model and enhance image details. In the feature extraction experiment on Image 1, when the number of image elements was 120, the PA of OTSU, GMM, and the proposed model was 0.689, 0.403, and 0.903, respectively. In the SNR test on Image 2, the optimal SNRs of the proposed model, OTSU, and GMM were 25.65, 22.86, and 19.98, respectively. In the packaging pattern matching experiment, the Losses of the different models were compared: under the Zhuang brocade pattern, the Losses of GANILLA, CycleGAN, and the improved GANILLA at convergence were 0.542, 0.512, and 0.135, respectively. Finally, the matching effects of the different models were compared: in the PSNR test on blue and white porcelain patterns, the improved GANILLA, CycleGAN, and GANILLA reached best PSNRs at convergence of 32.32, 29.32, and 27.03, respectively. The proposed model thus has excellent application effects in packaging beautification design. However, the study did not provide personalized designs for different target groups, and the technology also has limitations: its image processing efficiency is relatively low. In the future, the model parameters can be optimized to improve image processing efficiency, and the image data can be preprocessed to improve the application effect of the technology.
Packaging Beautification Design Based on Visual Image and Personalized Pattern Matching

Deli Chen, School for Creative Studies, Chongqing City Vocational College, Chongqing, 402160, China

Abstract-Visual image technology is widely used in the field of product art design, enriching the visual beautification of products. To improve the design effect of product packaging, a personalized packaging pattern matching technology is proposed based on computer vision image technology. Firstly, based on user needs, a pattern feature extraction technique is proposed, which uses the relative total variation model and the GrabCut model to smooth and segment the image. Secondly, an improved style transfer generative adversarial network model is proposed for transfer training between feature elements and targets. Considering the problem of insufficient detail preservation in traditional transfer models, attention layers are incorporated into the transfer model for improvement.

Fig. 9. Feature extraction quality (SNR) of Image 1 (a) and Image 2 (b) for the three models.
Fig. 10. Loss results of the different models: (a) Zhuang brocade pattern, (b) blue and white porcelain pattern.
Fig. 11. PSNR results of packaging matching: (a) Zhuang brocade pattern, (b) blue and white porcelain pattern.
7,359.8
2024-01-01T00:00:00.000
[ "Computer Science" ]
Initial-Stage Structural Change of Si(100) Surface Induced by Exposure to Ethylene Gas and Annealing The initial stage of the structural change in a clean Si(100)-2 × 1 surface induced by annealing at 640◦C and exposure to ethylene gas was studied by reflection high-energy electron diffraction (RHEED) and scanning tunneling microscopy (STM). The RHEED pattern included SiC spots and STM images revealed SiC particles on the surface, which confirmed the carbonization of the Si surface. At a different area of the same sample, the RHEED pattern showed both twice the periodicity of surface spots which indicates a flat surface region and transmitted bulk Si spots which indicates the initial stage of voids. The STM images of the flat region showed a 2 × n (6 ≤ n ≤ 12) reconstruction. The topography of the 2 × n STM image depended on the bias voltage. The 2 × n reconstruction was clearly induced by carbon impurities; STM images were similar to those in previous studies in which structures were formed by various kinds of impurities or contaminations. [DOI: 10.1380/ejssnt.2006.285] I. INTRODUCTION Silicon carbide (SiC) is a wide-gap (ca. 2.35 eV) semiconductor that has potential for technological applications [1][2][3][4]. One conventional approach to the growth of 3C-SiC(100) films on Si(100) is chemical vapor deposition (CVD) using exposure to hydrocarbons in tandem with surface annealing [5]. However, three crucial problems with epitaxial SiC formation on the Si(100) surface have been reported. The first problem is interfacial voids occurring in the growth, observed by transmission electron microscopy (TEM) [6][7][8][9][10][11][12][13][14]. This phenomenon was discovered by Mogab and Leamy [15] upon cross-sectional observation of a reacted Si sample by scanning electron microscopy (SEM). The activation energy of the Si atom diffusion in bulk was estimated to be 5-7 eV [16][17][18]. Moreover, it was estimated that the number of Si atoms in the produced SiC film was almost the same as that originally in the voids [19]. Sholtz, et al. [14] showed how to prevent voids in an SiC layer less than 3 nm thick. In order to prevent void formation at thickness greater than 5 nm, it becomes necessary to supply Si atoms in the carbonization process [20] and to add a surfactant such as germanium [10,21,22] to stabilize the interface. The second problem is twin formation [12-14, 23, 24]. Prevention of the twin formation is necessary for the production of high-quality crystalline film. In the case of metals, it is known that crystal growth at low temperatures leads to twin formation. In the case of SiC growth on silicon, it is impossible to eliminate twin formation by annealing alone, since the melting point of silicon is lower than that of SiC. In order to prevent twin formation of 3C-SiC at low temperatures, it is critical to supply a surfactant to prevent the cis-type conformation on the -(Si-C) 3 -rings in the crystal. In the present work, structural changes in clean Si(100)-2 × 1 surfaces reacted with ethylene gas were studied with RHEED and STM. More specifically, this investigation focused on the structural change from clean 2 × 1 to 2× n. Although the reported STM images of a carbon-induced 2 × n structure [27,28] seem similar to those of structures without carbon induction [47][48][49][50][51][52][53][54][55][56][57][58], this is not evidence that the structures themselves are the same; differences in bias voltage gave rise to some uncertainty. 
In fact, our results confirmed the bias energy dependence of STM images. II. EXPERIMENTS All the experiments were conducted in an ultrahigh vacuum (UHV) chamber containing an RHEED electron gun (Biemtron, RHG-303), a phosphor screen, and an STM (Omicron, UHV-SPM). The base pressure of the UHV chamber was less than 2 × 10 −8 Pa. The residual gas in the UHV chamber was checked by mass spectrometer, and more than 98% of the residual gas was hydrogen. Details of the apparatus are reported elsewhere [84,85]. Si samples were cut from polished p-type Si(100) wafers with a thickness of 0.4 mm and a resistivity of 10-20 Ωcm. After ultrasonic cleaning for 5 minutes in ethanol, a sample was mounted on a molybdenum holder with tantalum clips. In order to prevent metal contamination of the sample, the sample holder contained no stainless steel. No metal materials or particles of any kind ever touched the samples; teflon tweezers were used to pick them up. The mounted sample was loaded into the vacuum chamber, pre-baked at 200 • C for 2 hours, and returned to room temperature, and the base pressure of the chamber was less than 1 × 10 −8 Pa. The sample was then flushed at 1250 • C for several seconds several times to obtain a clean surface, as checked by RHEED. The vacuum pressure during this annealing procedure was maintained below 2 × 10 −8 Pa. The temperature of the sample during the annealing was measured with an infrared radiation thermometer (Chino). After observation of the clean 2 × 1 surface by RHEED and STM, the sample was exposed to ethylene gas for 3 minutes at 5×10 −5 Pa. During this exposure, the sample was annealed at 640 • C. After the sample had cooled to room temperature, the surface structure was checked again with RHEED and STM. The acceleration energy of the RHEED electron gun was 30 keV. The diameter of the RHEED screen was 200 mm, and an ICF253 viewing port for the screen was used in this study. When the acceleration energy was higher than 15 keV, the relative intensity of Kikuchi patterns [86] increases. A rotation sector with the pattern of θ = √ r in the polar coordinates was mounted in front of the screen to compensate for the intensity of the RHEED patterns. The typical rotation speed of the sector was 60 rpm. The distance between the screen and the rotation sector was 100 mm, and the distance between the sector and a digital camera (Olympus, C-3040) was 500 mm. Typical exposure time with the digital camera was 4-8 s at f = 8 − 10 with an ISO of 80-100. More details about the rotation sector were described elsewhere [87]. All STM images were taken using a constant current mode. A cut Pt/Ir wire was used for the STM tip. The scanning speed and the tunneling current of all images were 250 nm/s and 20 pA, respectively. It is noted that the vector scanning system allowed us to rotate the scanning direction. No correction of the obtained STM images, including drift compensation, was performed. III. RESULTS First, the RHEED pattern and STM image of the clean Si(100)-2 × 1 surface were examined. Figure 1 dent angle by 1.0 • , almost satisfying the 200 Bragg condition, the intensity of the surface spots was maximized, as shown in Fig. 1(b). Careful observation of the spots along the half Laue zone revealed kinematically forbidden extra spots; the RHEED pattern seems to indicate a 2 × 2 structure [88]. Because a slight change in the incident direction continuously moved the extra spots, they were determined not to be bulk spots but surface spots [89]. 
The extra spots are attributed to double diffraction from the 2 × 1 and 1 × 2 domains [88]. Figure 1(c) shows a typical STM image of the 2×1/1×2 double-domain structure with some defects, which is in good agreement with a previous study [90]. The whole area of the sample surface was checked by RHEED, and the obtained RHEED patterns were the same as those shown in Figs. 1(a) and (b). After exposing the surface at 640 • C to ethylene gas at a pressure of 5 × 10 −5 Pa for 3 minutes, SiC spots were observed on the RHEED pattern. Figure 2(a) shows the RHEED pattern, in which the intensity of the 2 × 1 surface spots decreased, and two SiC spots appeared. No- tably, the SiC transmission spots disappeared when the incident angle was increased, a fact which agrees with findings in a study by Miki, et al. [77,78]. The contrast of the Kikuchi pattern was blurred compared with that of the clean surface, which indicates that electrons emerging from the bulk were scattered by the surface with increasing roughness, though this explanation is not quantitative. In the same area of the RHEED observation, SiC particles with a diameter of several nanometers were observed in the STM image, as shown in Fig. 2(b). The average size of the SiC particles shown in Fig. 2(b) is 15 nm, which is in agreement with our previous study [91]. In a different area, 3 mm distant from the area shown in Fig. 2, no SiC spots were recognized and the contrast of the Kikuchi pattern was clearer. Figure 3(a) shows a typical RHEED pattern, in which both bulk Si spots and 2-times periodic surface spots appeared. Although the transmitted bulk spots indicate that rough regions, corresponding to voids, are in the observed area, the 2times periodic surface spots and the clear Kikuchi pat- tern indicates that flat regions are also in the observed area. Spots and streaks corresponding to n-times periodic structure were slightly appeared on the RHEED pattern, which were too subtle to recognize on the photo shown in Fig. 3(a). Though the rough regions were too bumpy to observe with STM, a flat region was observed. Figure 3(b) shows the STM image at the flat region near voids. The STM image shows more than fifty steps run along the [001] direction, and the average step width is 15 nm. Magnified STM images at various sample bias voltages for the same area as shown in Fig. 3, are shown in Fig. 4. At sample bias voltages from 2 V to 1.25 V, stripes with the distance corresponding to the number n of the 2 × n structure can be recognized. When the sample bias voltage was decreased to 1 V, the morphology of the observed image changed. Dark patches appeared on the image, and the breadth of the patches corresponds to the number n of the 2 × n structure. Using our STM apparatus, it was experimentally difficult to obtain STM images at an absolute value of bias voltage lower than 1 V since the tip hits the surface at lower voltages, even decreasing the feedback setting of the tunneling current. At a sample bias voltage from -1 V to -1.25 V, the same dark patch seen at the bias voltage of 1 V was observed. At sample bias voltages from -1.5 V to -2 V, the surface morphology gradually changed and dark lines appeared. Though the direction of these dark lines appeared to be random and differed from those at 2 V, it was confirmed that the pseudo-random features in the STM images had the same n-times periodicity as those at 2 V by changing the contrast of the images and also from Fourier-transformed power spectra. 
From the STM image at the sample bias voltage of 2 V, the frequency of the distances was estimated. Each estimated distance derives from the value n. Thus, the frequency of the value n was estimated. Figure 5 shows the histogram of the value n. The value n distributes from 6 to 12, and the average is 9. The histogram is in fairly good agreement with findings in a previous study [48]. A. SiC particle formation In this section we discuss the formation of SiC particles. First, the adsorption of ethylene molecules on the Si(100) surface is discussed. Yoshinobu, et al. [92,93] studied the adsorbed state of ethylene on the Si(100) surface with electron energy loss spectroscopy (EELS), lowenergy electron diffraction (LEED), and Auger electron spectroscopy (AES). According to their work, chemisorption of the unsaturated hydrocarbons occurs even at the relatively low temperature of 80 K. By heating at 650 K, ca. 40% of ethylene is desorbed, while the remainder is decomposed to CH, CH 2 , and SiH species [92,93]. By heating up to 1000 K, the H adatoms are completely desorbed, and SiC is formed on the Si(100) [92,93], outcome of which is in good agreement with results of our previous study [8]. Based on the temperature programmed desorption (TPD) study by Clemen, et al. [94], the activation energy of ethylene desorption is 38 kcal/mol, and for the di-σ ethylene-Si 2 complex, each Si-C bond has a strength of ca. 73 kcal/mol. On the other hand, Bozso, et al. [95] studied the reaction of Si(100) with ethylene using a molecular beam source with X-ray photoelectron spectroscopy (XPS), EELS, and AES. Based on studies of the characteristic bulk-and surface-plasmon-loss features in the SiC thin film, they showed a surface aggregation from bulk Si on the top layer of the growing SiC film [95]. Their result is in good agreement with the photoelectron diffraction (PED) studies [5,26,[36][37][38][39][40]. Shimomura, et al. [96] used STM and PED to study the ethylene-chemisorbed Si(100) surface without annealing during and after chemisorption. At ethylene coverage of ca. 0.5 ML, p(2 × 2) and c(4 × 2) structures were observed by STM, and the PED results shows that the chemisorbed carbon atoms are on the outermost surface [96]. Butz and Lüth [32] observed the SiC nuclei at a monolayer of carbon by STM. The diameter of the nuclei ranged from 5 to 20 nm [32], which is in agreement with our previous study [8]. De Crescenzi, et al. [97] showed STM images of the aggregated SiC particles at a higher exposure of acetylene (1.0 − 2.5 × 10 −5 Torr for 10 min), combined with annealing at 650 • C during exposure. The size of the particles ranged from 30 to 40 nm [97]; they were larger than those in our previous study [91]. In our previous study [91], we observed aggregates comprised of SiC particles with an average size of 17 nm. The aggregate indicates that SiC particles migrate to the surface during the annealing process. The phenomenon of particle diffusion during bulk solidification is generally known as 'separation theory' [98]. Kitabatake [99] simulated the heteroepitaxial growth of SiC on Si(100), elucidating the mechanisms of carbonization of 3C-SiC/Si(100) as the shrinkage of the [110] row of Si lattice atoms with C adatoms. The typical crosssectional transmission electron microscope (XTEM) image supported the simulation results [99]. However, not all the SiC/Si(100) interfaces are ordered, as shown in the XTEM studies. Scholz, et al. [14] showed the disorder of the interface over several atomic layers. 
Moreover, the effect of two-dimensional shrinkage on epitaxial growth should be considered [100]. Not the hard-sphere model, but a model considering the covalent chemical bonds of Si-C and Si-Si would be necessary to explain the growth mechanism. Some induced 2 × n structural models have been proposed. Aruga and Murata [44] proposed an ordered missing-dimer defect model. Martin, et al. [45] explained the 2 × n structure by the ordering of excess missingdimer defects. Niehus, et al. [46] explained the 2 × n STM image by a complex missing dimer model with one and multiple dimer vacancies. Natori, et al. [103] investigated the stability of the (2 × n)-ordered missing-dimer structure and the ordering of missing dimers, concluding that the (2 × n)-ordered missing-dimer structure is more stable than the (2 × 1)-ordered surface without missing dimers only if a compressive strain larger than 0.5% is applied parallel to the dimer row. We applied these models to our study. However, we could not explain the bias dependence of the STM images by those models. More theoretical studies might be necessary to explain the STM result shown in Fig. 4. The model of the hydrogen-adsorbed surface can be considered a candidate for the 2 × n structure; considering that hydrogen gas was found to be the major residual gas in our study, we cannot ignore this possibility. Boland [104] reviewed the hydrogen-terminated silicon surface. In his review, however, no 2 × n structure of the hydrogenadsorbed Si(100) surface was indicated. Maeng and Kim [57] prepared a 2 × n structure by exposure to 100 L (5×10 −7 Torr for 200 s) of hydrogen that contains atomic hydrogen by cracking with a hot W filament (ca. 1500 • C), and subsequent annealing of the sample at 600 • C. However, it is not clear that exposure to atomic hydrogen is indispensable to preparing the 2 × n structure. Our 2 × n STM images were compared with those of several previous studies. Kim, et al. [27,28] reported filled-state STM images of a 2 × n structure prepared by exposure to hydrocarbon molecules, and their images are different from ours in Fig. 4; dimer rows aligned well in their images but not in ours. This is due to differences in the 'recipes'; they annealed the sample immediately after exposure whereas we annealed during the exposure. Ikeda and Nagashima [41] prepared their sample by almost the same recipe as ours, and their 2 × n STM image at 2.0 V is in agreement with ours. Hoeven, et al. [47] reported an image at -2 V that is in agreement with ours. Our STM image at 2 V is also in agreement with that reported by Sakurai, et al. [48], though they did not show the bias voltage of their images. Although the histogram they obtained was in fairly good agreement with ours, we could not observe the sub-peak at n = 11. The difference may be due to the sample preparation; they annealed the sample at 1200 • C [48]. Feil, et al. [49], Wei, et al. [51] Lin [55], Lin and Wu [56], and Maeng and Kim [57] observed a 2× n STM image at -2 V, and their images are in agreement with those reported by Kim, et al. [27,28]. Our STM image at 2 V is in agreement with that reported by Johnson, et al. [53], though they did not show the bias voltage of their images. Zhang, et al. [54] reported a 2×n STM image at -2 V, but they also reported that the surface contained a c(4 × 4) structure in the magnified image at a sample bias voltage of -1.4 V [54]. We checked our STM image at -1.5 V, but no c(4 × 4) periodicity was recognized. 
Consequently, although our 2 × n STM images at 2 V and -2 V are in agreement with those reported by some researchers, no comparable data at the absolute bias voltage less than 2 V was found in the previous studies. Further investigation is necessary to conclude whether our 2 × n STM images taken from the sample prepared by annealing during ethylene exposure are the same as those prepared by different methods. C. Void formation The formation of voids during the carbonization on silicon surface is discussed on this section. Mogab and Leamy [15] described the initial stage of void formation on Si(100) surfaces. According to their findings, the initial stage of the void formation is as follows. SiC nuclei grow more rapidly laterally than vertically with the Si reactant supplied from adjacent unreacted surface regions. At the onset of the reaction, the area fraction of unreacted Si greatly exceeds that of SiC, and impinging ethylene molecules adsorb mainly on unreacted Si. The Si reactant is removed uniformly over the surface as growth proceeds. Continued lateral growth leads to eventual impingement and coalescence of adjacent nuclei. During this growth process, the SiC surface expands at the expense of the unreacted Si surface. Consequently, adsorption of ethylene molecules occurs to an increasingly greater extent on SiC rather than on unreacted Si. Since the diffusion of Si through the region of SiC particles is quite slow, the main sources of Si for further growth become the occluded regions of unreacted Si. These in turn experience an increased demand for Si, as they must now supply a much larger area than during the initial stages of the growth. Compliance with this demand leads to void formation [15]. This description implies the idea of the prevention of void formation. Sholz, et al. [14] introduced a method of preventing void formation. They used propane gas for carbonization instead of ethylene, and found optimal conditions for preventing the formation of micropipes and voids at the SiC / Si interface. However, they mentioned in the last sentence of their paper that in the cases in which outdiffusion defects such as micropipes and voids are prevented, the density of {111} stacking faults and twins in the SiC layer becomes particularly high [14]. In other words, their method contributes to the second problem noted in the introduction of this paper. The origin of void formation is open to debate. Kim, et al. [105] proposed a different formation mechanism of SiC and voids. They speculated that the voids originate from oxygen defects existing in the bulk Si wafer. However, it is difficult to prove this idea. The SiC formation process differs from that on the surface with the oxide. Though the natural oxide thin film completely desorbs at more than 1000 • C [53,106], this is the case of the surface oxides. The oxide interferes with the carbonization process, and the interference so complicates the process that it could not be adequately discussed in this paper. In a previous paper we reported on surface and crosssectional SEM images of voids formed on Si(111) surfaces [107]. We found particle nuclei, averaging 100 nm in size, at the center in every void. Considering the mechanism of the void formation discussed in the previous papers [13,15], we concluded that the particle nuclei serve as lids to the voids, stopping the diffusion from the voids to the surface. 
Further investigation of the nuclei will elucidate the formation mechanism of voids though we could not address that issue in this study. Some researchers have reported photoluminescence in SiC thin films on silicon surfaces [108][109][110]. In their papers, however, the existence of voids was not mentioned. Further investigation to determine whether the photoluminescence derived from SiC or voids is necessary to clarify the mechanism.

D. Reaction scheme of the carbonization

Kusunoki, et al. [8] proposed a reaction scheme for the carbonization on the silicon surface. Based on the above discussion, a new reaction scheme has been proposed. Figure 6 shows a reaction scheme of the carbonization of Si(100) surface with ethylene gas. With the exposure of ethylene gas to an Si(100) clean surface as shown in Fig. 6(a), most of the ethylene molecules scatter but some adsorb [8]. The adsorbed molecules decompose [8] and the surface is inhomogeneously covered with hydrocarbons (-CHx) and seeds of SiC particles, which is schematically shown in Fig. 6(b). During the anneal at 640 °C, Si atoms diffuse from unreacted area and/or pits [15] as shown in Fig. 6(b), so that the area becomes voids (domain α) as shown in Fig. 6(c). At the flat area near the voids without the seeds of SiC particles (domain β), the Si atoms from the voids in domain α pass over the domain α, which induces the diffusion of carbon atoms into subsurface and results in 2 × n reconstruction. At the area with seeds of SiC particles (domain γ), the supply of the Si atoms is adequate to the exposure of the ethylene gas, so that SiC particles are grown on the domain γ. It is noted that the size of the SiC particles depends not only on the annealing temperature but also on the flux of the ethylene gas [91]. This scheme indicates that the reaction on the surface is not homogeneous, and the interaction between the domains should be considered to elucidate the reaction mechanism.

V. CONCLUSION

The initial stage of the structural change in a clean Si(100)-2 × 1 surface induced by annealing at 640 °C and exposure to ethylene gas has been studied by RHEED and STM. Three types of domains were observed. In domain α, transmitted bulk Si spots were observed with RHEED. The domain size was so small that the actual RHEED pattern showed the mixed pattern of transmitted bulk Si spots and twice the periodicity of surface spots attributed to the flat area near the voids. The domain α was too rough to observe with STM. The domain α is assigned to voids. In domain β, which is a region near domain α, STM images showed a flat surface with a 2 × n (6 ≤ n ≤ 12) reconstruction. The STM images of the 2 × n reconstructed surface depend on the bias voltage.
The 2 × n reconstruction was clearly induced by carbon impurities; the STM images in this study were similar to those in previous studies in which structures were formed by various kinds of impurities or contaminations. In domain γ, which is a region distant from domain α, the RHEED pattern included SiC spots and STM images revealed SiC particles on the surface. Considering the results and the previous studies, an inhomogenenous surface reaction scheme has been proposed.
5,952.8
2006-01-01T00:00:00.000
[ "Physics" ]
Recognition Number of The Vehicle Plate Using Otsu Method and K-Nearest Neighbour Classification The current topic that is interesting as a solution of the impact of public service improvement toward vehicle is License Plate Recognition (LPR), but it still needs to develop the research of LPR method. Some of the previous researchs showed that K-Nearest Neighbour (KNN) succeed in car license plate recognition. The Objectives of this research was to determine the implementation and accuracy of Otsu Method toward license plate recognition. The method of this research was Otsu method to extract the characteristics and image of the plate into binary image and KNN as recognition classification method of each character. The development of the license plate recognition program by using Otsu method and classification of KNN is following the steps of pattern recognition, such as input and sensing, pre-processing, extraction feature Otsu method binary, segmentation, KNN classification method and post-processing by calculating the level of accuracy. The study showed that this program can recognize by 82% from 100 test plate with 93,75% of number recognition accuracy and 91,92% of letter recognition accuracy. INTRODUCTION The growth of vehicles in Indonesia is increasing significantly.BPS Semarang data [1], states that the growth of the average vehicle major cities in Indonesia about 8% per year with growth of road is only about 2-5% per year.Semarang city itself has a growth of private vehicles (cars and motorcycles) as much as 2%.Increased growth of such vehicles should also be balanced with efforts to improve services for vehicles such as the parking and toll systems. Recently, the trends and topics that are interesting about efforts to improve service to the public transportation such as the parking lot and toll system is the recognition of a vehicle license plate numbers automatically or License Plate Recognition (LPR) [2] [3].LPR widely used in the access to the parking lot, vehicle traffic control and security and surveillance vehicles [4].LPR purpose is to improve the effectiveness of the parking system and toll roads.LPR utilizes input vehicle license plate as the vehicle's identity in the system. LPR can be done by image processing on the plate captured using the camera.Parking system in Indonesia are still many who do not utilize the LPR because there are many obstacles in their utilization, especially in motorcycles.Research development still needs to be done in order LPR can be applied efficiently in Indonesia by searching for the best method to reduce the error recognition vehicle license plate numbers automatically.Research on the development of the LPR requires a combination of image processing and artificial intelligence.LPR stages in the process are preprocessing, feature extraction and classification (grouping) [5]. 
Feature extraction in pattern recognition to facilitate the need for bineritation pixels.The feature extraction can be done by Otsu method.Otsu method is a method that can change the gray level image into a binary set based on the value threshold with digital image pixel color values.Thresholding Otsu is the development of a histogram that can provide a good segmentation results so that the binary image clean of noise salt and papper [6].Previous studies [7], shows that the method of Otsu able to deliver a satisfactory binary image and is very helpful in the process of determining the characteristics of hand geometry.The approach taken by the Otsu method is the discriminant analysis is to determine the variables that can distinguish between two or more groups naturally without specifying a threshold value beforehand.The image characteristic determination result is then performed segmentation [8].Results segmentation is further classified. The method used in good LPR is a method of classification that remains consistent in a large amount of data.This type of classification can be used in a large amount of data is k-Nearest Neighbour (KNN).KNN algorithm classifies based on the shortest distance between the data evaluated by the closest point in the training data [9].Function to find the closest point to the distance Euclidean formula.KNN Classified is based on an analogy comparing the similarity of the test data to training data.N training data given attribute, each record is a point in n-dimensional space.All training data stored in the pattern of n-dimensional space.When the training data that is not known, then KNN looking for a pattern space k closest training data with test data that is not known.Test data is then entered into the most common class (similarities) between the k nearest neighbors.Another advantage of the KNN algorithm is robust against noise training data plate number which does have a lot of noise. Based on this background, then the discussion about the recognition of number plates of vehicles using Otsu method to convert the image into a binary image and KNN as the motorcycle classification at Parking Place of Faculty of Mathematics and Natural Sciences, State University of Semarang and then determine the level of accuracy. This study is expected to develop previous research on the LPR so that it can be implemented in life and know the use of KNN as a method of classification on the recognition of a motorcycle license plate number. METHODS Chronology of the vehicle license plate number recognition system with Otsu method and KNN classification broadly divided into several parts as in Figure 1. Pre-processing At this stage rebrand RGB to gray.The process of changing the RGB value of each character.Function to convert the RGB image into a grayscale on rgb2gray in Matlab is as follows. Figure 3 is an RGB image display plate after becoming image grayscale and resize to 500 × 2000 pixels. The binary image on the plate vehicles forming each character into a matrix (pattern). Then the noise in the binary image is cleaned.Cleaning noise serves to facilitate the segmentation process.Noise or groups of pixels that are less than 11,000 pixels are then removed.Figure 4 is a view of the results of the Otsu bineritation been cleared of noise. Segmentation (Segmentation Line and Character) Segmentation is the process of separating one object with another object in an image.The segmentation process is done after the program to reduce noise in the image. 
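The pre-processing chain described above (grayscale conversion, Otsu binarization, and removal of noise blobs smaller than 11,000 pixels) can be reproduced with OpenCV rather than Matlab's rgb2gray; the following is a minimal sketch under that assumption, with a placeholder file path.

```python
# Minimal sketch of the plate pre-processing steps: grayscale, Otsu binarization,
# and removal of small connected components before character segmentation.
import cv2
import numpy as np

def preprocess_plate(path):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)           # RGB -> grayscale
    gray = cv2.resize(gray, (2000, 500))                    # 500 x 2000 pixel working size
    # Otsu selects the global threshold automatically
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Remove small noise blobs (< 11,000 pixels)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    cleaned = np.zeros_like(binary)
    for i in range(1, n):                                   # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= 11000:
            cleaned[labels == i] = 255
    return cleaned

# Example usage (hypothetical file name):
# binary_plate = preprocess_plate("plate_test_01.jpg")
```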
Segmentation in this study is divided into line segmentation and character segmentation. Line segmentation determines the horizontal position of the plate region to be used (scanline). Only the top line of the plate, which contains the registration number, is used, so only one line of the matrix needs to be recognized. Character segmentation then cuts out every character on the plate; the cutting is based on the matrix formed for each character. Figure 5 shows the character segmentation results for one of the test images.

The KNN classification is carried out in the following steps:
2. Calculate the distance: the distance between the binary matrix of the test data and every training-data matrix stored in the database is computed with the Euclidean formula.
3. Sort the resulting neighbor distances.
4. Select the minimum neighbor distance.
5. Check the class of the minimum value; the test image is assigned to the same class.
6. Recognize the character according to the class.
Figure 6 shows the result of vehicle plate number recognition as classified by KNN.

Recognition
Characters that have been identified are then displayed as text. The text result is saved automatically to an Excel report (.xls) file.

Post-Processing Process
Post-processing evaluates the level of success in recognizing the characters. One way to evaluate the success rate is to compute the recognition accuracy of the vehicle license plate numbers, expressed as a percentage:

Accuracy (%) = (number of correctly recognized data / total data) × 100%

The accuracy value is obtained by dividing the number of correctly recognized data by the total data and multiplying by 100%. The calculation accounts for the recognition accuracy of letters, numbers, and whole plate numbers of the test images.

RESULTS AND DISCUSSION
The recognized vehicle license plate numbers are saved automatically to an Excel file for the post-processing step.

Accuracy = 82/100 × 100% = 82%

Character-level evaluation found 25 errors in recognizing numbers and 26 errors in recognizing letters out of a total of 722 characters. The program therefore recognized 93.75% of the numbers and 91.92% of the letters, and recognized whole license plate numbers with an accuracy of 82%. The use of this method is still not completely perfect.
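The KNN classification and accuracy steps described above reduce to a nearest-neighbor search over flattened binary character matrices. The sketch below assumes k = 1, as used in the study, and NumPy arrays as the data containers.

```python
# Minimal sketch of the Euclidean-distance KNN classification and the accuracy
# calculation; array names and shapes are illustrative assumptions.
import numpy as np

def knn_classify(test_char, train_chars, train_labels, k=1):
    """train_chars: (n_samples, n_pixels) binary matrices; test_char: (n_pixels,)."""
    dists = np.sqrt(np.sum((train_chars - test_char) ** 2, axis=1))  # Euclidean distances
    nearest = np.argsort(dists)[:k]                                  # k closest neighbors
    labels, counts = np.unique(np.asarray(train_labels)[nearest], return_counts=True)
    return labels[np.argmax(counts)]                                 # majority class

def recognition_accuracy(n_recognized, n_total):
    """Accuracy (%) = recognized data / total data x 100%."""
    return n_recognized / n_total * 100.0

# Example usage: recognition_accuracy(82, 100) -> 82.0
```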
Errors in recognizing a vehicle license plate number arise from several constraints and limitations that make the recognition less than perfect. One limitation can be seen in data98: the recognition fails because of the position of the screws that fasten the plate; they cover part of a character, so the system recognizes the object as a different character. The recognition result for data98 is KH4269MM. The error comes from an unrecognized object (a bolt) that changes the shape of the segmented character. The shape of this unknown object covering the character is an example of a missing feature: an object with an unknown pattern that nevertheless enters the classification, for instance when excessive light intensity in the image is treated as a new object. Such a missing feature is not removed by the noise cleaning step. One possible solution is Bayesian computation [10].

Recognition errors also occur because the character fonts on Indonesian motorcycle plates are not fully standardized. The font shape differs between motorcycles even when they carry an official plate issued by the traffic police. The training data in the database then cannot recognize the characters; data26, for example, has a non-standard font, so some of its characters are not recognized properly. The characters 0, D, 8 and Q are difficult to distinguish both by eye and by the system, even though the test data stored in the database are large enough. A larger value of k allows the classifier to cope with a larger amount of test data [10]; this study uses k = 1, which can make the results inconsistent because it is not balanced against the number of test data. Imprecise character cutting also causes errors: a character that could initially be identified as 0 or D ends up recognized as a U. Figure 8 shows the recognition result when the input plate image is skewed.

Figure 8. Recognition result for data26 with a tilted plate position
The position of the plate strongly affects the recognition result: for data3 the outcome is very disappointing, with only two characters recognized correctly. A noise feature occurs in data3 because the character 2 is not recognized; most of its shape is lost during noise cleaning. A noise feature is an object corrupted by the statistical noise removal [10]. The noise cleaning was intended to remove the bolts, but most of the area-code digit 2 was removed as well because it was treated as noise. Several other characters cannot be recognized because the imprecise position of the license plate makes the cropping of the plate imprecise, and imprecise cropping in turn causes segmentation errors for individual characters. The actual plate in data3 is tilted and the numbers protrude upward, so when the plate is cropped as a rectangle, regions outside the plate end up inside the crop. These regions outside the plate form objects that are difficult to handle at the segmentation stage. The cropping error occurs because the segmentation is not based on the distribution of pixel values but on the distribution of the light intensity that is formed. At present the plate is cropped manually as a rectangle, regardless of the shape of the original object; automatic segmentation of the object could instead be performed from the differences in the pixel distribution [12]. One solution for recognizing tilted plates is a masking technique, as used in research on facial expressions [13], which cuts the object shape from the object's four corner points. Figure 9 shows the data3 plate image after preprocessing: the characters 3, 6, V and C are treated as a single character, so the recognition of this data is poor.

Figure 9. Binary image of a tilted plate

The physical condition of the plate also affects the recognition of the license plate number, as in data45 and data76. A plate in poor physical shape, for example a bent one, disturbs the lighting of the image and produces object shapes that are not recognized. Very bright lighting on a plate is not well suited to this method, because this study does not process the brightness intensity of the plate. As a result, characters exposed to light that is too bright, or that are too dark, are not recognized properly. In data45, for example, the characters S and G become unrecognizable because no separation appears in the matrix that is formed. Light and dark imagery influence the histogram distribution, and changes between light and dark can be handled by changing the brightness of the image. A bright image shifts the dominant part of the histogram to the right, while a dark image concentrates the distribution on the left. If the image is too bright or too dark, the histogram is pushed to one side and segmentation becomes difficult because of the high density of the histogram [14]. The plate recognition results for data45 and data76 are therefore disappointing. The binary image of data76 is shown in Figure 10.

Figure 10. Binary image of data76
The recognition results show that the Otsu method makes it easy for the program to binarize the image, so that the matrix pattern formed is clear and easy to segment. The KNN method also makes the recognition more flexible, because it is based on proximity to the existing training data. The weaknesses of the study are the execution time and the limitations of the functions used: for example, the plate position cannot be recognized without cropping the plate first, and the plate shape, plate position and character positions on motorcycle license plates still differ considerably. A further difficulty is that the number of characters on a license plate is not fixed and can differ from one plate to another, which causes errors in the per-character segmentation.

Another problem with private vehicle number plates in Indonesia is the large variation in plate shape and size. Because the design of vehicle number plates was changed in April 2011, many license plates are not standardized, and these non-standard plates make the recognition of vehicle number plates more difficult [15].

The conditions under which the image is captured also affect the recognition of the vehicle license plate number, namely the light intensity and the position of the image capture. The capture position must be chosen carefully, since the position of the plate in the image affects object detection, noise cleaning, segmentation and character recognition. The lighting at capture time affects the noise cleaning and the subsequent steps, because this study does not alter the light intensity of the original image.

CONCLUSION

The vehicle number plate recognition process using the Otsu method in this study is a pattern recognition process based on binary vectors, without the influence of a threshold value set in advance: the threshold adapts to the distribution of the pixel values of the image, which yields a good binarization and a better segmentation. The KNN classification in this study performs quite well and recognizes vehicle license plate numbers reliably. The recognition accuracy obtained with the Otsu method and KNN classification is 82% over the 100 test plates, with 93.75% accuracy for the digits and 91.92% accuracy for the letters.

Figure 3. Plate image in grayscale

2.3. Feature Extraction (Otsu Method)

Feature extraction is the process of finding a new representation of the image. In this study, feature extraction is performed by single thresholding with the Otsu method. The Otsu method converts an image whose gray-level pixel values lie between 0 and 255 into a binary image whose pixels take only the values 0 and 1. Thresholding in general requires a threshold value to be defined beforehand, but the Otsu method does not: it determines the threshold value T itself, from the separation between the two pixel groups it produces.

Figure 5. Results of character segmentation

2.5. Classification (K-Nearest Neighbour Classification)

The classification process groups the test data into predetermined classes using a learning algorithm. KNN classification is based on the neighbourhood value between the test data and the training data. The KNN classification algorithm is as follows. 1. The KNN classification process begins by determining k, the number of neighbours; k can be 1, 2, ..., n. Since the plate recognition must return a single result, this research uses k = 1.
2. The neighbourhood value (the distance) between the test data and each training data item is then calculated with the Euclidean formula, $d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$.

Figure 7 shows the original plate image and the image after processing.

Figure 7. Image of data98: (a) original image and (b) binary image

The accuracy level of the Otsu method and the KNN classification in this LPR study is then computed. Based on the recognition results, the program identifies 82 of the 100 plates in their entirety. The accuracy over all license plate test images is given below.

Table 1. Results of the accuracy calculation
4,078.2
2017-05-10T00:00:00.000
[ "Computer Science" ]
Development of an advanced MES for the simulation and optimization of Industry 4.0 processes. The concept of Industry 4.0 has been developed extensively from a theoretical point of view. However, real applications on production lines remain few in number, owing to the difficulties of interoperability between the different production entities and to the lack of a control system adapted to the expected flexibility and to the management of the data generated. This article focuses on the development and deployment of a manufacturing execution system (MES) on a production system 4.0. The development stages of the system are explained in detail. The new functionalities and the expected level of performance impose a new logic in the design of advanced systems for controlling and optimizing production. Finally, a proof of concept of an MES was developed and tested on a new 4.0 technology platform.

Introduction

Today, companies involved in product development in the era of "Industry 4.0" must manage all the necessary information across the product lifecycle in order to maximize product-process integration [1]. To this end, new manufacturing systems are able to use advanced functionality to respond to customer demands on time. The flexibility of manufacturing systems applies not only to the physical entities of production, but also to its information and communication technology (ICT) infrastructure. Modern manufacturing systems are composed of cyber-physical systems (CPS) that control production processes through the ICT infrastructure. In these systems, CPS controllers sense the status of production operations through the IIoT layer and can manipulate the process through local or centralized dialogue through the MES. Thus, the flexibility of production requires a scalable network architecture, a reconfigurable workshop and an intelligent control system, concepts whose definition is often variable and marked by a contextualized interpretation by experts. From this context, dominated by transdisciplinary technology and an intrinsic complexity that is often unrecognized and underestimated, was born the need to acquire a 4.0 production platform (Fig. 1), which will be used, on the one hand, for the identification and analysis of technological barriers and, on the other hand, for experimenting with modeling work on complex 4.0 production systems and their impact on design. In its first version, the 4.0 production platform (Fig. 1) was delivered with an operating mode close to 3.0, in which it produced a small series of identical shock absorbers made up of three elements (piston, spring and piston body). The PLC controls the machines and robots without any interaction with its direct environment; the only accessible variable is the number of shock absorbers desired. In its current configuration, this production platform is representative of the way companies operate.

2 Transition to production systems 4.0

The objective is to develop a process of gradual transition towards the digitization of production, thus offering new functionalities in order to improve performance.
Based on the new functionalities targeted by the company, in terms of operational flexibility, integrated quality, decision autonomy, predictive maintenance, optimization of energy consumption and the desired level of portability, the idea is to propose a new production system model that defines and integrates a distributed system architecture, cyber-physical systems (CPS), connected technological bricks 4.0, a data storage and processing system and possibly materials that facilitate intra-workshop organ transfer operations. Obviously, the technological aspect is not the only factor of success, other factors such as the management of change, the development of skills, ... contribute to the outcome of such a project which marks a break in industrial practices. In what follows, Section 2.1 presents a historical synthesis of the evolution of production systems through different periods. Section 2.2 is reserved for new concepts to be integrated into the modeling of production systems 4.0. Section 2.3 shows an initiative to model production systems 4.0. Chapter 3 is reserved for the development of MES systems taking into account advanced functionalities. Finally, chapter 4 will conclude this article. Evolution of production systems The production system is a complex system that transforms materials, energy, and knowledge into value-added products and services. Manufacturing has evolved from handcrafted production to mass production and even mass customization. With the introduction of technologies, manufacturing systems have evolved into flexible, reconfigurable and intelligent production systems [2]. Initially, machine tool automation began with the development of numerical control in the 1950s. CIM, computer integrated manufacturing, which emerged in 1970, was a response to manufacturing industries looking for technologies to integrate manufacturing in an overall management process. In the 90s, the CIM was widely distributed under a pyramidal representation of different hardware and software layers. This representation gives a global overview of the data flows which were limited because the sensor networks, the PLC networks, and the computer networks were different, unable to coexist on the same physical support. The base layer represents the different machines used in the manufacturing process. Computer-integrated manufacturing is seen as a system across administration, engineering, and manufacturing [3] where information technology contributes to production control. The 2000s saw the massive deployment of enterprise resource planning (ERP) in the industry, which made it possible to plan and optimize the supply chain. 15 years later, there appears a complementary need for a link to ensure continuity between ERP data and workshop operations through a complete digital channel from the creation of the Production Order (PO) to obtaining the final product, as shown in Figure 2. Today, universal communication networks, such as Ethernet and TCP/IP, are used to interconnect different automated manufacturing systems with organizational functions. As a result, the MESs which provide the functions of execution, control and monitoring of production are now evolving towards functions of supervision and optimization of production with full traceability of manufacturing information. At the center of this Industry revolution, MES has created this missing digital link in the industrial ecosystem. Interfaced with all the connected means of production, it reacts instantly to production activities. 
MES is the central point of key run-time data, responsible for transmitting the right information at the right time, to both operators and machines. MES is progressing and benefiting from recent hardware and software technologies, particularly in terms of multi-source and multi-support interconnectivity, in order to digitize processes. From there was born the term of "Smart Manufacturing", the intelligent and connected production factory, where data becomes a strategic issue to be protected. Today the occurrence of Cyber-physical Production Systems (CPPS) has radically transformed pyramid architectures into distributed architectures, allowing them to gain adaptability. CPPS is an extension of a CPS dedicated to an industrial production environment. The general principle is to say that a CPPS is broken down into two functional levels, as shown in Figure 3, right side. The low level manages advanced connectivity that provides real-time data acquisition from the physical world and feedback from the high level. The high level is characterized by remote entities for data collection and intelligent analysis, made possible by advanced connectivity. This configuration facilitates the implementation of prediction agents that process data at the machine level to make short loop decisions. Basic concepts for the modelization of production systems 4.0 Over the past 5 years, many researchers have published concepts that interfere, to varying degrees, in the modelization of production systems. Some are generic and can inspire us, others are dedicated to a particular situation or specific context. The superposition and interfacing of concepts generates an increased complexity that is not limited to technical constraints but also extends to the transdisciplinary conceptual modelization of advanced production systems. In this sense, the German National Academy of Sciences and Engineering proposed a maturity index that defines the different stages of the development of Industry 4.0. Figure 4 shows an overview of the 3 key stages in the transition to Industry 4.0, from Schuh et al. [4]. The first step in the industry 4.0 transition process, defined as an "inventory", aims to collect and highlight company data in order to understand how production processes work. The second step, defined as "horizontal integration", is to optimize the process chain using conventional continuous improvement methods. Finally, the last step, defined as "Smart", aims to increase the production process of new features such as autonomy, flexibility, customization, etc. In order to progress in these stages, certain key technologies of Industry 4.0 must be developed and implemented [5]. LU [6] agrees with the crucial role of technologies, such as cloud computing, Big Data/analytics, IIoT, and digital service platforms in projects relating to the digital transition to industry 4.0. The technological offer is abundant and the software solutions claim to cover all the needs of the business, hence the need for a rigorous approach in the choice of tools adapted to the needs and available resources. As a preamble, we summarize in Table 1 the various major concepts that may intervene at different levels in the modelization process. The categorization shown in Table 1 is an initiative to bring together different concepts referenced in the literature in order to simplify understanding. 
These concepts are marked by a functional interdependence converging towards operational objectives in terms of flexibility, portability, autonomy, optimization, and so on. The concept of cyber-physical systems consists of integrating computational processes with physical processes via networks [7]. Computing processes supervise physical processes via a network architecture; conversely, physical processes affect computing processes. The CPS transforms a machine into a connected entity that interacts with its environment by offering a service in the form of a manufacturing capacity. There is also the concept of the cyber-physical manufacturing system (CPMS), which is an extension of the CPS dedicated to industrial manufacturing machines. Multi-agent systems (MAS), which are emerging as a new generation of intelligent engineering systems, could increase the autonomy of each CPMS by associating it with intelligent agents for interaction with other nearby machines. Thus, each machine or workstation is represented by agents offering different transformation services to agents of the raw-part type, for example. In this configuration, the workshop is an on-demand service offering. This is in line with the concept of the Smart Manufacturing Object (SMO), which refers to a principle whereby production resources are converted into smart manufacturing objects capable of sensing, interconnecting and interacting with each other, to automatically and adaptively execute manufacturing logics. All of this brings us to the notion of the architecture of the information system, which is essential for the orchestration of processes. The concept of Service Oriented Architecture (SOA) is a distributed architecture where stand-alone applications expose themselves as services, to which other applications can connect and whose services they can use. Most SOA tools are tailored to business processes and do not have strict resource requirements. Coming from the IT world and adapted to the industrial domain, an SOA relies on a single, integrated communication channel called the service bus [8], on which the different robots, machines and applications are made available to the manufacturing process, as shown in Figure 5. This architecture allows flexibility of flows and facilitates the reconfigurability of production workshops. However, even if SOA offers a concept intended to be distributed, the organization of services is generally done in a centralized manner and the interactions between the entities are synchronized and coupled [9]; hence the emergence of a new concept, event-driven architecture (EDA). In industrial applications, an event can be defined by information such as a change of state of a sensor or the end-of-cycle information of a manufacturing operation. An EDA is based on the autonomous management of services driven by events. Each service is able to react independently to published events, rather than being invited to do so by a central supervisor. The data associated with events must be immutable regardless of their reuse, including by other applications. EDA systems rely on three components: the event generator, the distribution channel, and the computation or processing step that should result in the production of an action. As a result, interactions in the production system change from a synchronized, coupled mode to a decoupled, asynchronous mode.
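To illustrate the three EDA components just described (event generator, distribution channel, processing that produces an action), here is a minimal Python sketch using an in-process queue as the distribution channel. The event names, payloads and handler are invented for illustration and do not correspond to the platform's actual services; a real deployment would use a message broker or service bus rather than a local queue.

```python
import queue
import threading
from dataclasses import dataclass

@dataclass(frozen=True)          # frozen: event data stays immutable once published
class Event:
    source: str                  # e.g. a sensor or machine identifier
    kind: str                    # e.g. "end_of_cycle" or "state_change"
    payload: dict

bus = queue.Queue()              # distribution channel (a broker/service bus in practice)

def sensor_generator():
    """Event generator: publishes a state change, then an end-of-cycle event."""
    bus.put(Event("lathe_1", "state_change", {"state": "running"}))
    bus.put(Event("lathe_1", "end_of_cycle", {"part_id": 42}))

def processing_service():
    """Processing: each service reacts on its own to published events."""
    while True:
        event = bus.get()
        if event.kind == "end_of_cycle":
            print(f"schedule transfer of part {event.payload['part_id']} from {event.source}")
        bus.task_done()

threading.Thread(target=processing_service, daemon=True).start()
sensor_generator()
bus.join()                       # wait until all published events have been processed
```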
Connected sensors, cameras, or 3D scanners can complete the production installation and thus form a distributed IIoT architecture, this will result in a significant production of data. Big-data analytics (BDA) platforms are a response to the need to collect and analyze a considerable volume of varied information, coming from different protocols and communication channels. The operational interest consists in benefiting from remote intelligent computing in order to identify phenomenological interactions that may lead to future corrective actions on manufacturing processes or preventive maintenance. However, some data can be calculated at the edge of the network (EdC) in order to react in real-time. The "digital twin" of the real system, considered as a virtual replica of real machines and operations, can operate on a Cloud Manufacturing (CdM) platform and offer a transdisciplinary collaborative work environment, in synchronous or asynchronous mode. Asynchronous mode (DTS), uses the virtual simulation capability to validate a production plan in terms of scheduling, flow, etc. Once the plans are validated in the cloud they will be integrated into the physical system. The synchronous mode (DDT) allows via ascending data to follow the operational performances, the machine conditions, the energy consumption, the quality of the product ... but also to control the production by actions descending towards the manufacturing workshop in real-time. Based on the concepts defined and cited above, we propose an initiative for modelization production systems 4.0. Modelization initiative of production systems 4.0 The approach we propose consists of analyzing the existing production system in order to superimpose on it a technological layer adapted to the desired functionalities and to the investment potential of the company. Before starting the transition process, we consider that the company already benefits from a CIM architecture (Fig. 6) which operates on a model close to 3.0, and that its manufacturing process has already been improved, in particular through the use of Lean manufacturing. This starting point is crucial for a successful transition to advanced systems. The transformation of the workshop involves 5 aspects. The material aspect, in particular through the acquisition of equipment for transfers between machines, integrated quality control, or additive manufacturing machines. Added to this, is the reconfiguration of existing machines in terms of controls in order to make them compatible with a connected production system. The second aspect concerns the superposition of an IoT layer, made up of sensors, cameras, and dedicated networks which should be interoperable with the existing network. We observe that the interconnection is also evolving, recent sensors and actuators that communicate via a TCP / IP protocol will be directly connected to the MES without going through the PLCs. The third aspect concerns the acquisition of software, in particular to manage the workshop. In addition, there is the development of a specific digital management and industrialization environment to which the control functions will be transferred, without forgetting the software interfacing issues called upon to dialogue during the execution of a production plan. The 4th aspect concerns the computational capacities, remote or local, necessary to support a production system 4.0 equipped with advanced functionalities, in particular when using cameras with expectations of real-time responsiveness. 
The 5th aspect concerns the programming of machines. In fact, in 3.0 processes, machine programs have a configuration dedicated to mass production. The machines become a single entity that ensures the smooth flow of a production plan. In contrast, in 4.0 processes each machine is a separate entity and can be considered as a CPS, or included in a CPS, connected, intelligent, and autonomous. This modularity at the workshop level leads us to design specific manufacturing programs for a modular, autonomous, adaptable, and reconfigurable production plan. In conclusion, the modeling of production systems 4.0 explores different concepts with a view to interlocking them and constituting an integrated and intelligent system. The concepts interfere with different fields and expertise, their interoperability constitutes a major technical challenge. A large majority of the articles consulted evoke a multi-scale complexity that is not limited to the sole issues of method, interfacing, security and competence. It also concerns the industrial strategies pursued by the major designers of digital platform solutions and the manufacturers of 4.0 technology systems for positioning and leadership purposes. Advanced manufacturing execution system (MES) Following on from the modelization developed in Section 2.3, considered as a first-level generic model, we introduce below an approach of an evolved control system. Indeed, manufacturing systems 4.0 and their advanced functionalities demand to review the expectations of the control system. To this end, we will address the aspects of planning, modularity, digital twin and finally the architectural dimension of the system. The challenge for us is to redefine the new functionalities that a steering system should cover in terms of industrialization, optimization, design, etc. phases. Planning The planning phase of production operations is a crucial technical aspect which requires knowing, on the one hand, the production capacity of the production machines, this parameter is classic and not very variable, and on the other hand, to know the way in which the production operations are constituted and structured, this parameter is recent and was introduced recently by various works published in 2019 [10,11] by Prof. Urbas from TUD 1 University. A production system can be defined as a set of modules linked by flows, whose function is to transform raw materials into a product. The overall flexibility of the production system follows logically from the flexibility of the modules and the flows that compose it, Figure 7. Modularity is not limited to change in the layout of the workshop, but it must be endowed with a flexible structure that allows to increase the production capacity or to integrate new functionalities [12]. In addition, Bloch [13]of the IAT 2 institute published in 2018 works in which he considers the issue of modularization of operations as an approach that meets the growing demands for flexibility in the manufacturing industry. It also addresses the issue of conventional control systems that do not properly support flexible production systems. Indeed, the interconnection of machines, robots, scanners, controllers, actuators, and sensors transform the "basic production operation" entity into modules that encapsulate machine programs but also information related to its operational environment. We retain here that "4.0" planning requires, on the one hand, upstream work to reconfigure the control-command systems of production machines. 
On the other hand, it requires the encapsulation of the basic operations to make them compatible with a production system 4.0.

Modularity

Modularity is a principle of Industry 4.0 and one of its essential functions [14]. Modular production systems offer the possibility of adapting and adjusting the production plan in a more comfortable and useful way. Modularity is defined as the shift from linear planning to agile planning that can adapt to changing circumstances and requirements, without the need for sophisticated reprogramming work. According to Ghobakhloo [15], who published in 2018 an article entitled "The future of manufacturing industry: a strategic roadmap toward Industry 4.0", modularity involves all levels of production, including the agile supply chain and flexible material flow systems. To this end, we have created several standardized parametric modules which represent all the production operations adapted to the potential of the 4.0 platform. The parametric dimension expresses a variability that allows the modules to adapt to the context. Modules can be defined in different ways; it all depends on the standardization sought. We have therefore chosen to group the modules into 5 categories, summarized in the first column of Table 2. In order to generate a flexible production plan, we distinguish two types of production plans in our approach. The first, called the "initial" plan, is a first projection which takes into account the inter-module precedence constraints. This initial plan is not feasible as it stands; it only gives a global overview and can be used if one wishes to distribute the manufacture of a product to other digitized and connected factories. In this perspective, the components of a product could be manufactured in different remote and connected manufacturing units, where the optimization of production would no longer be limited to the capacity and constraints of the 4.0 platform but would extend to other digital production sites. In this future configuration, it would be necessary to share data in order to allow the MESs to interconnect and identify solutions for outsourcing the production of certain components. The second, so-called "optimized", production plan is subject to the capacity constraints of the 4.0 platform and to the optimization criteria. To this end, the MES proposes several possible scheduling and flow scenarios, to be adjusted according to the weighting of the optimization criteria. The criteria used are cost, time, or energy. The optimization can be single-objective or multi-objective, with the objective of generating a production plan that offers the best production time at the lowest cost, for example. Once the plan has been optimized, validated, and saved, the MES system can plan its execution taking into account the current production load. Indeed, the production data available in real time should allow the MES to schedule all the modules of the pending production plan or, failing that, those modules which can already be manufactured. The digital twin, developed for piloting, may be of interest in this phase for the purposes of simulation and verification of the progress of the manufacturing processes. The last point concerns the management of the production plan in dynamic mode, which will be discussed in the last section.

Digital twin

Cyber-physical models combined with other digital concepts have paved the way for the development of the digital twin [16].
The digital twin, also called "virtual replica" or "coupled model", which appeared in the 2000s, is beginning to fit into industrial practices with the prospect of considerable optimizations both in the simulation phase in "decoupled" mode and in the production management phase in "coupled" mode [17]. In an industry in full transformation, the digital twin allows for better crossfunctionality across the entire value chain, both internally and externally. The deployment of this technology is also accelerated by the deployment of IIoT and an adapted IT architecture. Sharing information through the digital twin between the different businesses involved offers a real advantage, especially in remote mode. The digital model of a product is built from the start of design, System information and physical knowledge are recorded and will come to interact with manufacturing processes. The coupled model offers real-time accessibility to process monitoring and machine status, due to the ubiquitous connectivity it becomes possible to activate commands via the digital twin. The decoupled model can be seen as a mirror image of the real production system, capable of simulating, in an immersive or virtual environment, the offline execution of a production plan. This mode allows engineers to interact with the various stakeholders involved in order to validate or optimize many parameters before going into real production. In our case, the objective is to develop a digital twin coupled to the production processes of the platform 4.0 (Fig. 8). To this end, we have modeled the material elements of production and also the immaterial elements represented by various processes such as production modules, machine g-codes, robot trajectories… and their resources. The digital twin is simple in its description but complex in its deployment, for several reasons. The first lies in the modeling of material elements, which calls for advanced skills in the creation of virtual dynamic scenes that can show object kinematics and, for the most advanced models, a representation of phenomenological behavior. The second reason concerns the process of coupling the material elements to the immaterial elements via a digital platform, a network architecture and sensors that collect data. The third reason lies in taking into account the aging and changes in its physical clone. The issue of digital twins, mentioned by many researchers in recent publications, including Malykhina and Tarkhov [16] who consider that the digital twin is the basis of the industry of the future, Xiang et al. [18] consider that the digital twin is a crucial and relevant research topic for the challenges of the industry of the future and finally Tao et al. [17] see the digital twin as a promising technology that will help make smart manufacturing 4.0 a reality. In the following section the digital twin of the platform will be presented in "interface 6" with a use coupled to the platform 4.0 or decoupled for simulation. Management and manufacturing operations Production management consists of ensuring the successive or simultaneous execution of manufacturing operations in accordance with the qualitative and quantitative requirements of customers. Management takes into account both the manufacturing modules and the necessary logistics in terms of tools, raw materials, maintenance, and the hazards that may arise during production. 
Industrialization, which is preparatory work carried out upstream of production, aims to define all the operations, grouped into modules, that will be implemented in a flexible production plan. To this end, the functionalities of the MES that we wish to develop should mainly address its ability to design a production plan from identified and recorded operations, as well as its ability to drive production through data. Figure 9 summarizes the general and functional architecture of the MES, in which there are six interfaces. The 1st interface is dedicated to the supply of the raw material or components necessary for the assembly of the finished product. This function is fairly standard in its operation and does not currently represent a research challenge for us. Note that the chain of operations necessary for the acquisition, management and renewal of the stock of materials is managed in a conventional manner; only the inventory status is recorded manually in the MES. The second interface, entitled "operations", is reserved for the definition of basic production operations. The implementation of basic operations in the MES is organized in two stages. The first takes place on machine-specific systems, such as the CAM tools dedicated to generating tool paths for CNC machining centers; all the operations necessary for a product's production plan are validated upstream. The second step consists of preparing their integration into the MES through an encapsulation action, in order to make them compatible with its operational environment. This process is based on advanced technical knowledge of the 4.0 platform and obeys a specific codification so that the operations are identifiable in the different interfaces. Production operations of the "standardized robot trajectory" type are interesting illustrative examples. We have analyzed all the possible trajectories of the robots and have defined 33 trajectories that can be used in combination and thus cover all the transfer needs in the workspaces of the 4.0 platform. This approach helps to increase the flexibility of operations and thus facilitates the design of a production plan by assembling standardized operations. The 2nd column of Table 3 shows an example of standardized production operations that can be requested by different production programs. The aim here is to identify or define all the operations necessary for manufacturing, inspection, transfer, assembly, and so on, which together ensure the production of a component, from the initial raw material stock until delivery of the manufactured and inspected part that can be recovered from the accumulation table. Note that the list of operations is displayed as a linear, unoptimized schedule, whose only purpose is to verify the logical and consistent flow of the production operations. In addition, some operations are linked by immutable precedence constraints, to which the engineer should pay attention and which should be reported at this stage. Some operations may require a specific setting, such as the frequency with which the dimensional or geometric quality is checked with the 3D scanner, expressed as a percentage of the overall number of parts to be produced. If a dimensional drift is observed, this parameter can be varied so that it adapts to the acceleration of the drift, which allows us to optimize the use of the 3D scanner, which is computationally intensive and time consuming.
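To make the idea of encapsulated, parametric operations and of modules that group them more tangible, here is a minimal sketch. The field names, the example trajectory identifiers and the inspection-frequency parameter are illustrative assumptions and do not reproduce the platform's actual codification.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Operation:
    """A basic production operation encapsulated with its parameters (e.g. a standardized robot trajectory)."""
    code: str                                     # identifier following the platform's codification
    resource: str                                 # machine, robot or scanner that executes it
    params: Dict[str, float] = field(default_factory=dict)

@dataclass
class Module:
    """A module groups one or more operations in a precise order, with precedence on earlier modules."""
    name: str
    operations: List[Operation]
    predecessors: List[str] = field(default_factory=list)

# Example: dimensional control of a machined part with the 3D scanner.
scan_module = Module(
    name="dimensional_control",
    operations=[
        Operation("TRAJ_07", "robot_1", {"grip_width_mm": 35.0}),     # transfer to the scanner plate
        Operation("SCAN_01", "scanner_3d", {"inspection_rate": 0.2}), # check 1 part in 5
        Operation("TRAJ_12", "robot_1"),                              # transfer to secure storage
    ],
    predecessors=["lathe_machining"],
)
```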
Another example that contributes to the flexibility of the platform is the stroke and shape of the fingers of the robot gripper that grips the part. Indeed, the typology of the different components to be manufactured suggests keeping the "part grip" aspect as an input parameter of the transfer programs. The 3rd interface, entitled "industrialization", refers to the preparation of modules, where each module can group together, in a precise schedule, several operations from the previous phase. These modules can constitute a logical production operation or be dedicated specifically to a product. For various reasons, any module must be able to be simulated and executed. Consider the example of a specific, recurring need for dimensional control of a component using the 3D scanner integrated into the platform. The engineer can develop a specific module which consists of using the robot to transfer the component to the rotating plate of the scanner, activating the scanner, recovering the point cloud, and finally transferring the component to a secure storage space. This interface allows production engineers to prepare modules while abstracting away the issues of interoperability, robot programming, scanner control, and so on, which provides a real productivity advantage. Based on the previous example, we can concretely illustrate the concept of a manufacturing module. Several strategies can come into play when defining modules from the operations specified in the previous step. The first is to define modules that group together a minimum of operations under an identifier that can be understood by the different people involved in development and production. The advantages are multiple: in real time, this strategy increases the flexibility of production, in particular through its ability to reorganize the production plan in order to avoid the occurrence of undesirable events on the platform; in deferred time, it offers the possibility of enhanced optimization in the scheduling phase of the production modules. To this end, it can be seen in Table 4 that certain modules group together a single operation, for various reasons. The "Lathe machining - Pawn" module contains a single operation, "Prog_Tour_1", specific to the machining of one particular component; this choice is justified by the importance of the operation and the need to make it visible in the production plan. The "presence control" module, which checks for the raw part in the lathe chuck, also contains a single operation; this is justified by the frequency at which this module is requested, which is lower than the production rate of the components concerned. In general, it is the engineers' experience that makes the difference in the configuration of the modules: they anticipate production constraints and productivity requirements. The second strategy groups together a maximum of operations in order to keep the production plan easy to read, but it may increase the number of modules needed to respond to different situations. The precedence constraints between operations make it easier to group them into a module; the parameterization of certain operations is then transferred to the module, which passes the parameter value back to the operation concerned during the production phase. Case 1 and Case 2 of Table 4 are two examples of modules that define a transfer of the workpiece from the chuck to the accumulation table by calling on the two robots.
These two cases are distinguished, for one of them, by an additional "quality control" operation on the machined part. In the end, we note that grouping operations into modules gives them more visibility and makes them explicit to the users of the MES system. The 4th interface, called "production plan", is dedicated to finalizing a production plan for a given product. The production engineer collects all the modules necessary for the execution of a given production order and imposes inter-module precedence constraints on some of them. In addition, some modules are configured, in particular the quality-control modules, which must be activated in proportion to the number of components manufactured. In this phase, the MES should offer several options. The first consists of taking the precedence constraints into account and proposing all possible scenarios for scheduling the production modules of a defined product. Figure 10 illustrates a production plan for the manufacture of a Pawn, organized into several possible flow scenarios, two of which are of particular interest to us. The first flow transfers the machined part from the lathe chuck to the 3D scanner, activates the scanner to measure and log the dimensional deviations, then transfers the part from the scanner to the conveyor. The second flow transfers the machined part directly from the lathe chuck to the conveyor without going through the 3D scanner. The alternation between these two flows is conditioned by the frequency of dimensional control that one wishes to apply to a population of machined parts. This parameter, controlled by the MES in the "production control" phase, can be fixed or varied depending on the evolution of the dimensional deviations observed over time. The history of the dimensional deviations of the machined parts from their CAD model is a real source of optimization for maintaining production at an optimal rate: this dynamic quality control makes it possible to limit the number of parts rejected for dimensional and geometric non-conformities. Concretely, we identify two actions that can be integrated and managed by the MES in real time. The first is an action at the level of the "machining operation" module, such as readjusting the "depth of cut" parameter in order to compensate for the measured deviation and approach the nominal value of the dimension. The second is a widening of the inspection interval of the machined parts, especially if the drift is observed to be relatively stable and to evolve in a regular manner; this action reduces the use of the mobile robot and the 3D scanner for the manufacture of the "Pawn" component and potentially frees them for the manufacture of other components. The second option consists of optimizing the results from the first one by confronting them with optimization criteria and objective functions, such as minimizing the execution time of the production plan, minimizing production costs or minimizing energy consumption. The optimization can relate to a single criterion (single-objective) or to a combination of several criteria (multi-objective). Concretely, it uses numerical methods to vary module parameters such as the depth of cut, the feed rate, the robot speeds, and so on, in order to identify the configuration that best optimizes the production plan. The MES, in the "production control" phase, can make these parameters evolve to keep them compatible with the workload plan being executed.
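As a toy illustration of the single- and multi-objective selection described above (time, cost, energy), the sketch below scores candidate scheduling scenarios with a weighted sum. The scenario data and weights are invented; a real MES would evaluate these criteria from the platform's actual module parameters and load plan.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    time_h: float      # execution time of the production plan (hours)
    cost: float        # production cost (arbitrary currency units)
    energy_kwh: float  # energy consumption (kWh)

def best_scenario(scenarios, w_time=1.0, w_cost=0.0, w_energy=0.0):
    """Weighted-sum selection; a single non-zero weight reduces to single-objective optimization."""
    def normalize(values):
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

    t = normalize([s.time_h for s in scenarios])
    c = normalize([s.cost for s in scenarios])
    e = normalize([s.energy_kwh for s in scenarios])
    scores = [w_time * t[i] + w_cost * c[i] + w_energy * e[i] for i in range(len(scenarios))]
    return scenarios[scores.index(min(scores))]

# Example: favour time while keeping an eye on energy.
plans = [Scenario("scan every part", 6.0, 120.0, 18.0),
         Scenario("scan 1 part in 5", 4.5, 110.0, 15.0)]
print(best_scenario(plans, w_time=0.7, w_cost=0.1, w_energy=0.2).name)
```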
In this phase, the different scenarios of the production plans can be simulated using the digital twin which will be discussed in "interface 6". The 5th interface called "piloting" is a logical continuation of the previous phase. The objective here is to prepare for the launch of production and to ensure that it is managed in real-time. In this interface, the production plans, selected and validated in the previous phase, will be loaded and positioned in order of priority execution. The MES manages the synchronization of production plans with the capacity load of the platform. In our approach, synchronization is carried out at the level of the modules, or grouping of modules, which make up the production plan, the interest is to bring operational flexibility to the level of the modules. In its preparation phase, the MES takes into account the load plan of each machine, scheduled shutdowns, stocks, initial scheduling, the availability of reference documents, the constitution of batches, etc. In its active phase, it takes into account the real-time balancing of flows, failures, measured deviations, contingencies, traceability and batch release, etc. To this end, the MES developed here will be endowed with a level of autonomy and must therefore be able to make decisions based on ascending data almost in realtime. Thus, the ordering of modules, or grouping of modules, becomes dynamic and its updating depends on the one hand on the evolution of the situation in real-time and on the other hand on the hazards encountered. Interface 6 entitled "Digital twin" (Fig. 11) can be used from interfaces 4 or 5. In fact, in the preparation phase "interface 4", it offers the advantage of visualizing in 3D the progress of a production plan in offline simulation mode, while the real platform continues to operate. The 3D simulation is a crucial step which makes it possible, on the one hand, to validate the various organization options or to check the ordering of the modules, and on the other hand to prevent any type of dysfunction, such as collisions, inconsistencies, flows, etc… which constitutes an advantageous decision support tool. In the "interface 5" production control phase, the digital twin finds its full potential. The ascending data of the physical objects of the platform, via the MES, are placed in their context, and make the digital twin evolve in real-time under the same conditions as the physical object. As a result, engineers benefit from a multi-view interactive 3D visualization of the manufacturing process. 3D objects can display local contextualized information such as performance, rate, machine status, progress, failure, etc. it thus becomes easy to access manufacturing data locally or remotely, which opens the way to new perspectives in telecollaboration terms, in particular on topics such as predictive maintenance, robotics, industrial IT, etc. In this activity, we distinguish three descending data flows that can trigger actions at the platform level. "Production orders" type data activated or manually scheduled, "anomaly management" type data activated remotely, and finally "specific measures" type data activated by the MES. The portability feature of the digital twin makes it easy to view, share and manipulate the platform's data streams. Architecture Based on expectations in terms of functionality, performance, and management, we are initiating a process of overall architecture design. As shown in Figure 12, the definition of a new architecture is built by a nesting of 5 spaces. 
The first is reserved for the design process which must integrate a certain variability at the level of some product parameters. Digital continuity between the design phase and that of the industrialization and production phases will allow simultaneous global optimization of the product and its manufacturing process. The idea behind this approach is that the data generated during the production phase has the possibility of changing some design parameters, via on-board or remote intelligence, with the perspective of maximizing or minimizing a single-objective or multiobjective function. Take the example of an objective function that aims to minimize energy consumption during the machining phase. To this end, the optimization can relate to the variation in cutting speeds but also to the geometric variation of the member so as to minimize its passage over the energy-consuming machine. The digital continuity between design and industrialization and production can generate a very promising global optimization context [19,20]. The second area is dedicated to the import and recording of standard parametric operations. Basic operations are generated by specialized skills through specific tools. Certain operations must be encapsulated in order to make them compatible with the platform's control and command system. Take the example of the transfer function, one of the functions mentioned in Table 2, provided by 33 trajectories that cover almost all of the platform's needs. These trajectories have been programmed with parametric attributes by a specialized skill in order to feed a library of standard trajectories, which can be used by operators according to their needs. All basic operations should be imported and listed in this space before proceeding to generate an initial production plan. The 3rd space (in blue) consists of two functionalities which will interact with the digital twin of the platform 4.0 for different reasons. The first feature allows you to design a flexible production master plan from various predefined modules. This so-called initial plan orders operations in a configuration that does not take into account the limits of the 4.0 platform. The point here is to allow yourself the possibility of having some components manufactured on other remote and interconnected platforms. The second functionality allows you to launch and manage a production plan. From its initial version, the production plan is adapted to the configuration and load plan of the platform and may be subject to optimization. In the production phase, management consists of supervising the execution of all production sequences and reacting with appropriate actions in the event of undesirable events, with a feedback of data and performance indicators that can be viewed through the digital twin. The right-hand side shows schematically the production system, characterized on the one hand by production machines and equipment connected to the controller and on the other by IIoT sensors connected to a local computer. The mass of data generated, through the design, industrialization, and production phases are located in a data lake with a gateway to the cloud. This architecture allows for cross-integration of data, for simultaneous optimization of the product and the production process. Conclusion In conclusion, the previous sections have given us an overview of what a digitalized production system can be. 
The research work we are currently carrying out on the 4.0 platform will allow us to develop, experiment with, and formalize structuring methods and recommendations that will facilitate the digital transition in the field of industrial production. The work presented in this article shows that the MES plays an essential role in intelligent manufacturing processes. It also demonstrates that the notion of operational flexibility must be taken into account in the design phase of an advanced production control system. The modeling that we carried out showed the relevance of our methodological choices and research orientations. Future work will focus on developing the autonomy of the "piloting" function as well as multi-criteria production optimization algorithms.
9,751
2021-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Molecular Dynamic Simulation of Short-Range Order and Hydrogen Diffusion in Disordered Metal Systems. The main concepts of the hydrogen permeability (HP) mechanism in pure crystalline metals have already been established: there are well-founded theoretical models and numerous experimental studies. For disordered systems, in which hydrogen solubility is much higher than in crystalline samples, such works are comparatively recent and rare; in particular, they are devoted to the interaction of hydrogen with amorphous structures. The scarcity of such studies is caused by the thermo-temporal instability of the structure and properties of amorphous materials, which are also influenced by additions of other metals. In the plasma-arc (PAM) and electron-beam (EBM) refining melting of Nb, Zr, Ta and similar metals, the need arises to study the transport of impurity elements (especially hydrogen and iron) in such melts. A number of studies have also indicated that electric and magnetic fields affect the liquid metal and its impurities during melting. Research on the effect of the electric field intensity on the transport properties of impurity elements in liquid metals is therefore very relevant.

This work presents studies of amorphous and liquid systems based on Fe, Pd, Zr, Ta and Si, with and without hydrogen. Experimental results on the short-range order structure and molecular dynamics simulations are considered. Partial structure factors, the radial distribution function of atoms, the mean square displacement of atoms and the diffusion coefficients are calculated. The effect of the hydrogen concentration on its mobility and on the short-range order parameters of the system is analyzed. The effect of the electric field intensity on liquid metals is compared with literature data on impurity removal from Zr and Ta during plasma-arc melting in the presence of hydrogen.

Molecular dynamics calculation method

The molecular dynamics (MD) method was first proposed in (Alder & Wainwright, 1959). The method allows the real-time motion of particles to be analyzed using classical equations; so far it is the only numerical method for studying the dynamics of dense media. The generally accepted MD calculation scheme is the following. A system consisting of several hundred particles with a given interparticle interaction potential is considered. The classical equations of motion of the particles are solved numerically using the Verlet algorithm (Verlet, 1976), which calculates the coordinate of particle i at the next step (k+1) from its coordinates at the current step k and the previous step (k-1):

$$\mathbf{r}_i(k+1) = 2\,\mathbf{r}_i(k) - \mathbf{r}_i(k-1) + \frac{\mathbf{F}_i(k)}{m_i}\,\Delta t^2$$

where $\mathbf{r}_i$ is the radius vector of the particle, $m_i$ its mass, $\mathbf{F}_i$ the resultant force acting on it, and $\Delta t$ the time step. Velocities do not take part in this calculation. Other algorithms for calculating the motion paths are considered in (Polukhin & Vatolin, 1985). Periodic boundary conditions are used in solving the equations of motion: if a particle with momentum $p_i$ exits through a face of the cube, another particle with the same momentum enters through the opposite face, symmetric with respect to the plane through the center of the cube. The interaction in the MD models is defined by the resultant force of the pair interaction potentials in the pair approximation. The temperature of the system is defined from its total kinetic energy. Diffusion coefficients are calculated from the mean square displacement of the particles in the model, $\langle r_i^2(t)\rangle$, averaged over a large number of steps:

$$D = \frac{\langle r^2(t)\rangle}{6\,t} \qquad (2)$$

where $\langle r^2(t)\rangle$ is the mean square displacement of the hydrogen atoms at time $t$.
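The Verlet step and the extraction of the diffusion coefficient from the mean square displacement can be sketched as below. This is a minimal numpy illustration under the assumptions that positions are unwrapped (no periodic jumps) and that D is taken from the long-time slope of the MSD; it is not the authors' code and the potentials and parameters are placeholders.

```python
import numpy as np

def verlet_step(r, r_prev, forces, mass, dt):
    """Velocity-free (position) Verlet step: needs positions at steps k and k-1."""
    return 2.0 * r - r_prev + (forces / mass) * dt**2

def mean_square_displacement(traj):
    """<r^2(t)> averaged over particles; traj has shape (n_steps, n_particles, 3), unwrapped coordinates."""
    disp = traj - traj[0]                      # displacement from the initial configuration
    return np.mean(np.sum(disp**2, axis=2), axis=1)

def diffusion_coefficient(msd, times):
    """Einstein relation, Eq. (2): D = <r^2(t)> / (6 t), estimated from the long-time slope of the MSD."""
    half = len(times) // 2
    slope = np.polyfit(times[half:], msd[half:], 1)[0]
    return slope / 6.0
```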
Radial distribution function The RDF determines the probability of finding any atom at distance r from a chosen atom and is described in the MD model by the well-known formula

$g(r) = \frac{L^3}{N} \, \frac{\langle \Delta N \rangle}{4\pi r^2 \Delta r}$

where $\Delta N$ is the number of particles in a spherical layer of thickness $\Delta r$ at distance r from the chosen particle, L is the edge length of the basic cell cube and N is the number of its particles. The structure factor s(k) is defined by the following equation:

$s(k) = 1 + \frac{4\pi N}{L^3} \int_0^{r_m} \left[ g(r) - 1 \right] \frac{\sin kr}{kr} \, r^2 \, dr$

where k is the wave vector, $k = 4\pi \sin\theta / \lambda$, and $r_m$ is the RDF attenuation radius. The minimum k value in the MD experiment is inversely proportional to the main cube edge, and calculations for smaller k have no physical sense. The final configuration for the RDF in our calculation was chosen when the RDF reached a constant value; this condition requires no fewer than 10000 steps. The coordination number was calculated by integrating g(r) over the first coordination sphere:

$Z = \frac{4\pi N}{L^3} \int_{r_0}^{r_{min}} g(r) \, r^2 \, dr$

where $r_0$ and $r_{min}$ are the left boundary and the first minimum of the first RDF peak. Molecular dynamics calculations were performed in the microcanonical (NVE) ensemble. The particles of the system were randomly distributed in the basic MD cell. The interparticle potentials and their numerical parameters were taken from the works of (Varaksin & Kozjaychev, 1991, Zhou et.al, 2001, Rappe et.al, 1992). General questions of the use of this method have been considered in detail by (Polukhin & Vatolin, 1985).
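The following sketch shows one possible way to evaluate g(r) and s(k) from a single MD configuration with periodic boundary conditions, following the definitions above. The configuration, bin width and k-grid are illustrative assumptions; a production calculation would average over many equilibrated configurations.

```python
import numpy as np

def rdf(positions, box_l, dr=0.02e-9, r_max=None):
    """Radial distribution function g(r) with minimum-image periodic boundaries."""
    n = len(positions)
    r_max = r_max or box_l / 2
    d = positions[:, None, :] - positions[None, :, :]
    d -= box_l * np.round(d / box_l)                  # minimum image convention
    dist = np.linalg.norm(d, axis=-1)[np.triu_indices(n, 1)]
    bins = np.arange(dr, r_max, dr)
    hist, edges = np.histogram(dist, bins=bins)
    r = 0.5 * (edges[1:] + edges[:-1])
    shell_vol = 4.0 * np.pi * r**2 * dr
    rho = n / box_l**3
    return r, hist / (shell_vol * rho * n / 2.0)      # unique-pair normalisation

def structure_factor(r, g, rho, k):
    """s(k) = 1 + 4*pi*rho * integral_0^rm (g(r)-1) sin(kr)/(kr) r^2 dr."""
    integrand = (g - 1.0) * np.sin(k[:, None] * r) / (k[:, None] * r) * r**2
    return 1.0 + 4.0 * np.pi * rho * np.trapz(integrand, r, axis=1)

# toy usage with a random configuration (a real run uses equilibrated MD coordinates)
rng = np.random.default_rng(1)
box = 2.45e-9
pos = rng.random((216, 3)) * box
r, g = rdf(pos, box)
k = np.linspace(5e9, 8e10, 200)                        # wave vectors, 1/m
s = structure_factor(r, g, 216 / box**3, k)
```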
Hydrogen in amorphous and recrystallized Fe-Ni-Si-B-C-P alloy (experiment) Experimental studies of the effect of hydrogen absorption on the structure and physical-chemical properties of transition-metal (palladium and iron) alloys are presented in (Pastukhov et.al, 1988). These studies indicated that hydrogen absorption leads to a considerable shift of the start and finish of structure relaxation toward a higher heating temperature interval. This process provokes a significant modification of the strength properties of the amorphous (iron-based) material and leads to increased embrittlement. All the mentioned changes are adequately displayed on the atom distribution curves (fig. 1) obtained from diffraction experiment data (Vatolin et.al, 1989). The hydrogen permeability of an amorphous and a recrystallized Fe-based (Fe 77.333 Ni 1.117 Si 7.697 B 13.622 C 0.202 P 0.009) alloy membrane (25 micron thickness) was studied by the stationary-stream method (Pastuchov et.al, 2007). The recrystallized alloy was prepared from the amorphous specimen by vacuum annealing at 400 °C. Injection of molecular hydrogen at the input side of the degassed specimen at the maximal acceptable temperatures (300 °C for the amorphous and 400 °C for the recrystallized specimen) did not lead to a noticeable increase of the output stream. At 10 torr hydrogen pressure the stream reached a value of 3.8·10^12 cm^-2 s^-1. A glow discharge in hydrogen was used to remove the passivation layer of the specimen. Hydrogen ions formed in the glow discharge readily penetrate into the bulk of the specimen (Lifshiz, 1976), and with this procedure we observed a significant penetrating stream. All measurements were carried out at 2 torr hydrogen pressure, at which the discharge is most stable. Temperature dependences of the stable (stationary) hydrogen stream were determined for the amorphous and recrystallized specimens. The lower limit of the studied temperature interval was set by the possibility of reliable stream registration, which was 125 °C for the amorphous and 200 °C for the crystalline specimens. The most important difference between the two states of the studied alloy is a non-monotonic increase of the output stream with temperature in the amorphous state. Hydrogen stream stabilization has a different nature in the amorphous and recrystallized specimens, but both cases are characterized by a rapid increase of the output stream with characteristic stabilization times of 30-60 s. The amorphous membrane is characterized by a very elongated hydrogen output with a 6000 s stabilization time after the rapid output increase at temperatures from 125 °C up to 225 °C. The dependence of the hydrogen stream on inverse temperature is illustrated by fig. 2. The dependence is not monotonous and has a maximum in the 200 °C region: the stream increases from 125 °C and reaches a maximum value of 3.3·10^13 cm^-2 s^-1 at 200 °C, while subsequent heating produces an anomalously sharp decrease. The second specimen follows the classic Arrhenius dependence with an activation energy of 17.9 kJ/mol and a maximum stream value of 2.7·10^13 cm^-2 s^-1 at 375 °C (fig. 2). Amorphization of the alloys leads to a considerable increase of free volume, which increases hydrogen permeability, solubility and diffusion. Special attention should be paid to the effect of the hydrogen permeability changing by an order of magnitude with a comparatively small increase of solubility. This effect is explained by competition from amorphization elements, which occupy large Bernal polyhedron cavities, primarily in the region of high amorphization-element concentration (Polukhin et.al, 1997). The "overextended" approach of the stream to its stationary value is evidently related to reversible capture of the diffusant (Herst, 1962): the probability of hydrogen escape from the traps increases faster than the capture probability. It was shown experimentally that upon heating up to 200 °C a small increase of the hydrogen stream is indeed observed, and its decrease begins after 200 °C. Such behavior is characteristic of traps whose activation energies of escape and capture satisfy E_esc > E_cap. The decrease of the penetrating stream in the amorphous specimen in the temperature interval from 200 °C up to 300 °C is most probably related to surface processes. Since the penetrating stream is three orders of magnitude less than the stream incident on the input surface (V_f ≈ 10^16 cm^-2 s^-1), the balance of streams is written in terms of the streams of ion-induced re-emission and thermal desorption on the input side. The term C_i is the hydrogen concentration in the unviolated alloy structure near the input surface, and b_i is a pre-exponential factor. The maximal obtainable concentration in the near-surface layer, C_max, at room temperature (when thermal desorption is negligible) is estimated as 10^18 at/cm^3 (Grashin et.al, 1982, Sokolov et.al, 1984). Assuming that the concentration on the output side is much less than C_i, an expression for the stationary penetrating stream is obtained, in which E_d is the diffusion activation energy. The rates of diffusant capture and release are equal in the stationary state and do not affect the stationary stream intensity. Here E_a = E_d - E_i ~ 19.6 kJ/mol in our calculation, which is close to E_a = 17.9 kJ/mol obtained experimentally. The diffusion activation energies are obviously related to the structure of the specimens.
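The activation energies quoted above follow from the Arrhenius dependence of the stationary stream on temperature. A minimal sketch of such an extraction is given below; the temperature-stream points are invented placeholders used only to show that the fit recovers an assumed E_a of about 17.9 kJ/mol, they are not the measured data.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def arrhenius_energy(T_celsius, J):
    """Fit ln J = ln J0 - Ea/(R T); returns Ea in kJ/mol."""
    T = np.asarray(T_celsius) + 273.15
    slope, _ = np.polyfit(1.0 / T, np.log(J), 1)
    return -slope * R / 1000.0

# placeholder points (temperatures in C, streams in cm^-2 s^-1), generated from an assumed Ea
T_c = np.array([200.0, 250.0, 300.0, 350.0, 375.0])
J = 2.7e13 * np.exp(-17.9e3 / R * (1.0 / (T_c + 273.15) - 1.0 / (375.0 + 273.15)))
print(f"Ea ~ {arrhenius_energy(T_c, J):.1f} kJ/mol")   # recovers ~17.9
```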
The presence of excess free volume in the amorphous alloy provides lower energy consumption for hydrogen atom jumps from one interstice to another. Besides, some fraction of the interstices may be irregular Bernal cavities, i.e., deformed ones. The thermodesorption activation energy in the recrystallized alloy is less than in the amorphous one, E_i^cr < E_i^am. This fact can be explained by surface reconstruction and a change of the passivation layer with respect to hydrogen desorption. Hydrogen effect on the short order structure of liquid, amorphous and crystalline silicon Owing to its semiconductor properties, silicon has found wide application in modern microelectronics and electronic technology. Hydrogen is generally recognized to play an important role in the formation of various complexes in amorphous silicon. The attention to hydrogen behavior in silicon is explained by its effect on physical-chemical properties, which offers the opportunity to develop new materials with required properties. Hydrogen diffusion in crystalline Si was studied by the TBMD (tight-binding molecular dynamics) method (Panzarini & Colombo, 1994). The model considered a single hydrogen atom in a 64-atom supercell of silicon. On the basis of the TBMD data the authors concluded that the hydrogen diffusion mechanism in crystalline silicon follows the Arrhenius law and that there are no other "anomalous" mechanisms besides, for example, single hops. The short-order structure of amorphous silicon was studied in the works of (Pastukhov et.al, 2003, Gordeev et.al, 1980). It was found that amorphous silicon retains the covalent bond type with coordination number Z = 4.2, in contrast to the melt, where the bond has metallic character (Z = 6.4). Experimentally estimated values of hydrogen diffusion factors in amorphous Si are scarce, and the published results are not in good agreement. Experimental data on the hydrogen permeability of amorphous silicon are presented in (Gabis, 1997). For hydrogen transfer through an amorphous silicon film the author used a model in which, besides diffusion, the low rate of surface processes as well as capture and temporary keeping of the diffusing hydrogen atoms in traps were taken into consideration. In the author's opinion, hydrogen transfer is related to reconstruction of local silicon-hydrogen bonds. We used the interparticle potential of (Tersoff, 1986) and the MD method to calculate the structure parameters and diffusion factors of Si and H in crystalline, amorphous and liquid silicon (Pastukhov, 2008). Calculations were carried out for a system containing 216 silicon atoms and 1 hydrogen atom in the basic cube, using periodic boundary conditions. The cube edge length was taken according to the experimental density of the system under consideration at 298 K. The molecular dynamics results are presented in fig. 5, 6 and in table 1. The mean values of the valence angles were found from the first and second coordination sphere radii, $r_1$ and $r_2$, using the following formula:

$\theta = 2 \arcsin\!\left(\frac{r_2}{2 r_1}\right)$

Analysis of the experimental data shows that, to a certain approximation, there is one metastable equilibrium configuration of atoms with coordination number 4 in the c-Si and a-Si materials, with hydrogen as well as without it. The sharpness of the first peak of the intensity curve (fig. 6) indicates comparatively large ordering in a-Si. The first and second maxima of the RDF curves practically coincide; differences are observed in the subsequent part of the curves, and the third maximum of the RDF for a-Si is practically absent.
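A small helper for the valence-angle relation given above, θ = 2 arcsin(r2/2r1). The radii in the example call are typical textbook values for crystalline silicon, not the values of table 1.

```python
import math

def valence_angle(r1, r2):
    """Mean valence angle (degrees) from first (r1) and second (r2) coordination sphere radii."""
    return math.degrees(2.0 * math.asin(r2 / (2.0 * r1)))

print(valence_angle(0.235, 0.383))   # ~109.2 deg, close to the tetrahedral angle of c-Si
```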
Computer calculations for the Si-H model found that the hydrogen diffusion mechanism in crystalline Si with n-type conductivity is realized by migration of electro-neutral hydrogen atoms through tetrahedral interstices, by the same principle as screened proton diffusion in amorphous transition metals (Vatolin et.al, 1988). However, the hydrogen atom path through the matrix nodes is accompanied by Si-Si bond breakage, due to a 0.05 nm shift of the Si atom from the occupied node and the formation of a chemical Si-H bond and a free Si bond left in the lattice node. Hydrogen diffusion in the amorphous Pd-Si alloy The model system (Pastukhov et.al, 2009) used in the MD method for studying hydrogen behavior in the amorphous Pd-Si alloy at T = 300 K consisted of 734 palladium particles, 130 silicon particles and 8 hydrogen particles in a cubic cell with an edge length of 2.44869 nm. The integration of the equations of motion was carried out with a time step of 1.8·10^-15 s. The short-order structure analysis of the amorphous Pd materials (Sidorov & Pastukhov, 2006) and of Pd-Si (15 at.%) with hydrogen was carried out using the partial functions g_ij(r) of the Pd-Pd, Pd-Si and Pd-H pairs (Pastukhov et.al, 2009) (fig. 7 and 8). The second peak of the g_ij(r) curve for Pd-H (fig. 7) has a shoulder of changed symmetry in comparison with the g_ij(r) curve for Pd-Pd. According to (Herst, 1962), the distances related to the second g(r) peak are formed by three types of contact: a) two Pd atoms through one Pd atom (r = 2r_0); b) two Pd atoms through two Pd atoms (r = 1.732r_0); c) two Pd atoms through three Pd atoms (r = 1.633r_0). The Pd-Pd contact most easily affected by hydrogen turns out to be the one realized by type b). The changes of the amorphous palladium structure due to the presence of hydrogen are caused by a redistribution of the formed distances toward larger values (the right sub-peak of the second RDF peak). The observed inversion of the second-peak splitting of g_ij(r) for amorphous palladium with hydrogen provides information about the short-order reforming of the metal. The differences in the second peak of g_ij(r) for Pd-Pd and Pd-H indicate a strong effect of hydrogen on the palladium matrix structure. The effects observed in the MD model allow the assumption of the presence of micro-groupings identified as stable hydride structures, indicating a high degree of presence of dissipative structures of the Pd-H and Si-H types (Ivanova et.al, 1994, Avduhin et.al, 1999). A hump observed close to 3.44 nm^-1 (fig. 9) on our calculated and experimental (Polukhin 1984) structure factor curves for the amorphous state, with hydrogen as well as without it, has no interpretation so far. The authors (Polukhin & Vatolin, 1985) have shown by the statistical geometry method that the Voronoy polyhedrons occurring most often in amorphous metals are polyhedrons with coordination numbers 12, 13, 14 and 15 for a given site atom. The structure model of the amorphous Pd-Si alloy is supposed to consist of palladium micro-groupings characterized by a distorted triangular pyramid with a 2.5 Å leg (Pd-Si) and a regular base triangle with a 2.71 Å leg (Pd-Pd) (Polukhin, 1984). The motion of separately chosen Pd, Si and H atoms in our model differs in character.
As the MD model calculations show, not only can silicon atoms affect hydrogen mobility, but hydrogen itself can considerably change the diffusion of the other components of the alloy. For example, the Pd-Si system without hydrogen has D_Si = 4.93·10^-6 cm^2·s^-1, but D_Si = 2.53·10^-6 cm^2·s^-1 in the presence of hydrogen. There are different energy zones in an amorphous system, which lead to different time modes of hydrogen diffusion. Therefore the law governing the energy change of a diffusing particle in amorphous metals should be of statistical nature and be defined by the distribution of cavity types. Owing to the small fraction of octahedral cavities, three types of diffusion processes are possible in amorphous metals: octahedron-octahedron, octahedron-tetrahedron-octahedron, and tetrahedron-tetrahedron. Since the volume change in the hydrogenization process for a crystal is similar to that for amorphous alloys (Kircheim et.al, 1982), this fact indirectly indicates that hydrogen occupies similar Bernal polyhedrons (tetrahedrons and octahedrons). Interstitial diffusion factors in a disordered material can be calculated as a function of temperature and concentration, with a distribution of hydrogen site energies and a constant saddle-point energy, according to the Kircheim formalism (Kircheim et.al, 1985) (Equation (11)). Here D_0* is the pre-exponential factor and E_0 the mean activation energy, equal to the difference between the mean hydrogen energy, calculated from the energy distribution function, and the constant saddle-point energy (if the energy distribution is Gaussian). Hydrogen diffusion factors were calculated from Equation (11) as a function of concentration for the amorphous Pd 83 Si 17 alloy at T = 298 K. It was found that D_H increases with increasing hydrogen concentration. Based on the temperature dependence of D_H, the diffusion activation energy was estimated as E_0 = 18.9 kJ/mol. It should be noted that the activation energy for the crystalline alloy is higher: it is equal to 26 kJ/mol, independently of hydrogen concentration. According to the Richards theory (Richards, 1983), hydrogen preferentially occupies low-energy interstices, i.e., polyhedrons with large faces. As the hydrogen concentration increases, hydrogen fills the low-energy interstices, forcing further H atoms to overcome higher potential barriers; this neutralizes one of the factors that decreases diffusion mobility. On the other hand, the location of hydrogen atoms in higher-energy interstices leads to a decrease of the activation energy. The described mechanism does not affect diffusion as long as most of the hydrogen atoms absorbed by the metal are found in low-energy interstices (which are traps for H atoms); a sharp increase of the diffusion factor takes place only after the traps are saturated by hydrogen.
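The Richards/Kirchheim trap-filling picture described above can be made concrete with a toy calculation: sample a Gaussian distribution of interstice energies, fill the lowest sites first, and evaluate an effective diffusion factor against a constant saddle-point energy. All numbers (the pre-exponential factor, mean site energy and distribution width) are assumptions chosen only to reproduce the qualitative growth of D_H with hydrogen concentration; this is not the actual Kirchheim expression of Equation (11).

```python
import numpy as np

def effective_diffusion(conc, d0=1e-3, e_saddle=0.0, e_mean=-0.35, sigma=0.10, T=298.0):
    """Effective D_H when the lowest-energy interstices (traps) are filled first.

    conc   - fraction of interstices occupied by hydrogen (0..1)
    e_mean - mean interstice energy relative to the saddle point (eV), assumed
    sigma  - width of the Gaussian site-energy distribution (eV), assumed
    """
    k_b = 8.617e-5                                        # Boltzmann constant, eV/K
    rng = np.random.default_rng(1)
    site_e = np.sort(rng.normal(e_mean, sigma, 200000))   # site energies, deepest first
    d_eff = []
    for c in np.atleast_1d(conc):
        occupied = site_e[: max(1, int(c * len(site_e)))]
        barriers = e_saddle - occupied                     # jump barrier out of each occupied site
        d_eff.append(d0 * np.exp(-barriers / (k_b * T)).mean())
    return np.array(d_eff)

print(effective_diffusion([0.01, 0.1, 0.5]))               # D_H grows with H concentration
```

At low concentration only the deepest traps are occupied and the mean escape rate is small; as shallower interstices are filled, the effective barrier drops and the diffusion factor rises sharply, in line with the trap-saturation argument above.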
Hydrogen diffusion in the amorphous Ni-Zr alloys Computer calculations of the structure and properties of the amorphous Ni-Zr and Ni-Zr-H alloys are presented in fig. 10, 11 and table 2. The model system, unlike (Pastukhov et.al, 2009, 2010), contained 640 (360) particles of nickel, 360 (640) particles of zirconium and 1 (2) particles of hydrogen in the cubic cell. The integration of the equations of motion was carried out with time steps of 1.1·10^-15 s. The general structure factors for Ni 64 Zr 36 (alloy 1, curve 4) and Ni 36 Zr 64 (alloy 2, curve 1), with hydrogen and without it, are presented in fig. 10. All curves have the diffuse interference maxima proper to the amorphous state, which indicates that the amorphous state is retained upon hydrogen absorption at low as well as at high concentrations of the hydride-forming element and of hydrogen in the alloy. Increasing the number of H atoms in the MD model of alloy 1 initially results in a displacement of the structure factor peaks toward lower scattering vectors (S) and in an increase of the main peak height (h); then the displacement reverses, toward higher S and lower h values. This testifies that the quantity of H atoms affects the amorphous alloy structure. All peaks of a(s) become more pronounced, and the oscillations extend to higher scattering vectors. The authors (Sadoc et.al, 1973, Maeda & Takeuchi 1979) showed that icosahedral atomic packing dominates in the structure of amorphous metals, where a high concentration of polyhedrons with coordination number 12 occurs. The height of the main structure factor maximum and the form of the bifurcated second peak are determined by the number of contacting polyhedrons and their type of bonding (Brine & Burton, 1979). The short order of the amorphous alloy can therefore be described with the help of a coordinating icosahedral cluster, which is the basic structural unit of the NiZr 2 crystal. Hydrogen in such a structure can be located in numerous tetra-cavities formed by Ni and Zr atoms (Kircheim et.al, 1988). For alloy 2, which is close to the NiZr 2 composition (curve 1), the positions of the two first maxima of the a(s) curve correspond to the averaged positions of the interference lines of the crystalline NiZr 2 compound. Including a hydrogen atom in the MD model (curve 2) leads to strong smearing and a height decrease of the relatively well resolved structure factor peaks, due to hydrogen penetration into the numerous cavities of the amorphous structure. Hydrogen atoms probably form with Zr some kind of quasi-crystalline ZrH 2 lattice (Sudzuki et.al, 1987). This assumption reveals itself in a better resolution of the short- and long-range diffraction maxima (curves 3, 6) of the structure factors for alloys with high contents of Zr and H atoms. The partial radial distribution functions g_ij(r) of the model systems and the short order parameters are presented in fig. 11 and in table 2.
For all alloys with low and zero hydrogen content, the shortest interatomic distance of the Ni-Ni pair remains constant (0.240 nm), decreasing to 0.230 nm when the hydrogen content increases to two atoms. The interatomic distances of the Ni-Zr and Zr-Zr pairs decrease considerably with the growth of Zr and H concentration. We note that r_Ni-Ni and r_Zr-Zr are close to the Ni and Zr atomic diameters (0.244 nm and 0.324 nm, respectively), and the distance between Ni and Zr atoms is somewhat less than the sum of the Ni and Zr atomic radii, which is confirmed by diffraction experiment results (Buffa et.al, 1992). This fact confirms bond formation between these elements due to hybridization of the vacant 3d electron band of Ni and the 4d band of Zr (Hafner et.al, 1993). The calculated diffusion coefficients of hydrogen for the amorphous Ni-Zr-H alloys are presented in table 2. The value of D_H varies from 2·10^-4 down to 1.2·10^-5 m^2·s^-1, within the same limits as the diffusion coefficients of H atoms in an icosahedral TiNiZr alloy (Morozov et.al, 2006), as follows from table 2. The activation energy of hydrogen diffusion for the amorphous Ni 64 Zr 36 alloy was estimated in the 298-768 K temperature interval; a value of E = 0.1 eV was obtained. This result on H atom diffusion may be explained by the various energy positions of interstices (Richards, 1983, Kircheim et.al, 1988) in disordered materials. Deep potential wells act like traps (octa-cavities) and are occupied by hydrogen initially. Then hydrogen occupies interstices with higher energy values (tetra-cavities) and an abrupt increase of D_H is observed. Hydrogen and electric field effect on iron impurity diffusion in the Zr-Fe melt The dependence of the iron and zirconium diffusion factors on electric field intensity and on hydrogen presence in molten zirconium was analyzed by the molecular dynamics (MD) method. The model system for studying the behavior of iron and hydrogen ions in the Zr-Fe-H melt at T = 2273 K in the presence of an electric field contained 516 zirconium particles, 60 iron particles and 1 hydrogen particle in a cubic cell with edge a = 2.44195 nm. The integration of the equations of motion was carried out with 1.1·10^-15 s time steps. The interparticle potentials and their parameters were taken from (Varaksin & Kozyaichev, 1991, Zhou et.al, 2001). The calculated results on impurity migration in molten zirconium are compared with experimental data (Lindt et.al, 1999, Ajaja et.al, 2002, Mimura et.al, 1995). The partial radial distribution functions g_ij(r) for the zirconium-iron melt are presented in fig. 12. The most probable interatomic distance in the first coordination sphere is close to the sum of the atomic radii of iron and zirconium (r_Zr-Fe = 0.29 nm, r_Fe = 0.130 nm, r_Zr = 0.162 nm).
Comparison of these results with computer simulation data for the Ta-Fe melt (Pastukhov et.al, 2010, Vostrjakov et.al, 2010) reveals a rather similar character of the radial distribution functions for the large atoms, namely Ta-Ta (0.29 nm, r_Ta = 0.145 nm) and Zr-Zr (0.324 nm, r_Zr = 0.162 nm). The iron and zirconium diffusion factors in the zirconium melt, in the presence as well as in the absence of an electric field and of hydrogen, were calculated at 2273 K by the MD method (fig. 13 and 14). The diffusion factor of iron (D_Fe) in the zirconium melts with hydrogen depends linearly on the electric field intensity (E) and on the iron concentration. The hydrogen diffusion factor decreases only slightly, from 2.16·10^-4 cm^2·s^-1 to 1.94·10^-4 cm^2·s^-1, when the electric field intensity increases from 900 to 1020 V/m. Introducing hydrogen into the system at an iron concentration of about 0.1% decreases the D_Fe value from 7.86·10^-5 to 6.36·10^-5 cm^2·s^-1, and applying an electric field of 1020 V/m intensity decreases D_Fe further to 5.23·10^-5 cm^2·s^-1 (fig. 14). The calculated changes of D_Fe as a function of E were compared with the evaporation rate constant of Fe ions from Zr, calculated from the equation of (Pogrebnyak et.al, 1987, Vigov et.al, 1987), in which ν is the ion vibration frequency (10^13 s^-1), together with the Fe impurity concentration, the Fe evaporation heat, the first ionization potential I, the electric field intensity E, the electron work function W and the ion charge q; the values of the evaporation heat, I, W and R are taken in electron-volts and E in volts per angstrom. (Fig. 12 shows the partial radial distribution functions g_ij(r) for the Zr-Fe melt at 2273 K, calculated in terms of the MD model.) The diffusion factor D directly depends on the rate (k) and time (t) of evaporation of the main metal (Kuznetsov et.al, 1968). The dependences of log k and log D_Fe on E (fig. 15) are rather similar. Thus it may be assumed that the limiting factor of Fe removal from the Zr melt is the diffusion of Fe. Hydrogen is considered a light interstitial impurity in metals with different cell types; hydrogen diffusion is therefore a significant problem in research on high-temperature metal refining. Impurities have less effect on incoherent diffusion, so this kind of diffusion becomes dominant at high temperature (Maximov et.al, 1975). We compared the hydrogen diffusion factors in Ta at 3400 K (Pastukhov et.al, 2010) and in Zr at 2273 K (Ajaja et.al, 2002); these values are 1.7·10^-5 and 5.01·10^-4 cm^2·s^-1, respectively. The authors (Maximov et.al, 1975) explain such a difference by the dependence of the hydrogen diffusion activation energy (E_a) on the atomic mass of the metal, its Debye frequency, modulus of elasticity and volume change upon hydrogen addition. Calculated values of E_a for different metals (Flynn et.al, 1970) are in quantitative agreement with experimental data. The temperature dependence of D_H at high temperatures is described in terms of the theory of (Flynn et.al, 1970). (Shmakov et.al, 1998) calculated D_H in zirconium at 2273 K without electric field influence as 3.862·10^-4 cm^2·s^-1, which differs little from our calculated value D_H = 5.01·10^-4 cm^2·s^-1. We estimated the diffusion layer thickness (x) by Equation (13) (Flynn et.al, 1970). The calculations were based on the experimental time dependence of the Fe concentration (Mimura et.al, 1995); the Fe concentration itself was calculated by the MD method. In the equation, C_0 and C(x,t) are the impurity concentrations in the initial and refined zirconium (Flynn et.al, 1970), and t is the time of refining. The calculated value of x is 7·10^-2 cm; in order of magnitude it is close to the data on silicon boriding (Filipovski et.al, 1994), which give 1.6-1.8·10^-2 cm. It should be noted that the thickness of the interaction zone of a zirconium shell with molten uranium is 0.2·10^-2 cm (Belash et.al, 2006). We also calculated the rate of iron removal (G) from zirconium from the decrease of the iron concentration during matched time intervals of plasma-arc melting (PAM) with hydrogen, based on the experimental data of (Mimura et.al, 1995).
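A minimal sketch of the diffusion-layer estimate discussed above, assuming that Equation (13) has the standard error-function form C(x,t)/C0 = erf(x/2√(Dt)) for a semi-infinite melt with a surface sink; the inputs below (D_Fe, refining time, concentration ratio) are placeholders rather than the values taken from (Mimura et.al, 1995).

```python
import numpy as np
from scipy.special import erfinv

def diffusion_layer_thickness(d_coeff, t, c_ratio):
    """Solve C(x,t)/C0 = erf(x / (2 sqrt(D t))) for x; CGS units (cm, s)."""
    return 2.0 * np.sqrt(d_coeff * t) * erfinv(c_ratio)

# placeholder inputs: D_Fe ~ 5e-5 cm^2/s, 90 min of refining, C(x,t)/C0 = 0.5
x = diffusion_layer_thickness(5e-5, 90 * 60, 0.5)
print(f"x ~ {x:.2e} cm")
```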
Hydrogen and electric field effect on iron impurity diffusion in the Ta-Fe melt The radial distribution functions and diffusion constants of hydrogen and iron atoms in the tantalum melt at 3400 K were found by the MD method in the presence and absence of an outer electric field (fig. 16). The model system consisted of 486 tantalum, 1 iron and 1 hydrogen atoms in a cubic cell with an edge length of 2.13572 nm. The computer experiment data show that the short order of the Ta-Fe-H system at 3400 K is close to the tantalum structure: the first maximum at R_Ta-Ta ≈ 0.29 nm corresponds to the tantalum atomic diameter of ≈ 0.292 nm. Smearing of all RDF maxima is observed when the electric field and hydrogen are present in the Ta-Fe system; this fact may indicate a transition of the liquid to a more disordered structure. An unusual kind of RDF curve for Ta-H atom pairs is obtained at an electric field intensity of 1020 V/m (fig. 17): the first maximum of the curve is bifurcated; the first sub-peak at r_1 = 0.22 nm corresponds to one of the most probable Ta-H distances, and the second sub-peak at r_2 = 0.24 nm corresponds to Ta-Fe without the electric field. The dynamics and local structure of the surroundings close to hydrogen in the ternary interstitial alloy should depend on the hydrogen concentration, temperature and the short order of the solvent structure. There is no conventional opinion about the hydrogen location in such systems. Since the hydrogen atom radius is 0.032 nm, it can occupy octahedral (0.0606 nm) as well as tetrahedral (0.0328 nm) positions (Geld et.al, 1985). A particularity of the Ta-H system is that, from geometric considerations, the distance between a tantalum atom and the octahedral interstice centre in the tantalum cell is less than the sum of the Ta (0.146 nm) and H (0.032 nm) radii; the Ta-H distance remains less than the equilibrium distance, and hydrogen does not always occupy the interstice centre. At the same time, the distance from the octahedral interstice centre to the second neighbors, R_2 = a/√2 = 0.234 nm, is larger than the tantalum plus hydrogen distance. This leads to a shift of the hydrogen atoms from the octahedron centre toward one of the neighboring tantalum atoms.
A shift of hydrogen from the geometric centre of the octahedral position during heating of the studied system is also possible. According to the computer experiment data, the distance between the nearest tantalum and hydrogen atoms changes when the electric field and hydrogen are present, whereas the Ta-Fe distance remains constant (table 4). The hydrogen diffusion constant increases to about 7·10^-4 cm^2·s^-1, a change somewhat smaller than the increase of the iron diffusion constant from 1.3·10^-5 to 1.5·10^-4 cm^2·s^-1. Thus hydrogen increases the mobility of iron atoms more than the electric field does; this fact is in good agreement with the relative strengths of the Ta-H and Ta-Fe bonds. Conclusion The structure of amorphous and liquid systems based on Fe, Pd, Zr, Ta and Si, in the presence and absence of hydrogen atoms, was studied by means of X-ray diffraction and molecular dynamics methods. A strong effect of H atoms on the structure of the amorphous Fe-Ni-Si-B-C-P, Pd-Si and Ni-Zr matrices was found. The observed changes of the RDF in the presence of hydrogen, revealed as a better resolution of the close and distant maxima, could indicate the formation of stable hydride bonds of the Pd-H, Si-H and Zr-H types. The hydrogen diffusion constants calculated by the MD model increase with H concentration and with the presence of a hydride-forming element in the alloy (the Ni-Zr-H system). Not only do the components of the amorphous alloy (Pd-Si-H) affect the mobility of H atoms, but hydrogen atoms can considerably change the diffusion of other components (Si). Refining processes of liquid high-melting metals such as Zr and Ta containing Fe impurities can be analyzed by the MD method for the PAM and EBM melting technologies. The method makes it possible to estimate the limiting stage of the process and the effect of the electric field and of hydrogen present in the system on the Fe diffusion constant in the melts. This research was carried out with the financial support of Minobrnauka (Federal contract 16.552.11.7017); the scientific equipment of the CKP "Ural-M" centre was used. Fig. 3. Stable (stationary) hydrogen stream through the amorphous membrane. Surface processes and the correlation of the E_d and E_i values define the temperature dependence of the stationary stream. The concentration C_am for the amorphous alloy keeps its maximum value up to a temperature of 175 °C and the penetration rate is defined by diffusion, so the stream increases; a further temperature increase leads to an exponential decrease of the C_i concentration, and the correlation E_d < E_i leads to a decrease of the stream. The input concentration C_cr for the recrystallized alloy decreases over the whole temperature interval (fig. 4, curve 2), and the relation E_d > E_i leads to the classic Arrhenius dependence J ∝ exp(-E_a/RT) (9). Fig. 5. RDF of the Si-Si (1) and Si-H (2) atom pairs for crystalline silicon with hydrogen at 278 K. Fig. 7. Partial RDF in the amorphous Pd-H system. Fig. 9. Structure factor for amorphous Pd-Si with hydrogen obtained by the MD model calculation. Mixed-type micro-groupings occurring in the Pd 85 Si 15 alloy are formed mostly with BCC and FCC polyhedron types; the number of particles does not exceed 13-14 in one cluster. Fig. 13. Dependences of D_Fe and D_Zr on iron concentration at 2273 K (MD calculation).
Thus Equation (8) does not include the parameters of hydrogen interaction with the traps. The approximation results are displayed by the solid curves in fig. 3. The energy values E_d^am = 40.8 and E_i^am = 86.7 kJ/mol for the amorphous specimen, and E_d^cr = 71.2 and E_i^cr = 51.7 kJ/mol for the crystalline specimen, give good agreement with the experimental data. The concentration calculation on the input membrane side by Equations (6) and (7), accounting for the thermodesorption activation energies, is illustrated by fig. 4. The parameter C_max used in the calculation has no effect on the activation energies, but affects only the pre-exponential factors. Table 2. Short order parameters for the amorphous alloys in the Ni-Zr and Ni-Zr-H systems. Table 3. Dependences of L and G on the mean residual iron content in zirconium for the PAM process. C_Fe is the mean iron concentration at the initial (15 min), middle (90 min) and final (165 min) stages of the melt. Table 4. Partial interatomic distances in the Ta-Fe melt from the MD calculation.
7,533.4
2012-04-05T00:00:00.000
[ "Materials Science" ]
Radiation Tolerance and Charge Trapping Enhancement of ALD HfO2/Al2O3 Nanolaminated Dielectrics High-k dielectric stacks are regarded as a promising information storage medium in Charge Trapping Non-Volatile Memories, which are the most viable alternative to the standard floating gate memory technology. The implementation of high-k materials in real devices requires (among other investigations) an estimation of their radiation hardness. Here we report the effect of gamma radiation (60Co source, doses of 10 and 100 kGy) on the dielectric properties, memory windows, leakage currents and retention characteristics of nanolaminated HfO2/Al2O3 stacks obtained by atomic layer deposition, and its relationship with post-deposition annealing in oxygen and nitrogen ambient. The results reveal that, depending on the dose, either an increase or a reduction of all kinds of electrically active defects (i.e., initial oxide charge, fast and slow interface states) can be observed. Radiation generates oxide charges with a different sign in O2 and N2 annealed stacks. The results clearly demonstrate a substantial increase in the memory windows of the as-grown and oxygen treated stacks resulting from enhancement of the electron trapping. The leakage currents and the retention times of O2 annealed stacks are not deteriorated by irradiation, hence these stacks have high radiation tolerance. Introduction The charge trapping in thin dielectric films has been intensively investigated recently in order to employ this phenomenon in non-volatile memories as a replacement of the existing floating gate technology [1][2][3][4][5][6][7]. The charge trapping memory (CTM) design has a lot in common with the floating gate design. The main difference is that the CTM concept uses charge storage in spatially separated charge traps in dedicated dielectric layers, while the floating gate concept relies on keeping charges in a potential well realized through a poly-Si layer (floating gate) sandwiched between two dielectrics [6]. The CTM concept is not new but offers some advantages over the floating gate design that are vital for the continuing scaling of non-volatile memories [7]. The introduction of high-k dielectrics in microelectronic technology boosted CTM development, as these dielectrics have been proven to possess large densities of traps whose parameters can be tailored by the fabrication processes and subsequent treatments. Recently, we demonstrated the excellent application capability of Al 2 O 3 /HfO 2 stacks obtained by the atomic layer deposition technique (ALD) as charge trapping media in CTM-based non-volatile memory devices. Moreover, it was established that by tailoring the stack structure, optimizing the Al 2 O 3 to HfO 2 ratio, and applying post-deposition annealing (PDA), a substantial improvement of the charge storage characteristics of Al 2 O 3 /HfO 2 stacks could be achieved [31,32]. The results obtained in [11,31,32], also supported by others [33], demonstrated unequivocally that oxygen annealing substantially stimulates electron trapping in deep traps, thus enhancing the charge storage ability of the stacks. On the contrary, rapid thermal annealing (RTA) in N 2 results in a substantial decrease of the memory windows. In this work, the effects of γ-radiation ( 60 Co) on the electrical characteristics and charge trapping of Al 2 O 3 /HfO 2 nanolaminated stacks deposited on Si by ALD are investigated.
The focus is on the properties (memory windows, leakage currents and retention) which are of primary interest for implementation of these structures in non-volatile memories. The influence of post-deposition ambient annealing on the radiation response is also examined, as it strongly affects the stack parameters. Materials and Methods Nanolaminated Al 2 O 3 /HfO 2 stacks were deposited on p-type (100) Si wafers with resistivity of 6 Ω cm by atomic layer deposition (ALD). The investigated stacks consist of five bi-layer blocks, each block containing 30 cycles of HfO 2 and 10 cycles of Al 2 O 3 sublayers. The schematic picture of this structure is presented in Supplementary Figure S1. HfO 2 deposition was realized with tetrakis (dimethylamido) hafnium (TDMA) precursor, and for the Al 2 O 3 sublayers trimethylaluminum precursor (TMA) was used. In both processes, H 2 O was used as oxidant and the deposition temperature was 135 °C. The stack deposition starts with the Al 2 O 3 process followed by the HfO 2 one. The total thickness of the nanolaminated dielectric structure is 26 nm as evaluated by a Woollam M2000D spectral ellipsometer (J.A. Woollam Co., Lincoln, NE, USA). Part of the samples was rapid thermally annealed (RTA) in oxygen or nitrogen at 800 °C for 1 min. The electrical properties of the stacks were examined on MIS capacitors with Al top (gate) and backside contacts. The square gate electrodes with an area of 10 −4 cm 2 were patterned photolithographically. Two separate sets of capacitors were irradiated using a 60 Co source, one at 10 kGy and the other at 100 kGy (Si). No external voltage was applied to the capacitor's terminals during the irradiation. The charge trapping in the nanolaminates was examined through capacitance-voltage (C-V) measurements in a dark chamber at 1 MHz with an Agilent 4980A LCR meter (Keysight Technologies, Santa Rosa, CA, USA). The leakage current measurements were carried out with a Keithley 236 SMU (Tektronix Inc., Beaverton, OR, USA). The charge trapping characteristics of the stacks were studied by applying to the capacitors negative and positive square voltage pulses V p with a duration of 1 s. After each pulse a consecutive C-V curve was recorded in order to find the shift of the flat-band voltage, V fb (Supplementary Figure S2). The retention characteristics of selected memory capacitors were assessed by applying a charging pulse (12 V, 1 s) to introduce a negative or positive charge and a subsequent monitoring of the evolution of V fb over time. Initial Oxide Charges, C-V Hysteresis and Density of Interface States The initial oxide charge present in the dielectric stacks, Q ox , is estimated from the flat band voltage of C-V curves recorded under small applied voltage sweeps (about −3 ÷ 2 V) at which charge injection in the stacks is negligible. Q ox has been found to depend strongly on the PDA treatment [31]. Q ox of the as-grown layers is positive, ~0.7 × 10 12 cm −2 , and the treatment in O 2 increases the value of Q ox to about 2 × 10 12 cm −2 . PDA in nitrogen, however, results in a change of the sign of Q ox , i.e., for these samples the initial oxide charge is negative, ~−1.8 × 10 12 cm −2 . The radiation response of Q ox depends also on PDA (Figure 1a). The generation of a positive oxide charge Q ox after irradiation is observed for the as-grown and O 2 treated stacks. The radiation-induced change in the initial oxide charge is higher for the as-grown samples.
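A sketch of the flat-band-voltage bookkeeping behind the quoted Q ox values: V fb is read off the measured C-V curve at the flat-band capacitance, and the effective oxide charge per unit area follows from Q_ox = C_ox(φ_ms − V_fb)/q. The work-function difference and effective stack permittivity used below are assumed example numbers, not the parameters of the measured capacitors.

```python
import numpy as np

def flat_band_voltage(v_gate, cap, c_fb):
    """Gate voltage at which the measured C-V curve crosses the flat-band capacitance c_fb."""
    order = np.argsort(cap)
    return np.interp(c_fb, cap[order], v_gate[order])

def oxide_charge(c_ox_per_area, v_fb, phi_ms):
    """Effective oxide charge density (cm^-2) from the flat-band voltage shift."""
    q = 1.602e-19
    return c_ox_per_area * (phi_ms - v_fb) / q

# example: 26 nm stack with an assumed effective k ~ 12 and phi_ms ~ -0.9 V
eps0 = 8.854e-14                        # F/cm
c_ox = 12 * eps0 / 26e-7                # F/cm^2
print(oxide_charge(c_ox, v_fb=-1.4, phi_ms=-0.9))   # ~1.3e12 cm^-2, a positive charge
```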
10 kGy exposure slightly increases Q ox to about 0.9 × 10 12 cm −2 , but for a higher dose Q ox is doubled (~1.8 × 10 12 cm −2 ). The initial oxide charge of O 2 annealed stacks also increases, to 2.5 × 10 12 cm −2 after 10 4 Gy irradiation, but a further increase of the dose to 100 kGy reduces Q ox to values close to those of the non-irradiated capacitors. As seen in Figure 1a, after 100 kGy irradiation the Q ox of the as-grown samples is equal to the Q ox values of the O 2 annealed ones, in contrast to the non-irradiated capacitors, for which a substantially larger Q ox is found after O 2 annealing. The irradiation of stacks treated in N 2 inflicts the appearance of additional negative charges. As with the oxygen treated samples, after the initial increase to about −3 × 10 12 cm −2 for 10 kGy exposure, the higher dose of 100 kGy reduces Q ox to its initial non-irradiated values. Therefore, the higher doses seem to inflict a turn-around point in the sign of the radiation-generated charge in the annealed stacks. The different sign of the radiation-created charge in O 2 and N 2 annealed stacks strongly suggests that the nature of the centers giving rise to the oxide charge in the two cases is also different. A small counterclockwise hysteresis ∆V 0 fb is observed in the initial C-V curves measured in a sweep voltage range in which the charge trapping into the stacks is negligible. ∆V 0 fb is 34, 8, and 250 mV for the as-grown samples, O 2 and N 2 annealed ones, respectively. The hysteresis is usually ascribed to the charge capture at slow states, i.e., traps inside the dielectric located within a tunneling distance from Si. The obtained values for non-irradiated films suggest that PDA in N 2 creates a substantial number of traps in the first Al 2 O 3 sublayer, in the possible interfacial SiO x layer and/or at their interface. Generally, γ-radiation changes only slightly the hysteresis of the as-grown and O 2 annealed stacks (Figure 1b). The induced changes, however, are very small and close to the detection limits, especially in the case of the O 2 annealed samples, so that a definite tendency cannot be traced. The nitrogen treated samples show an almost twofold increase of ∆V 0 fb , to 420 mV, after 10 kGy exposure. However, after the 100 kGy exposure ∆V 0 fb decreases to values similar to those of the control non-irradiated capacitors. Hence, the dependence of ∆V 0 fb on the γ-radiation dose for the N 2 treated stacks shows the same radiation behavior as the oxide charge and the density of fast interface states (as shown below). The effect of gamma radiation on the density of the fast interface states at Si, D it , is evaluated through the Terman method under the flat-band condition (Figure 2). The choice of the Terman technique over the other, more precise methods such as low-high frequency C-V, the conductance method (G-ω), and charge pumping seems justified since the fast interface states at Si do not affect directly the charge storage into the stack. The D it obtained here serves only as an indication of the radiation hardness of the interface between the Al 2 O 3 /HfO 2 stack and Si, whose thorough study is out of the scope of the present work. Moreover, D it is defined by the dielectric in contact with Si, and in real memory cells the charge trapping medium is separated from Si by a dedicated tunnel oxide layer (usually SiO 2 ). D it values of about 2.2 × 10 11 , 7.8 × 10 12 and 3.9 × 10 12 eV −1 cm −2 have been obtained for the non-irradiated as-grown, oxygen annealed and nitrogen annealed stacks, respectively.
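A compact sketch of the Terman evaluation mentioned above: the measured high-frequency C-V curve is compared with an ideal, interface-state-free one, and the stretch-out of the gate voltage versus surface potential in excess of the ideal curve is converted into D it. The ideal curve and the uniform stretch-out in the toy example are assumptions; a real evaluation derives the ideal curve from the doping profile.

```python
import numpy as np

def terman_dit(psi_s, vg_meas, vg_ideal, c_ox):
    """D_it(psi_s) = (C_ox / q) * d(Vg_meas - Vg_ideal)/d(psi_s), in cm^-2 eV^-1.

    psi_s   - surface potential grid (V)
    vg_*    - gate voltage giving the same capacitance / band bending on each curve (V)
    c_ox    - stack capacitance per unit area (F/cm^2)
    """
    q = 1.602e-19
    stretch = np.gradient(np.asarray(vg_meas) - np.asarray(vg_ideal), psi_s)
    return c_ox / q * stretch

# toy example: a uniform 0.2 V of extra stretch-out per volt of band bending
psi = np.linspace(-0.3, 0.3, 13)
dit = terman_dit(psi, 1.2 * psi, 1.0 * psi, c_ox=4.1e-7)
print(dit.mean())   # ~5e11 cm^-2 eV^-1 for this assumed stretch-out
```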
A higher D it value for annealed samples indicates that during PDA some kind of interaction between the stack and Si takes place, and the oxygen ambient seems to intensify this reaction. The gamma radiation treatment increases the density of interface states of the as-grown stacks without a clear dependence on the dose (at 10 kGy the D it value is ~1.3 × 10 12 eV −1 cm −2 , while for 100 kGy D it is slightly lower, ~1.1 × 10 12 eV −1 cm −2 ). D it of the N 2 annealed layers exhibits a similar behavior: the density of the interface states increases after 10 kGy (to ~6.4 × 10 12 eV −1 cm −2 ), but the 100 kGy irradiation leads to its reduction. However, in this case the decrease is more prominent (a twofold reduction) and after the 100 kGy treatment D it is almost equal to the values for non-irradiated capacitors. For the oxygen annealed samples D it monotonically decreases with the dose, and the 100 kGy γ-radiation reduces D it almost twice. The results obtained for the as-grown and oxygen annealed HfO 2 /Al 2 O 3 stacks corroborate the reports [14,20,23] of higher radiation hardness of HfO 2 and HfO 2 /Al 2 O 3 -based MOS structures compared to the SiO 2 -based ones. For these structures the effect of radiation mainly consists of a moderate positive oxide charge generation, and in accordance with [14,20,23,26], depending on the dose of γ radiation, some improvement of the interface properties (reduction of slow interface states as well as fast interface states in the case of O 2 annealed samples) of the high-k stack/Si system can be obtained. At the same time, the data for the N 2 treated samples suggest that the pre-irradiation processing affects the radiation response, most likely as a result of the different defect structure created under PDA in oxidizing and nitrogen containing ambient. Our previous studies [31] revealed that the amount of negative oxide charge in the N 2 treated samples depends also on the Al 2 O 3 content, and the layers with a lower Al 2 O 3 content exhibit a lower negative Q ox or even a positive one for the smallest alumina amount in the stack. In this context, it should be noted that Al 2 O 3 layers commonly demonstrate a negative oxide charge [34,35], unlike the HfO 2 ones whose Q ox is predominantly positive. The short annealing in N 2 [35] is found to increase the negative Q ox value, which is related to a reduction of the density of positively charged Al interstitials, whereas the density of negative O interstitials remains unchanged. Other studies [36] of the effect of a nitridation annealing environment (RTN in NH 3 ) suggest that the higher negative oxide charge of the HfAlO ALD stacks could be associated with the incorporation of N at the interface between the laminated film and Si. In addition, the N atoms have been found predominantly bonded to Al 2 O 3 , but not to HfO 2 . The N 2 annealing ambient, however, is regarded as an inert one compared to the case of NH 3 treatment. Therefore, a more plausible explanation of the observed results is related to possible high-temperature-induced transformations in the stacks. The theoretical analysis of defects in Al-doped HfO 2 films suggests that under O-rich conditions the most probable defects are electronically compensated: a negatively charged Al ion at the Hf site is compensated by a hole in the valence band. Under oxygen-poor conditions the most stable defect is the ionically compensated (2Al_Hf^−)V_O (two negative Al ions compensated by a doubly positively charged neighboring oxygen vacancy).
This defect, however, requires two dopant atoms located in close proximity to each other. Therefore, if the mobility of dopant atoms within the HfO 2 matrix is low and/or the distance between dopant atoms is large, the formation of this defect might be suppressed. In this case, the formation of a mixed compensated Al_Hf^−V_O defect (negative Al at the Hf site, a doubly positively charged oxygen vacancy V O and an electron in the conduction band) is favored. The oxygen-poor ambient will likely increase the number of oxygen vacancies in the stack, as the formation energy of V O is decreased from 7.5 eV in the O-rich conditions to below 2 eV [37]. In fact, some recent investigations [38] note that the incorporation of aluminum in ALD HfO 2 films significantly increases the density of oxygen vacancies. Another important aspect of the ALD films is the inevitable presence of hydrogen and C-N radicals as leftovers from the chemical reactions [39,40]. Their interaction with the annealing gas ambient could lead to the creation of different types of defects depending on the used PDA environment. Memory Windows Next, we consider the effect of γ-radiation on the memory windows of the stacks, formed as a result of the trapping of electrons and/or holes into bulk traps. To get a better notion of the different types of charge accumulation processes (electron trapping, hole trapping and/or electric stress-generated defects) contributing to and affecting the memory window, the evolution of the flat-band voltage after applying voltage pulses with a duration of 1 s and different amplitude to the capacitors is presented in Figure 3. The data are plotted with respect to the initial (before applying V p ) value of the flat-band voltage, V fb0 . Under positive voltage pulses electrons from the substrate are injected into the stack and their consecutive capturing leads to accumulation of a negative charge into the capacitor. Under negative V p holes from Si are injected and trapped into the stack and the emerging charge is positive. (Strictly speaking, for both V p polarities charge carriers with opposite sign with respect to those injected from Si are introduced into the layers from the gate electrode. Therefore, the resulting charge in the dielectric is the sum of trapped carriers supplied by the opposite flows of carriers injected from the gate and the substrate. However, as we will see below, the obtained C-V data follow predominantly the substrate injection scenario.) The resulting memory window is defined as the difference between the positions of curves measured at two pulse polarities, along the voltage axis at the flat-band point. As seen in Figure 3a, electron trapping is hardly observed in the as-grown sample before irradiation at |V p | < 10 V. The C-V curves for both V p polarities are shifted toward negative voltages, implying an accumulation of positive charge in the structures. As a result, the memory windows ∆V are negligible. Only at |V p | above 10 V does some noticeable electron trapping occur. With the increase of |V p | both C-V characteristics (for negative and positive V p ) are progressively shifted to more negative voltages. Such a behavior is most likely due to the prevalence of the accumulated positive charge over the electron trapping. The increase of positive charge with increasing V p under both polarities also implies that some part of it is due to stress-induced positively charged defects, representing irreversible damage.
Thus, a net positive charge accretion is observed even at conditions of substrate electron injection. The results, however, could be interpreted also in terms of a low initial electron trap density which increases slightly as a result of the stress-related effects, i.e., some of the positively charged defects act as electron traps. For the samples with PDA in O 2 before irradiation (Figure 3b) the charge trapping (both positive and negative) is negligible up to |V p | of 7 V. Unlike the as-deposited stacks, the O 2 annealed ones show a steady electron trapping which increases at V p > 7 V. A noticeable positive charge trapping starts at V p about −9 V and for more negative V p it increases continuously in the same way as for the unannealed samples. Therefore, in the |V p | range of 7-9 V, the electron trapping prevails. The data clearly indicate that O 2 annealing creates electron traps which in turn gives rise to a significant memory window (e.g., at V p = ±18 V the memory windows, ∆V, are 9.5 V and 3.2 V for the annealed and as-grown stacks, respectively). The beneficial effect of PDA in oxygen on the memory windows of the ALD Al 2 O 3 /HfO 2 stacks is also confirmed in [11,33] for structures obtained under different deposition conditions and with different compositions. The charge trapping for the nitrogen annealed samples before irradiation is negligible at |V p | below ~5 V. Generally, the positive charge trapping for V p above −5 V follows the behavior of the as-grown ones (Figure 3c). At positive V p > 5 V, the electron capture is more pronounced than for the as-grown stacks, but the positive charge trapping still prevails. The electron trapping increases more significantly and progressively at V p > 10 V, reaching a maximum shift of ~1.5 V at V p = 17 V. However, as evidenced in Figure 3b,c, the effects of O 2 and N 2 annealing are different, i.e., the gas ambient plays a significant role. The enhanced electron trapping observed for V p > 10 V for the N 2 treated samples could also be associated with the creation of new traps as a result of voltage stress. The impact of γ-radiation on the charge trapping phenomena in the HfO 2 /Al 2 O 3 stacks could be summarized as follows: (1) The positive charge trapping is almost unaffected by irradiation for the as-grown and oxygen treated samples. Indeed, the irradiation slightly decreases the positive charge build-up in the as-grown stacks, and for the O 2 annealed ones it is somewhat enhanced at V p > 10-15 V. However, the observed effects are small and it can be assumed that irradiation does not generate any new hole traps and does not change the positive charge build-up behavior for these structures. The case of the N 2 treatment, however, is different. The 10 kGy irradiation slightly increases the positive charge trapping for V p < 8 V, but for higher V p the magnitude of the V fb shift saturates. For the 100 kGy irradiation the flat-band shift under negative V p is almost the same as that for non-irradiated films. Therefore, the behavior of the positive charge build-up at γ exposure seems to correlate with the effect of the irradiation on Q ox for these samples. In other words, the high negative Q ox at 10 kGy results in the reduced hole trapping, which is restored after the 100 kGy irradiation in accordance with the recovery of Q ox to its non-irradiated values. (2) The data clearly indicate that γ-radiation boosts significantly the electron trapping in both the O 2 annealed stacks and the stacks without PDA.
The 10 kGy irradiation of the as-deposited stack results in a noticeable V fb shift due to the electron trapping, which initially increases with voltage pulse magnitude up to ~9 V, and for higher V p the flat-band voltage shift turns around. Most likely, this kind of dependence is a consequence of the interplay between positive charge accumulation and electron trapping at radiation-generated traps, with the first process dominating. Moreover, a part of the positive charge is probably stress generated, as indicated by the results for non-irradiated stacks [31]. Since the generated electron traps increase with the dose, the turn-around effect for 100 kGy is weakly pronounced and begins at higher V p . The resulting memory windows are summarized in Table 1, where ∆V values measured at |V p | = 15 V are given. As is seen, the radiation-induced enhancement of the memory windows is larger for the as-grown films, which is a result of the very weak electron trapping in the non-irradiated stacks. Since the irradiation (especially at higher doses) tends to level out the memory windows of the as-deposited and O 2 annealed stacks, it might be suggested that the traps induced by oxygen annealing and by radiation have the same origin. This is further supported by the radiation response of the N 2 annealed samples, for which γ-radiation shrinks the memory windows due to the reduced electron trapping (and also hole trapping at 10 kGy). The behavior of these samples suggests that the defects developed by PDA in N 2 are different compared to the as-grown and O 2 annealed layers. Leakage Currents and Conduction Mechanisms Leakage currents of non-irradiated stacks are depicted in Figure 4. The annealing procedure does not affect the leakage current values of non-irradiated capacitors in the voltage range of ±5 V. However, for higher applied V, annealing influences the leakage. The lowest leakage at high field is observed for PDA in O 2 . The extent of the reduction compared to the as-grown stacks increases with the increase of the electric field, and at ±10 V it is larger than 1 order of magnitude. A similar improvement of the leakage currents by oxygen annealing of the HfO 2 -based stacks is also observed in [41,42]. The N 2 annealing effect on J is less clear. PDA in nitrogen is often reported as a beneficial step for leakage current improvement (in some cases substantial) of ALD HfO 2 and Al 2 O 3 /HfO 2 nanolaminated dielectrics [43][44][45][46]. Our results show that PDA affects the J-V characteristics differently, depending on the voltage polarity: for negative V, J is practically not changed, while for positive V > +5 V nitrogen processing provides a lower leakage, whose values are close to those observed for PDA in O 2 . Considering the effect of PDA on J, it should be mentioned that the divergent effects of PDA on J found in the literature seem to be closely related to the initial properties of the layers, which depend on the parameters of the implemented ALD process. Hence, the response of J to the PDA could be different for each particular case. Furthermore, we note that the leakage current does not appear to correlate with the initial oxide charges. Despite the different amount and sign of Q ox , all structures have similar J in the low voltage range (up to ±5 V). At higher fields the layers with the highest positive initial charge (annealed in O 2 ) exhibit the lowest J, while the as-grown stacks with positive Q ox demonstrate leakage currents at negative V identical to the N 2 treated stacks, which have a negative initial charge.
The values of J for samples with RTA in O 2 are comparable to J of the nitrogen annealed ones for the positive applied voltages. This behavior might indicate that leakage is governed by bulk-limited conduction mechanisms. (The saturation tendency of J at high positive applied V is related to the limited amount of minority carriers in p-Si, whereas the slight asymmetry of the J branches at negative and positive V reflects the influence of the different barrier heights at the gate (Al/HfO 2 ) and substrate (Al 2 O 3 /Si) interfaces.) Indeed, the leakage current of the stacks is described reasonably well by a combination of Ohmic (J Ohm ) and Poole-Frenkel (J PF ) conduction (inset in Figure 4):

$J = J_{Ohm} + J_{PF}$  (1)

with

$J_{Ohm} = \sigma_{Ohm} E$  (2)

$J_{PF} = \sigma_{PF}\, E\, \exp\!\left(\frac{q}{rkT}\sqrt{\frac{qE}{\pi \varepsilon_0 \varepsilon_r}}\right)$  (3)

where E is the electric field, σ Ohm and σ PF are conductivity constants, q the electron charge, k the Boltzmann constant, T the temperature, ε 0 the vacuum permittivity, ε r the optical dielectric constant of the stack (ε r is the square of the refractive index), and r a parameter (1 ≤ r ≤ 2) describing the presence of additional traps in the dielectric, apart from the PF-emitting donor-like center [47][48][49]. J has been modeled only for negative V, for which the Si substrate is in accumulation, since in this case the whole voltage drop is on the dielectric itself. The conduction at electric field values below about 3 MV/cm shows the Ohmic-like behavior, while at higher fields it could be modeled by the PF expression (inset of Figure 4). The obtained values of r and ε r are well in the self-consistent range for the investigated stacks (r = 1.8, ε r = 3.6 for the as-grown; r = 1.89, ε r = 3.9 for the O 2 annealed stacks; and r = 1.95, ε r = 3.6 for the N 2 treated layers). Although ε r of the nanolaminated HfO 2 /Al 2 O 3 stack is not known, the values obtained from the fit are close to the value (3.3) estimated by the effective media approximation [50] (values of the refractive index of HfO 2 and Al 2 O 3 are 1.88 and 1.62 at 5000 nm, respectively [51,52]). (Please note that the agreement between the high-frequency (optical) dielectric constant ε r obtained from Equation (3) and its value established from refractive index measurements is assumed as the main indication for the operation of the PF mechanism [47][48][49],[53,54].) The increase of ε r of the annealed stacks certainly reflects the densification of the films after PDA. A closer look at Figure 4 also reveals that the J-V curves for the negative applied voltages demonstrate good linearity on the log(J) vs. V plot scale. This J-V dependence is characteristic of Poole hopping conduction, a variant of which is the PF mechanism. The current density in Poole conduction is given by:

$J = J_{PO} \exp\!\left(\frac{qEl/2 - q\varphi_a}{kT}\right)$  (4)

where l is the distance between the adjacent traps, φ a is the ionization barrier, and J PO is a proportionality constant [53]. Therefore, it turns out, as demonstrated in Figure 5, that the current-voltage curves can be represented by either the PF or the Poole mechanism in the high field range E > 3 MV/cm. (Here we note that the slopes (β) found from the linear fit of the data in Figure 5a (PF scale) are slightly different from the slopes which would be obtained using the ε r and r produced by the fit of the J-E curves to Equation (1). That is because of the effect of the linear part in Equation (1) (namely Equation (2)).) In [53] De Salvo et al., by using a two-trap center model, show that for a certain range of distances between traps, experimental curves are equally well fitted by PF and Poole equations.
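A sketch of how the negative-bias J-E branch can be fitted to the Ohmic-plus-PF combination of Equations (1)-(3). Because ε r and r enter the exponent only through the product r√ε r, the fit below extracts a lumped slope parameter and then converts it to ε r for a chosen r, mirroring the self-consistency check against the optical dielectric constant; the synthetic data merely mimic the shape of the measured curves and all starting values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

q, k_b, eps0, T = 1.602e-19, 1.381e-23, 8.854e-12, 300.0   # SI units

def log_j(E, log_s_ohm, log_s_pf, beta):
    """log10 of J = sigma_Ohm*E + sigma_PF*E*exp(beta*sqrt(E)); beta lumps r and eps_r."""
    return np.log10(10**log_s_ohm * E + 10**log_s_pf * E * np.exp(beta * np.sqrt(E)))

def eps_r_from_beta(beta, r):
    """Invert beta = q/(r k T) * sqrt(q/(pi*eps0*eps_r)) for eps_r at a chosen r."""
    return q**3 / (np.pi * eps0 * (beta * r * k_b * T)**2)

# synthetic J-E points (E in V/m) standing in for the measured negative-bias branch
E = np.linspace(0.5e8, 4e8, 40)
beta_true = q / (1.8 * k_b * T) * np.sqrt(q / (np.pi * eps0 * 3.6))
logJ = log_j(E, -14.0, -19.8, beta_true) + 0.02 * np.random.default_rng(2).normal(size=E.size)

popt, _ = curve_fit(log_j, E, logJ, p0=[-14.0, -19.0, 5e-4],
                    bounds=([-16, -22, 1e-4], [-12, -16, 2e-3]))
print("eps_r for r = 1.8:", eps_r_from_beta(popt[2], r=1.8))   # should come back near 3.6
```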
Following the analysis in [53], the distance between traps estimated assuming ε r = 3.3 is as follows: l = 1.05, 1.16, and 0.92 nm for the as-grown, O 2 and N 2 treated structures, respectively. The estimated trap distances suggest that O 2 annealing reduces the density of trap centers taking part in the conduction process. Therefore, one of the reasons for the better charge trapping properties of the oxygen treated stacks may be the reduction of the leakage through these layers, leading to more efficient trapping of the carriers injected into the dielectric. At the same time, the J-V data could also be interpreted in the light of some structural transformation of the existing trap levels making them deeper, generation of new deeper centers, or annealing of the pre-existing ones as a consequence of oxygen annealing.
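The trap-distance estimate itself reduces to a one-line conversion. For a Poole expression of the form of Equation (4), the slope of ln(J) versus E is q·l/(2kT), so l = 2kT·(d ln J/dE)/q. The snippet below applies this to hypothetical slope values chosen only to reproduce distances of the order quoted above; the actual slopes would come from linear fits such as those in Figure 5b.

```python
from scipy.constants import e, k

T = 300.0                      # K (assumed measurement temperature)
kT_over_q = k * T / e          # thermal voltage, ~0.0259 V

# Hypothetical slopes of ln(J) vs E in the Poole regime (E > 3 MV/cm), in m/V
slopes = {"as-grown": 2.0e-8, "O2 PDA": 2.2e-8, "N2 PDA": 1.8e-8}

for sample, s in slopes.items():
    l = 2.0 * kT_over_q * s    # trap-to-trap distance in metres
    print(f"{sample}: l = {l * 1e9:.2f} nm")
```

For slopes around 2 × 10⁻⁸ m/V this gives l ≈ 1 nm, i.e., the order of magnitude reported above for the non-irradiated stacks.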
The effect of gamma irradiation on leakage currents is presented in Figure 6. As shown, the radiation does not lead to an increase of the leakage current, unlike the data reported in [20,21]. The results presented here seem to be more consistent with the findings published in [23], where no current deterioration is found. In fact, the irradiation with 10 kGy leads to a lower J for all stacks. The induced changes are most prominent for the N 2 annealed samples (Figure 6c): for both polarities the current after 10 kGy irradiation is substantially reduced. For the as-grown stacks, J is reduced mainly in the low-field, Ohmic-like region, while at higher E the values of J are not affected; the changes are more pronounced for −V. In the case of oxygen treated stacks, the J-E curves after 10 kGy are shifted toward lower J values in the whole range of applied V except for the high positive biases. As with the as-grown layers, the current decrease is clearer for negative applied V. For both types of PDA, the difference between the pristine and 10 kGy irradiated curves increases with the magnitude of the applied negative V. The impact of the 100 kGy dose is more complicated. For the as-grown stacks, the values of J at negative biases are close to those for non-irradiated structures, except for the voltage interval (−11 ÷ −6 V), in which J after 100 kGy is lower. However, at positive applied voltages, the 100 kGy treatment lowers J significantly for V > 1 V by shifting the curve to higher V. Oxygen annealed stacks exhibit a further lowering of J after the 100 kGy irradiation at high negative V, but for positive applied biases the J-V curves of the non-irradiated and the 100 kGy treated samples are almost the same. For both types of stacks, the 100 kGy irradiation causes some noise to appear at high applied V (more noticeable under negative V). The 100 kGy irradiation returns the J-V curves of the N 2 annealed structures to the initial state defined by the pristine case. It should be noted that a similar behavior is observed also for the charge trapping characteristics (Figure 3c). The irradiation does not seem to change the dominant conduction mechanism; the J-E characteristics are well described by a combination of Ohmic and PF conduction (solid lines in the insets of Figure 6). For the as-grown samples, however, the 100 kGy curve beyond the Ohmic part cannot be fitted because of the current fluctuations. The obtained r and ε r are as follows: r = 1.5 and ε r = 3.6 for the as-grown stacks after the 10 kGy dose; r = 2, ε r = 4 and r = 2, ε r = 10 for the layers with PDA in O 2 after 10 and 100 kGy irradiation, respectively; and r = 2, ε r = 10 for the nitrogen annealed structures after the 10 kGy dose. By applying the analysis of De Salvo et al. [53], the distances between adjacent traps after irradiation have been obtained: l = 1.2 nm for the as-grown layers after the 10 kGy dose; l = 0.83 and 0.5 nm for the samples annealed in oxygen after 10 and 100 kGy, respectively; and 0.5 nm for the 10 kGy irradiated nitrogen annealed stacks. Therefore, the results indicate that radiation increases the trap density in the stacks, resulting in a smaller distance between adjacent traps. It should be noted, however, that the determined l depends on the value of the optical dielectric constant of the stacks, which is not known and is expected to change after PDA and possibly after irradiation. As seen in Figure 6, radiation reduces the slope of the high-voltage part of the J-V curves (negative applied voltages) of the annealed samples. According to the PF theory, this corresponds to an increase of ε r or r, or both, which is indeed obtained by the fitting. The variation of r is suggested to result from changes in the ratio of PF centers, compensating traps and the density of free carriers. r = 2 corresponds to the presence of either a single donor center or a combination of a deep (below the Fermi level) donor and a shallow trap [47]. In the latter case, the donor energy is equal to the sum of the real ionization energies of the donor (PF) center and the shallow trap, which leads to a smaller current.
The presence of a noticeable amount of deep compensating acceptor-like centers results in r = 1 [48]. The radiation with both doses resulted in an increase of r from 1.8 to its upper limit of 2, which might indeed be caused by a certain change in the defect structure of the stacks. However, drawing more specific conclusions is hindered by the uncertainties related to the actual value of ε r and by the fact that, in the PF theory involving two defect centers, one and the same slope can be attributed to different defect combinations [47,48]. At the same time, the increase of ε r might be interpreted as an irradiation-induced densification of the stacks. It is also worth noting that in some recent studies [54] involving simulations of the charge transfer through thin high-k films, the identification of PF conduction by its "fingerprint" linearity of ln(J/E) vs. E 1/2 (and its slope) is contested. Finally, we would like to mention that the observed effect of radiation on the leakage currents seems somewhat counter-intuitive, since the lowering of the leakage current is accompanied by an increased trap density. (A higher density of traps was obtained from the memory window measurements as well as from the interpretation of the leakage current considering both the PF and Poole mechanisms [53].) We should note that the same is valid for the non-irradiated nanolaminates as well: the lowest J is found for the oxygen treated samples, for which the electron trapping is the strongest. Apart from the interpretation that different types of centers are involved in the conduction and the charge storage, another possible explanation of this observation is the modification of the electric field in the stacks by the built-up charge. Therefore, a more elaborate analysis of the J-V characteristics, based on modeling of the electric field redistribution during the current measurement procedure, is needed. Retention Characteristics As seen in Section 3.2, the as-grown structures before irradiation as well as the N 2 annealed Al 2 O 3 /HfO 2 stacks before and after irradiation provide rather small memory windows due to the weak electron trapping, which makes them unattractive from the CTM application point of view. Thus, we will focus mainly on the radiation effect on the retention of the O 2 treated layers (Figure 7a). The retention of as-grown stacks after irradiation is also considered (Figure 7b) in order to answer the question whether the process of radiation-induced enhancement of the trap density could be used in charge storage devices.
The retention is defined as the ratio of ∆V fb (t) (the difference between V fb at time t and V fb of the uncharged capacitor) to its initial value ∆V fb initial immediately after the charging operation. The data clearly indicate that for oxygen treated stacks γ-radiation does not deteriorate the charge retention (Figure 7a). The charge decay with time t can be well fitted with a ln 2 (t) dependence. Such a relation between the trapped charge and the retention time is most likely due to the domination of Poole-Frenkel discharge currents [31,55]. However, a more detailed analysis, which takes into account the electric field redistribution, possible secondary capture events of emitted electrons/holes, and the charge transfer from/into the electrodes, has to be applied to describe the retention characteristics and to estimate the remnant charge at the 10-year limit. Additionally, the positive charge seems to decline faster than the negative one (Figure 7a). The experimental data show that at 10 6 s the negative charge remaining in the capacitors is ~80% of its initial value, while the positive charge drops to ~60%. Using the ln 2 (t) dependence for extrapolation to 10 years, it is obtained that 44% of the initial negative charge will be lost, while for the positive trapped charge the loss is 70%, i.e., the memory window will be reduced by 57%.
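The 10-year numbers quoted above can be reproduced by a simple extrapolation exercise. The sketch below fits hypothetical normalized retention data (loosely mimicking the negative-charge curve in Figure 7a, not the measured values themselves) with a quadratic polynomial in ln(t), one convenient way to encode the ln 2 (t) decay law, and evaluates the fit at t = 10 years.

```python
import numpy as np

# Hypothetical normalized retention data for the trapped negative charge:
# time after charging in seconds, and DeltaVfb(t) / DeltaVfb(initial)
t = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6])
R = np.array([0.97, 0.95, 0.92, 0.88, 0.84, 0.80])

# Quadratic polynomial in ln(t): R(t) = c2*ln(t)^2 + c1*ln(t) + c0
coeffs = np.polyfit(np.log(t), R, deg=2)

ten_years = 10 * 365.25 * 24 * 3600          # ~3.16e8 s
R_10y = np.polyval(coeffs, np.log(ten_years))
print(f"extrapolated remaining charge after 10 years: {100 * R_10y:.0f}%")
```

The same procedure applied separately to the positive- and negative-charge curves gives the two loss figures from which the 57% window reduction is obtained.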
The retention for the as-grown irradiated samples turns out to be independent of the γ-radiation dose, especially for trapped electrons. Moreover, the 100 kGy dose seems to improve the retention of the accumulated positive charge. Unlike the case of PDA in O 2 , the discharge in the irradiated as-grown capacitors obeys an exponential dependence on time (linear dependence in a semi-logarithmic plot). This kind of dependence is usually observed when the charge loss is realized via tunneling front processes [56]. More elaborate models, based on thermal detrapping combined with a reduction of the electron escape probability in the volume near the positively ionized center left after the first detrapping act [57], can also explain such a dependence. In contrast to the O 2 annealed stacks, for as-grown samples the electron discharge rate is faster than the reduction of the positive charge: 12% per decade for the electron decay vs. ~10% for the trapped holes. For the 10 kGy exposure case, a strong hole detrapping (a drop of ~25% from the initial charge) is observed within the first 10 s; the 100 kGy irradiation seems to improve to some extent the retention of the accumulated positive charge. It is evident from Figure 7b that the charge loss rate in the case of as-grown films is much higher than for the O 2 annealed ones. Based on this higher detrapping rate and the difference in the retention characteristics of the as-grown stacks and those with PDA in O 2 , we conclude that the radiation-induced traps in as-grown stacks are of a different type than the capture centers produced by the oxygen annealing and are not effective for reliable storage. Conclusions The results clearly demonstrate that ALD HfO 2 /Al 2 O 3 nanolaminated stacks have good radiation tolerance to γ-rays up to very high doses of 100 kGy. Although γ-irradiation affects the oxide charges and the interfacial properties of the stacks, the inflicted changes are not severe, and, depending on the dose, even a radiation-induced improvement of the density of fast and slow interface states can be obtained. The specific radiation response of the HfO 2 /Al 2 O 3 stack also depends on the post-deposition treatment, reflecting the differences in the layer structure after annealing in different ambients. The γ-radiation significantly enhances the electron trapping in the as-grown and oxygen annealed HfO 2 /Al 2 O 3 nanolaminates due to the creation of new electron traps. As a result, a significant widening of the memory windows is obtained. The nitrogen annealing, however, seems to suppress the generation of radiation-induced electron traps. Generally, the positive charge trapping in all investigated stacks is not affected by the irradiation. No deterioration of the leakage currents or retention characteristics has been observed after irradiation. The obtained results demonstrate that the oxygen annealed HfO 2 /Al 2 O 3 stacks can be successfully used in CTM devices working in a radiation-intensive environment. Moreover, for these stacks γ-treatment with suitable doses can be applied to enhance the charge trapping characteristics relevant to charge-trapping-based non-volatile memory devices. In the case of the as-grown films, however, the retention times associated with the radiation-generated electron traps are not adequate for application in memory devices, suggesting that the radiation-induced traps in the oxygen annealed and in the as-deposited stacks have a different nature.
Supplementary Materials: The following are available online at https://www.mdpi.com/1996-1944/14/4/849/s1, Figure S1: A schematic presentation of a test wafer with MIS capacitors, blue layers: HfO 2 , yellow: Al 2 O 3 ; Figure S2: An illustration of the memory window definition. The memory window ∆V is defined as the difference in the position of the measured C-V curves at the flat-band capacitance C fb after applying consecutively a positive voltage pulse (+V p ) and a negative voltage pulse (−V p ). The inset shows the measurement procedure. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
10,823.4
2021-02-01T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
High-resolution photoelectron imaging and resonant photoelectron spectroscopy via noncovalently bound excited states of cryogenically cooled anions Noncovalently bound excited states of anions have led to the development of resonant photoelectron spectroscopy with rich vibrational and dynamical information. Introduction When a neutral molecule possesses a large dipole moment (m > $2.5 D), it can bind an excess electron because of the long-range charge-dipole interaction with a binding energy on the order of a few to few hundreds meV. [1][2][3] Valence-bound anions with polar neutral cores can support an excited dipole-bound state (DBS) with a diffuse orbital, analogous to Rydberg states in neutral molecules. Dipole-bound anions constitute a class of manybody systems to study electron-molecule interactions, such as vibronic coupling 4 and low-energy electron rescattering. 5 DBSs have been proposed as the "doorway" to the formation of stable valence-bound anions, 6-8 especially for those formed in the DNA damage process by low-energy electron attachment 9 and those in the interstellar medium under astronomical environments. 10 Fermi and Teller rst predicted a minimum dipole moment of 1.625 D for a nite dipole to bind an electron when studying the capture of negative mesotrons in 1947. 11 Subsequently, many theoretical groups obtained a similar value of minimum dipole moment for nite dipoles to bind an electron, which was discussed by Turner in an interesting historical perspective. 12 Further theoretical calculations showed that the critical dipole moment for electron-binding could be up to 2.0 D by including molecular effects, such as molecular rotation, moment of inertia, and dipole length. [13][14][15] A more practical critical dipole moment of $2.5 D was suggested empirically from experimental observations. 1, 16 More recently, theoretical attention has been focused on the electron binding energies in dipole-bound anions, the nature of the electron-molecule interactions in DBSs, and the transition from DBSs to valence-bound anions. [17][18][19][20][21][22][23][24][25] Direct evidence of DBSs came from photodetachment experiments of the enolate anion, which revealed sharp peaks in the photodetachment spectra attributed to the existence of dipole-supported excited states. 26,27 Subsequently, highresolution photodetachment spectroscopy (PDS) for a series of anions was performed to investigate rotational autodetachment via excited DBSs. [28][29][30][31] In addition to studies of dipole-bound excited states of valence-bound anions, there have been major experimental efforts for ground-state dipole-bound anions for neutral molecules that cannot form stable valence-bound anions. A variety of dipole-bound anions were successfully produced by Rydberg electron transfer 1,7,16,[32][33][34][35][36][37] to dipolar molecules or clusters, which did not form valence-bound anions. In addition, the dynamics of DBSs of anionic clusters and complexes have been studied by time-resolved photoelectron spectroscopy (PES). 9,[38][39][40] The Wang group rst reported high-resolution rPES via vibrational autodetachment from dipole-bound excited states of cryogenically cooled C 6 H 5 O À . 41 The DBS of C 6 H 5 O À was found to be 97 cm À1 below the detachment threshold. Mode-specic autodetachment from eight vibrational levels of the DBS was observed, yielding highly non-Franck-Condon resonant photoelectron (PE) spectra, due to the Dv ¼ À1 vibrational propensity rule. 
42,43 Subsequently, more deprotonated organic molecular anions ( Fig. 1) were found to support excited DBSs [44][45][46][47][48][49][50][51][52][53] or quadrupole-bound states (QBSs) 54 below the anion photodetachment threshold. As shown in Fig. 1, DBS binding energies for various anions were measured, ranging from 25 cm À1 to 659 cm À1 depending on the dipole moments of the neutral cores. The small binding energies conrm the weakly bound nature of the DBSs, which have been probed by high-resolution PEI using a third-generation electrospray ionization (ESI)-PES apparatus equipped with a cryogenically cooled Paul trap. 55 rPES via vibrational autodetachment has been shown to be a powerful technique to resolve rich vibrational features, especially for low-frequency and Franck-Condon (FC) inactive vibrational modes, as well as conformation-selective and tautomer-specic spectroscopic information. Additionally, a DBS of the cluster anion C 2 P À was observed, revealing that the weakly dipole-bound electron is not spin-coupled to the core electrons of C 2 P. 56 In the meantime, DBS resonances of a number of diatomic anions and the associated vibrational autodetachment have also been reported. [57][58][59][60] In this perspective, we rst discuss the experimental methods in Section 2. We then present the DBSs of C 6 H 5 O À and C 6 H 5 S À in Section 3, illustrating some basic features of the DBSs, such as the small binding energies of the DBSs, structural similarities between an anion in the DBS and its corresponding neutral, and vibrational autodetachment following the Dv ¼ À1 propensity rule. Section 4 presents several applications of rPES in resolving vibrational information by resonant enhancement, from the vibrational origin of the CH 3 COO radical to the lowfrequency and FC-inactive vibrational features of the deprotonated uracil radical. Intramolecular inelastic rescattering, which lights up low-frequency FC-inactive vibrational modes, will also be discussed. In Section 5, we present isomer-specic rPES via DBSs of two conformers of m-HO(C 6 H 4 )O À and two tautomers of deprotonated cytosine anions. The rst observation of a quadrupole-bound excited state of cryogenically cooled NC(C 6 H 4 )O À anions will be described in Section 6. Finally, in Section 7, we give a summary and provide some perspectives for the study of noncovalent excited states and rPES of cryogenically cooled anions. Experimental methods This section describes the experimental techniques that we have developed to study excited DBSs of anions. The principle of rPES via vibrational autodetachment from DBSs will be discussed, illustrating the differences of rPES from conventional PES. Photodetachment spectroscopy used to search for DBS resonances of anions will be discussed. We will briey present our current third-generation ESI-PES apparatus, 55 equipped with a cryogenic Paul trap and high-resolution PEI system, which is critical for the realization of rPES and PDS of cold anions. Resonant PES via vibrational autodetachment and PDS Conventional anion PES is done at a xed laser wavelength, as schematically shown in Fig. 2a. A beam of anions (M À ) is detached by a laser beam. When the laser photon energy (hv) exceeds the binding energy of the electron in the anion or the electron affinity (EA) of the corresponding neutral, photoelectrons (e À ) can be ejected with various kinetic energies (KEs) depending on the resulting nal neutral states (M). 
Conventional PES is governed by the FC principle, only allowing vibrational modes with signicant FC factors to be observed, though anomalous PES intensities can be observed in slowelectron velocity-map imaging in certain detachment photon energies 61,62 or due to vibronic coupling 4,63 and excitations to non-valence states. 64 However, if an excited DBS exists, rPES is possible by tuning the laser wavelength to the DBS vibrational resonances of the anion, as shown in Fig. 2b The Dv ¼ À1 propensity rule, which is also related to the fact that the potential energy curve of the DBS and that of the neutral is almost identical (i.e., the DBS electron has little effect on the structure of the neutral core), suggests that only one quantum of vibrational energy is allowed to transfer to the DBS electron (see Fig. 2b). The corresponding neutral nal vibrational state in the resonant photoelectron spectrum will display an enhanced intensity in comparison to the vibrational peak in the non-resonant spectrum, due to the large cross section of the resonant excitation process. Hence, rPES is highly non-Franck-Condon. 55 Because the diffuse dipole-bound electron has little effect on the structure of the neutral core, the geometries of the anion in the DBS and the corresponding neutral are identical, implying that the vibrational frequencies of the DBS are the same as those of the neutral. Therefore, the vibrational frequencies of the corresponding neutral molecules can be obtained by probing the DBS vibrational levels or vice versa. It should be pointed that the Dv ¼ À1 propensity rule is derived under the harmonic approximation and can be violated if there are strong anharmonic effects. 42 DBS vibrational resonances can be searched using photodetachment spectroscopy by scanning a tunable laser across the detachment threshold of an anion while monitoring the total photoelectron yield. When the laser wavelength is in resonance with a DBS vibrational level, the photoelectron yield is enhanced due to autodetachment for above-threshold levels or resonant two-photon detachment for below-threshold vibrational levels (Fig. 2b). It is interesting to note the differences of DBS vibrational autodetachment from normal vibrational autodetachment involving anions with very low electron binding energies, 65 rst observed for NH À . 66 The vibrational energy in one quantum of NH À is higher than its electron binding energy; hence vibrational excitation to the v ¼ 1 vibrational level of NH À can induce electron detachment, i.e. vibrational autodetachment. In such a normal vibrational autodetachment, there are usually large FC activities due to the large geometry changes between the anionic initial states and the nal neutral states, for which theoretical models have been developed. 43 The third-generation ESI-PES apparatus The rPES and PDS experiments were made possible with our third-generation ESI-PES apparatus, 55 as schematically presented in Fig. 3. It mainly consists of four parts: (1) an ESI source similar to that used in the rst ESI-PES apparatus, 67 (2) a cryogenic Paul trap similar to that developed for the secondgeneration ESI-PES apparatus, 68 (3) a TOF mass spectrometer, and (4) a high-resolution PEI analyzer. 69 Details of the third-generation ESI-PES apparatus and the improvements relative to the rstand second-generation apparatuses have been described previously. 
55 Briey, anions are produced usually by electrospray ionization of $1 mM sample solutions in a mixed solvent of either MeOH/H 2 O or CH 3 CN/H 2 O. Two radio-frequency quadrupole and one octopole ion guides are used to direct anions from the ESI source into a cryogenically cooled Paul trap, which is attached to a helium refrigerator operated at 4.5 K. The anions are cooled via collisions with a 1 mTorr He/H 2 (4/1 in volume) buffer gas, which is shown empirically to exhibit optimal thermal cooling effects. 68 Aer being accumulated for 0.1 s and thermally cooled, anions are pulsed out at a 10 Hz repetition rate into the extraction zone of a TOF mass spectrometer. Anions of interest are selected by a mass gate and photodetached in the interaction zone of the PEI lens using a Nd:YAG laser or a tunable dye laser. Photoelectrons are focused by a set of imaging lenses and projected onto a pair of 75 mm diameter micro-channel plates coupled to a phosphor screen and are captured by a charge-coupled device camera. The electron KE resolution is usually 3.8 cm À1 for electrons with 55 cm À1 energy and about 1.5% (DKE/KE) for kinetic energies above 1 eV. The narrowest line width achieved was 1.2 cm À1 for 5.2 cm À1 electrons. 69 The third-generation ESI-PES apparatus has allowed the study of weakly bound non-covalent excited states of anions, including both dipole-bound 5,41,44-53 and quadrupole-bound excited states, 54 and the development of rPES and PDS for cold anions. In a typical investigation, we rst measure nonresonant PE spectra to obtain the detachment threshold of an anion. Then, PDS is used to search for DBS resonances by monitoring the total electron yield as a function of the detachment laser wavelength across the detachment threshold at a step size of 0.1 nm or 0.03 nm. Subsequently, rPES is performed by parking the laser wavelengths at the identied DBS resonances. The enhanced vibrational peaks in rPES can be used to infer the vibrational resonances of the DBS, oen assisted by computed vibrational frequencies. The cryogenically cooled Paul trap Due to the small binding energies of the DBS electron, it is critical to cool down the anions to low temperatures to allow high-resolution PDS and rPES and facilitate spectral assignments of complex anions by eliminating vibrational hot bands. In 2005, the Wang group developed the rst version of a cryogenically cooled Paul trap 68 and reported the rst PES experiment for cold anions from an ESI source. 70 Different from the cryogenic 22-pole trap, 71 the cryogenic Paul trap exhibits better 3D connement of ions, making it more suitable for the subsequent TOF mass selection necessary for the PES and PDS experiments. The current conguration of the cryogenic Paul trap (see inset of Fig. 3) at Brown University features a pulsed buffer gas and a more powerful cryostat. 55 When the cryostat is operated at 4.5 K, the ion temperature achieved has been estimated to be 30-35 K from simulations of rotational proles in PDS of several anions. 43,44,54 With the complete elimination of vibrational hot bands in the PE spectra of cold C 60 À , the most accurate EA of C 60 was measured to be 2.6835(6) eV, as well as the resolution of sixteen fundamental vibrational frequencies for the C 60 molecule. 
72 The cryogenic Paul trap has also been adapted by several groups to study cold ions and ionic clusters by vibrational spectroscopy, 73-75 UV photofragmentation, 76-79 UV-UV holeburning spectroscopy [80][81][82] and anion slow electron velocity map imaging spectroscopy. 83 3 Basic features of dipole-bound excited states Recently, a more complete photodetachment spectrum was obtained for C 6 H 5 O À , revealing a total of eighteen vibrational resonances across the detachment threshold at 18 173 cm À1 (Fig. 4a, the red solid curve). 41,51 The weak peak 0, below the detachment threshold by 97 cm À1 , represents the ground vibrational level of the DBS of C 6 H 5 O À , which is due to resonant two-photon detachment. Above the threshold, the gradually increasing baseline represents the non-resonant detachment signals. The seventeen peaks (1-17) correspond to excited vibrational levels of the DBS of C 6 H 5 O À , i.e., vibrational Feshbach resonances. 3.1.2 The structural similarity between an anion in the DBS and the corresponding neutral. In Fig. 4a, the non-resonant PE spectrum of C 6 H 5 O À at 480.60 nm (black dashed curve) obtained from the PE image in Fig. 4c is overlaid with the photodetachment spectrum (red solid curve). The non-resonant PE spectrum shows the vibrational progression of the most FC-active stretching mode n 11 up to the h quanta, 84,85 represented by peaks A to E. By shiing the PE spectrum by 97 cm À1 to line up peak 0 0 0 in the PE spectrum with peak 0 in the photodetachment spectrum, we see that the positions and relative intensities of the vibrational progression of mode n 11 in the PE spectrum and those in the photodetachment spectrum (peaks 1, 7, 11, 15 and 17) are perfectly matched. This comparison vividly demonstrates the structural similarity between the molecular core in the DBS of C 6 H 5 O À and the neutral C 6 H 5 O radical. Since the peak width in the photodetachment spectrum is mainly limited by rotational broadening, the measured frequencies are in general more accurate than those obtained from the PE spectrum, where the spectral resolution depends on the photoelectron kinetic energies. In addition, much richer vibrational features are revealed in the photodetachment spectrum due to the resonant enhancement via the DBS. Hence, in comparison with conventional nonresonant PES, rPES in combination with PDS is more powerful to resolve vibrational information for dipolar neutral radicals by probing the DBS resonances. 3.1.3 The s-type orbital of the DBS. By tuning the laser wavelength to the below-threshold peak 0 in Fig. 4a, we obtained the resonant two-photon PE image displaying a p-wave angular distribution (the outermost ring in Fig. 4b), which is due to the detachment from the s-type DBS orbital of C 6 H 5 O À , as shown in Fig. 4d. In contrast, the non-resonant PE image at 480.60 nm exhibits an s + d perpendicular angular distribution (Fig. 4c), as a result of one-photon detachment from the p-type HOMO orbital of C 6 H 5 O À (Fig. 4e) Two detachment channels contribute to the resonant PE spectra: the non-resonant detachment process represented by the continuous baseline in the photodetachment spectrum and the resonantly enhanced vibrational autodetachment via the DBS indicated by the sharp peak in the photodetachment spectrum in Fig. 4a. In comparison to the non-resonant PE spectrum at 480.60 nm in Fig. 4a, the resonant PE spectra are highly non-FC with one or more vibrational peaks enhanced due to the mode selectivity and the Dv ¼ À1 propensity rule. 
The vibrational DBS resonances consist of single-mode levels ðn x 0n Þ, combinational levels ðn x 0m n y 0n .Þ or nearly degenerate overlapping vibrational levels. Note that a prime is used to designate DBS vibrational modes to distinguish from the corresponding neutral modes. For autodetachment from vibrational levels of a single mode ðn 0 x Þ, the nth vibrational level of this mode ðn x 0n Þ in the DBS can autodetach to the (nÀ1)th level of the same mode in the neutral ðn x nÀ1 Þ, i.e. one quantum of the vibrational energy in mode n 0 x is transferred to the dipole-bound electron during autodetachment. The resulting nal neutral peak in the PE spectrum corresponding to the n x nÀ1 level will be highly enhanced. For instance, the resonant PE spectra in Fig. 5a, b, e, g and h correspond to excitations to DBS vibrational levels involving mode n 0 x for n ¼ 1 to 5, respectively. Vibrational autodetachment processes from these DBS levels result in signicant enhancement of peaks 0 0 0 , A (11 1 ), B (11 2 ), C (11 3 ) and D (11 4 ), respectively, in the resonant PE spectra, following the Dv ¼ À1 propensity rule. In these autodetachment processes, one vibrational quantum of mode n 0 11 (519 cm À1 ) is transferred to the DBS electron (BE ¼ 97 cm À1 ), yielding an autodetached electron with a KE of 422 cm À1 in all ve cases. In addition, peaks A (11 1 ) and B (11 2 ) are slightly enhanced in Fig. 5g and h, respectively, following a Dv ¼ À3 autodetachment process. This violation of the Dv ¼ À1 propensity rule indicates anharmonicity at higher vibrational levels. 42 The autodetachment from a combinational vibrational level ðn x 0m n y 0n .Þ of the DBS is more complicated. When all the vibrational frequencies of the modes involved are larger than the binding energy of the DBS, both neutral nal levels, n x mÀ1 n y n . and n x m n y nÀ1 ., are expected to be enhanced. Fig. 5c displays such a case, where both peaks A (11 1 ) and d (18 1 ) are highly enhanced because of autodetachment from the combinational DBS level 11 01 18 01 following the Dv ¼ À1 propensity rule. However, excitation to the DBS combinational level 9 01 11 01 in Fig. 5d only results in strong enhancement of peak f (9 1 ), which means that the mode n 0 11 is more strongly coupled with the dipole-bound electron, indicating mode selectivity in vibronic coupling. Even more complicated cases are those involving autodetachment from overlapping vibrational levels of the DBS, as shown in Fig. 5f, which corresponds to resonant excitation to two nearly degenerate vibrational levels, 9 01 11 02 and 10 01 11 02 20 01 . The enhancement of the two peaks A (11 1 ) and k (9 1 11 1 ) is due to autodetachment from the DBS level 9 01 11 02 , while that of peak h (11 2 20 1 ) is due to autodetachment from the 10 01 11 02 20 01 DBS level. Both mode-selectivity and anharmonic effects are observed. All the discussed autodetachment processes from the DBS vibrational levels to neutral levels are schematically illustrated in Fig. 5i. Observation of a DBS in C 6 H 5 S À The thiophenoxide anion (C 6 H 5 S À ) is another relatively simple example that can be used to illuminate the basic features of DBSs and rPES, 51 as shown in Fig. 6. With a dipole moment of 3.18 D for the thiophenoxy radical (C 6 H 5 S), an excited DBS was observed in the photodetachment spectrum of C 6 H 5 S À (Fig. 6a). The ground vibrational level of the DBS, labeled peak 0 in Fig. 6a, is 39 cm À1 below the detachment threshold of C 6 H 5 S À at 18 978 cm À1 . 
Similar to the PE spectra of C 6 H 5 O À , the nonresonant PE spectra of C 6 H 5 S À were also dominated by the n 11 vibrational progression. 85 By aligning peak 0 in the photodetachment spectrum and peak 0 0 0 in the non-resonant PE spectrum at 492.10 nm, a perfect agreement is observed for the relative peak positions and intensities of the most FC-active n 11 vibrational progression, again suggesting little inuence of the DBS electron on the neutral C 6 H 5 S core in the DBS. Eleven above-threshold vibrational resonances were observed. Selected high-resolution resonant PE spectra are presented in Fig. 6b-g, which were collected at laser wavelengths corresponding to the selected DBS resonances in Fig. 6a. The highly enhanced peaks a (20 1 ) in Fig. 6b, peak 0 0 0 in Fig. 6c, peak A (11 1 ) in Fig. 6e and peak B (11 2 ) in Fig. 6g are due to excitations to DBS vibrational levels 20 02 , 11 01 , 11 02 and 11 03 , obeying the Dv ¼ À1 propensity rule for autodetachment. In Fig. 6f, the enhancement of peak e (10 1 ) is due to the mode-specic autodetachment from the combinational level 10 01 11 01 : strong vibronic coupling is only observed for mode n 0 11 , similar to the case of C 6 H 5 O À (Fig. 5). The resonant PE spectrum in Fig. 6d, corresponding to excitation to the combinational DBS level 11 01 20 02 , reveals enhancement of three nal vibrational states, labeled b (20 2 ), c (11 1 20 1 ) and A (11 1 ). The autodetachment to peaks b and c follows the Dv Fig. 6 (a) Comparison of the photodetachment spectrum (red solid curve) and the non-resonant PE spectrum at 492.10 nm (black dashed curve) of C 6 H 5 S À . The PE spectrum is red-shifted by 39 cm À1 to line up peak 0 0 0 (the ground vibrational level of neutral C 6 H 5 S) with peak 0 (the ground vibrational level of the DBS of C 6 H 5 S À ). The vibrational progression of the FC-active mode n 11 matches well with each other, suggesting the weakly bound electron in the DBS of C 6 H 5 S À has little effect on the neutral core C 6 H 5 S. (b-g) High-resolution resonant PE spectra of C 6 H 5 S À at six different wavelengths. The enhanced peak via vibrational autodetachment from the DBS is labeled in bold face. The assigned vibrational levels of the DBS are given. Adapted from ref. 51 with permission from AIP Publishing. ¼ À1 propensity rule, while that to peak A involves Dv ¼ À2 of the lowest frequency bending mode n 0 20 . 51 Rich vibrational information from PDS and rPES The structural similarities between dipole-bound anions and the corresponding neutrals are clearly revealed from the similarities of the vibrational structures of the DBS and the neutrals for the cases of C 6 H 5 O À and C 6 H 5 S À , as shown in Fig. 4a and 6a, respectively. These observations conrm spectroscopically that the weakly bound electron in the DBS has little inuence on the structure of the neutral core. This observation means that the vibrational frequencies of the neutrals are the same as those in the DBS. Photodetachment spectra oen show much richer vibrational features with higher spectral resolution. Resonant PE spectra can "light up" FC-inactive vibrational modes or vibrational transitions with very small FC factors. Hence, the combination of PDS and rPES of cold anions can be a powerful approach to obtain vibrational information for dipolar neutral radicals, inaccessible in other spectroscopic techniques. 
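As a small worked example of the energy bookkeeping behind the Δv = −1 channels discussed above, the kinetic energy of the autodetached electron is simply the vibrational quantum surrendered by the active DBS mode minus the DBS binding energy. The snippet below uses the C6H5O− values quoted in Section 3 (ν'11 = 519 cm−1, DBS binding energy 97 cm−1); the cm−1-to-eV factor is the usual conversion constant, and the helper function is purely illustrative.

```python
# Kinetic energy of the autodetached electron for a Delta v = -1 channel:
# KE = (one vibrational quantum of the active DBS mode) - (DBS binding energy)

CM1_TO_EV = 1.0 / 8065.54           # 1 cm^-1 expressed in eV

def autodetachment_ke(mode_cm1, dbs_be_cm1):
    """Return the outgoing-electron kinetic energy in cm^-1,
    or None if the channel is closed (quantum smaller than the binding energy)."""
    ke = mode_cm1 - dbs_be_cm1
    return ke if ke > 0 else None

# C6H5O-: nu'11 = 519 cm^-1, DBS bound by 97 cm^-1  ->  KE = 422 cm^-1
ke = autodetachment_ke(519, 97)
print(f"C6H5O- nu'11 channel: KE = {ke} cm^-1 = {ke * CM1_TO_EV * 1000:.1f} meV")
```

The same subtraction applies to every Δv = −1 channel, which is why all five resonances built on mode ν'11 of C6H5O− release electrons with the same 422 cm−1 kinetic energy.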
Determining accurate EAs via resonant enhancement of the 0-0 transition In anion PES, the 0-0 transition denes the EA of the corresponding neutral species. However, for large geometry changes between the anion and neutral, the FC factor for the 0-0 transition may be extremely weak, making it difficult to be observed and identied in conventional PES. According to the Dv ¼ À1 propensity rule, autodetachment from fundamental DBS vibrational levels can result in considerable resonant enhancement of the 0-0 detachment transition. This resonant enhancement can be very valuable in the assignment of the 0-0 transition and in determining the EA of neutrals with large geometry changes in anion PES. For example, the photodetachment from CH 3 COO À results in a large reduction of the :O-C-O angle by $20 in the neutral CH 3 COO radical, 86 which results in a very weak FC factor for the 0-0 transition. If the anions are vibrationally hot, the weak peak 0 0 0 would be buried in the vibrational hot bands, making it challenging to accurately determine the EA of CH 3 COO. 87,88 With the third generation ESI-PES apparatus, a high-resolution non-resonant spectrum of cold CH 3 COO À at 372.68 nm revealed a very weak feature for the 0 0 0 transition and two vibrational peaks, 14 1 and 8 1 (Fig. 7b). 45 When tuning the laser wavelength near the detachment threshold at 380.68 nm, peak 0 0 0 is better measured, giving rise to an accurate EA of 26 236 AE 8 cm À1 for CH 3 COO (Fig. 7a). However, the non-resonant spectrum required a very long time for signal accumulation due to the poor FC factor. 45 Because the CH 3 COO radical has a dipole moment of 3.47 D (Fig. 1), CH 3 COO À was found to support a DBS with a binding energy of 53 cm À1 . 45 Even though the FC factor is small for the 0-0 transition, there are strong FC activities to vibrationally excited levels in both the PE spectra and the photodetachment spectrum. When the detachment laser was tuned to the DBS vibrational resonances corresponding to the 14 01 and 8 01 vibrational levels, two resonant PE spectra ( Fig. 7c and d) were obtained, exhibiting signicant enhancement for peak 0 0 0 and conrming its origin as the 0-0 transition. The relevant nonresonant and resonant detachment transitions are shown schematically in the energy level diagram in Fig. 7e. Observation of Franck-Condon-inactive low-frequency vibrational modes Conventional non-resonant PES is governed by the FC principle, which means that only FC-allowed or totally symmetric vibrational modes can be observed usually. However, rPES involving optical excitations to DBS levels can "light up" FC-inactive modes due to the large optical absorption cross sections relative to non-resonant photodetachment processes. For example, the lowest-frequency symmetry-forbidden and FC-inactive n 20 bending mode of C 6 H 5 S, absent in the non-resonant spectra, is revealed prominently in the resonant PE spectrum in Fig. 6b, when the 20 02 DBS vibrational level is excited. 51 The combination of PDS and rPES has been shown to be particularly powerful to allow low frequency and FC-inactive modes to be observed. One of the most prominent examples is the deprotonated uracil radical ([U-H] or C 4 N 2 O 2 H 3 ), 5,44 which has a total of twenty seven fundamental vibrational modes (Table 1), including nineteen in-plane vibrational modes (A 0 ) and eight out-of-plane modes (A 00 ). With a dipole moment of 3.22 D for the neutral core, the deprotonated uracil anion ([U-H] À , Fig. 
1) was found to possess a DBS below the detachment threshold by 146 cm À1 . By scanning the laser wavelength up to $1700 cm À1 above the threshold, a total of forty-six DBS vibrational levels were observed. 5,44 The combination of PDS and rPES allowed fundamental vibrational frequencies for twenty-one modes to be observed, including seven out of the eight symmetryforbidden out-of-plane modes, as shown in Table 1. Even more vibrational modes could have been observed if we were to scan the laser to higher excitation energies to probe more DBS resonances. Intramolecular inelastic scattering In Fig. 8a and b, peak 0 0 0 is enhanced due to the Dv ¼ À1 autodetachment from the 10 01 and 9 01 vibrational levels of the DBS of C 6 H 5 O À , corresponding to peaks 3 and 5, respectively, in the photodetachment spectrum in Fig. 4a. 51 Peak a corresponds to the out-of-plane n 20 mode (Fig. 8c), which is symmetryforbidden, but it is present in the resonant PE spectra quite prominently. In the same way, when exciting to the vibrational levels 25 01 (Fig. 9a) and 16 01 (Fig. 9b) of the DBS of [U-H] À , the enhancement of peak 0 0 0 following the Dv ¼ À1 autodetachment is accompanied with prominent excitations of several lowfrequency modes (Fig. 9c), peaks a (27 1 ), b (26 1 ), c (27 2 ), and e (25 1 ), which are symmetry-forbidden in the non-resonant spectra. 5 Vibronic coupling or Herzberg-Teller coupling 4,63,72,89 has been previously invoked to explain the observations of FCinactive vibrational modes or anomalous vibrational intensities in non-resonant PES. While we cannot rule out the effects of vibronic coupling for the appearance of the low-frequency FCinactive and symmetry-forbidden bending modes in the resonant PE spectra shown in Fig. 8 and 9, a more interesting possibility is intramolecular inelastic rescattering due to the interactions of the autodetached outgoing electron with the neutral core. The rescattering process is possible because the DBS electron is highly diffuse and far away from the neutral core. Hence, there is a nite probability for the outgoing electron to interact inelastically with the neutral core because of exciting low-frequency vibrational modes, akin to processes in electron energy loss spectroscopy. 90,91 Take Fig. 9b as an example: autodetachment from the DBS vibrational level 16 01 (n 16 ¼ 577 cm À1 , Table 1) of [U-H] À yields an outgoing photoelectron with a kinetic energy of 431 cm À1 by subtracting the 146 cm À1 binding energy of the DBS. Because of the highly diffuse DBS orbital, it is conceivable that the autodetached electron may have nite probabilities to interact with the neutral core (i.e. half-collision or intramolecular rescattering) and lose energies to the bending modes n 27 (113 cm À1 ), n 26 (150 cm À1 ), and n 25 (360 cm À1 ), corresponding to peaks a, b and e, respectively. We have observed especially pronounced rescattering effects for autodetachment from the 16 01 DBS level of [U-H] À . This observation is not well understood currently and it would deserve some careful theoretical consideration. Conformer-selective rPES via DBSs One interesting application of rPES is to obtain conformerselective spectroscopic information for dipolar species because different conformers have different DBSs. If multiple conformers are present in the ion beam, a non-resonant PE The 3-hydroxyphenoxide anion has two nearly degenerate conformers, synand anti-m-HO(C 6 H 4 )O À , due to the different orientations of the hydrogen atom on the -OH group, as shown in Fig. 10a. 
The non-resonant PE spectrum at 517.45 nm (Fig. 10b) at low temperatures exhibits detachment transitions from both conformers, labeled S 0 0 0 , A 0 0 0 , and A ( S 23 1 ). 48,49 Note that the superscripts "A" and "S" designate the antiand synconformations, respectively. Peaks S 0 0 0 and A 0 0 0 , with binding energies of 18 850 cm À1 and 18 917 cm À1 , represent the EAs of the synand anti-m-HO(C 6 H 4 )O radicals, respectively. Peak A is a vibrational feature of mode n 23 of syn-m-HO(C 6 H 4 )O. With dipole moments of 3.10 D and 5.34 D for the synand antiradicals (Fig. 1), respectively, both the anionic conformers are able to support a DBS, as shown in the photodetachment spectrum in Fig. 10c. The weak peaks S 0 0 and A 0 0 , below the respective detachment thresholds by 104 cm À1 and 490 cm À1 (inset in Fig. 10c), represent the ground vibrational levels of the DBS for synand anti-m-HO(C 6 H 4 )O À , respectively. The larger DBS binding energy of anti-m-HO(C 6 H 4 )O À is consistent with the larger dipole moment of its neutral radical. A complicated detachment spectrum was observed with DBS resonances from both conformers: peaks A 1-A 17 are due to anti-m-HO(C 6 H 4 )O À , peaks S 1-S 8 are due to syn-m-HO(C 6 H 4 )O À , and peaks AS 1-AS 5 are due to overlapping vibrational levels of both conformers. Hence, by tuning the detachment laser to DBS levels of specic conformers, conformer-selective resonant PE spectra can be obtained. When the detachment laser is tuned to the DBS vibrational levels S 30 01 and S 28 01 of syn-m-HO(C 6 H 4 )O À , the resonant PE spectra display major enhancement of the S 0 0 0 peak as shown in Fig. 11a and b, where the A 0 0 0 peak is negligible. When the laser is tuned to the DBS levels A 27 01 and A 24 01 of antim-HO(C 6 H 4 )O À , the A 0 0 0 peak is greatly enhanced as shown in Fig. 11d and e, whereas the S 0 0 0 peak becomes negligible. In Fig. 11c and f, peaks A ( S 23 1 ) and C ( A 21 1 ) are enhanced due to autodetachment from DBS levels S 23 01 30 01 and A 21 02 , respectively. Such conformer-selective resonant PE spectra have been obtained from every DBS resonance in Fig. 10c, except the ve overlapping resonances of the two conformers. 49 Tautomer-specic rPES via the DBS of [Cy-H] À Tautomerism of nucleic acid bases plays an important role in the structure and function of DNA. For example, the deprotonation of cytosine can produce many tautomeric negative ions ([Cy-H] À ). 92 Previous calculations 93 found that the two most stable deprotonated anions in the gas phase are tKAN3H8b À and cKAN3H8a À (Fig. 12a) by deprotonation of H b and H a , respectively. The tKAN3H8b À anion was calculated recently to be more stable by 1.93 kcal mol À1 . 52 In Fig. 12f, the nonresonant PE spectrum of [Cy-H] À at 392.11 nm reveals three major peaks, labeled C 0, T 0 and C ( T 21 1 ). 52 Note that the superscripts "C" and "T" designate the tautomers of cKAN3H8a À and tKAN3H8b À . Peaks C 0 and T 0 represent the 0-0 detachment transitions and yield the EAs of cKAN3H8a and tKAN3H8b to be 3.047 eV and 3.087 eV, respectively, which are in excellent agreement with the calculated EAs. 52 The higher intensity of peak T 0 than C 0 is consistent with the computed relative stabilities of the two anionic tautomers. Hence, both tautomers are present experimentally even under our low temperature conditions. At 400.22 nm (Fig. 12b), two more vibrational features of cKAN3H8a, labeled A ( C 30 1 ) and B ( C 30 2 ), are observed. 
The cKAN3H8a and tKAN3H8b radicals are calculated to have dipole moments of 3.35 D and 5.55 D (Fig. 1), respectively, which are large enough to support a DBS for the corresponding anions. Distinct DBS vibrational resonances have been observed in the photodetachment spectra of tKAN3H8b À and cKAN3H8a À , allowing tautomer-specic resonant PE spectra to be obtained, as presented in Fig. 12c-e and g-i. The resonant PE spectra in Fig. 12c and d show enhancement of peak C 0, due to autodetachment from the C 21 01 and C 18 01 DBS vibrational levels of cKAN3H8a À , respectively. The highly enhanced peak A ( C 30 1 ) in Fig. 12e is due to resonant excitation to the C 29 01 30 03 DBS level followed by Dv ¼ À3 autodetachment, breaking the Dv ¼ À1 propensity rule. The resonant PE spectra in Fig. 12g-i all display a strongly enhanced T 0 peak due to autodetachment from DBS vibrational levels T 27 01 , T 17 01 and T 23 01 of tKAN3H8b À , respectively, whereas the C 0 peak from the cKAN3H8a À tautomer is negligible. 6 Quadrupole-bound excited states in NC(C 6 H 4 )O À Long-range charge-quadrupole interactions can form quadrupole-bound anions (QBAs). 3,94,95 The rhombic (BeO) 2 À cluster was rst suggested to be a QBA. 96 However, PES of a similar (MgO) 2 À cluster showed a relatively high electron binding energy, 97 suggesting that this cluster anion should probably be considered as a valence-bound anion. 3 Similar rhombic alkali-halide dimers, such as (NaCl) 2 and (KCl) 2 , and a series of complex organic molecules with vanishing dipole moments but large quadrupole moments have also been proposed to form QBAs. [98][99][100] Experimental studies of electron binding to quadrupolar molecules have been scarce. 101,102 A more recent example of QBAs was from Rydberg electron transfer to the trans-isomer of 1,4-dicyanocyclohexane, which has no dipole moment. 103 A valence-bound anion with a nonpolar core may support the excited quadrupole-bound state (QBS) just below the electron detachment threshold, if the neutral core possesses a large quadrupole moment. The 4-cyanophenoxide anion [NC(C 6 H 4 )O À , see Fig. 13 inset (a)] was found to be a good candidate in the search for the rst excited QBS. 54 The neutral radical, NC(C 6 H 4 )O, has two dipolar centers (-C^N and C-O) in the opposite direction, resulting in a small dipole moment of 0.30 D but a large quadrupole moment (traceless quadrupole moment: Q xx ¼ 5.4, Q yy ¼ 15.1, Q zz ¼ À20.5 DÅ). The dipole moment is much smaller than the 2.5 D critical value to form an excited DBS, but the large quadrupole moment may allow a QBS. Photodetachment spectroscopy of NC(C 6 H 4 )O À indeed revealed many resonances across the detachment threshold at 24 927 cm À1 , as presented in Fig. 13. A broad peak labeled 0 is observed, 20 cm À1 below the detachment threshold, due to resonant two-photon detachment. Since NC(C 6 H 4 )O À cannot support a DBS, peak 0 should represent the ground vibrational level of the QBS. The continuous baseline above the threshold represents the non-resonant detachment signals, while the seventeen peaks, labeled 1-17, are vibrational resonances of the QBS of NC(C 6 H 4 )O À . Inset (b) of Fig. 13 shows a high-resolution scan of resonant peak 2, revealing a rotational prole. Rotational simulations yield a rotational temperature between 30 and 35 K for the cryogenically cooled NC(C 6 H 4 )O À anion, consistent with previous results. 
5,44,45 The vibrational autodetachment processes via the QBS are found to be the same as those via the DBS, following the Dv ¼ À1 propensity rule. Seventeen resonant PE spectra were obtained, which together with the photodetachment spectrum yielded ten fundamental vibrational frequencies for the NC(C 6 H 4 )O radical. 54 Conclusions and outlook The development of the third-generation ESI-PES with a cryogenically cooled Paul trap and a high-resolution photoelectron imaging system has made it possible to conduct high-resolution spectroscopic investigations of solution-phase anions in the gas phase and, in particular, has enabled high-resolution studies of anions with noncovalent excited states (DBSs or QBSs). Photodetachment spectroscopy has been used to search for both dipole-and quadrupole-bound excited states of cryogenically cooled anions. Resonant PES has been performed via autodetachment from above-threshold vibrational levels of noncovalent excited states, resulting in highly non-Franck-Condon PE spectra and rich vibrational information. The weaklybound electron in the non-covalent excited states has been shown spectroscopically to have negligible effect on the neutral core. Hence, PDS and rPES can be combined to yield much richer vibrational information for the corresponding neutral radicals not accessible by other spectroscopic means. The resonant enhancement of the 0-0 transition in rPES via autodetachment from fundamental vibrational levels of DBSs or QBSs allows accurate measurements of EAs for neutrals which have large geometry changes from the corresponding anions. Low-frequency FC-inactive or symmetry-forbidden vibrational modes of various radical species have been observed in rPES. Both mode-selectivity and intramolecular inelastic rescattering have been observed for vibrational autodetachment via DBSs. Polar anions with multiple conformers or energetically close tautomers have different DBSs, which allow conformer-or tautomer-specic resonant PE spectra to be realized. There are many interesting questions that can be investigated using PDS and rPES, as well as experimental challenges. For all the anionic systems we have studied (Fig. 1), the smallest dipole moment (3.03 D) occurs for the neutral core of o-HO(C 6 H 4 )O À , which gives the smallest DBS binding energy of 25 cm À1 , 46 while the deprotonated 4,4 0 -biphenol anion [HO(C 6 H 4 ) 2 O À ] has a large neutral core dipole moment of 6.35 D with a DBS binding energy of 659 cm À1 . 54 The DBS binding energy generally increases with the magnitude of the dipole moment. But there are exceptions. For example, the phenoxy radical has a dipole moment of 4.06 D and the DBS of C 6 H 5 O À is found to have a binding energy of 97 cm À1 . Yet the DBS in synm-HO(C 6 H 4 )O À has a larger binding energy of 104 cm À1 while its neutral core has a smaller dipole moment of 3.10 D. This indicates that molecular structures and polarizability play important roles in the electron binding in DBSs. Thus, it would be interesting to investigate how the DBS binding energies depend on the magnitude of the dipole moment for different classes of molecular species 1,32 and if the 2.5 D empirical critical dipole moment holds for dipole-bound excited states. 104
9,854.6
2019-09-16T00:00:00.000
[ "Physics", "Chemistry" ]
An Intuitionistic Evidential Method for Weight Determination in FMEA Based on Belief Entropy Failure Mode and Effects Analysis (FMEA) has been regarded as an effective analysis approach to identify and rank the potential failure modes in many applications. However, how to determine the weights of team members appropriately, while accounting for the uncertainty of domain experts in FMEA decision-making, is still an open issue. In this paper, a new method to determine the weights of team members, which combines evidence theory, intuitionistic fuzzy sets (IFSs) and belief entropy, is proposed to analyze the failure modes. One of the advantages of the presented model is that the uncertainty of experts in the decision-making process is taken into consideration. The proposed method is data-driven, with objective and reasonable properties, and considers the weighting of risk more completely. A numerical example is given to illustrate the feasibility and effectiveness of the proposed method. Introduction Failure Mode and Effects Analysis (FMEA) has received attention from many researchers [1][2][3][4][5][6][7]; it can evaluate and analyze various risks in order to reduce them to acceptable levels or eliminate them directly. Moreover, FMEA is a complex system, so information fusion technologies are used in its evaluation processes, such as evidence theory [8,9] and D numbers [10]. Since uncertain information is inevitable in FMEA, methods such as Dempster-Shafer evidence theory have been widely used [11][12][13]. Though FMEA has been used in practice for many years, how to determine the weights of risk factors and team members is still an open issue. In order to define the weights more reasonably, scholars have proposed many methods. Intuitionistic fuzzy entropy was introduced by Lei and Wang [14] to determine the weights of risk factors, while in [15] the weights of risk factors are calculated as objective weights, and Boran et al. [16] determined subjective weights of risk factors. In the method proposed in [17], the weights of risk factors are simply determined by the weight calculation proposed by Boran et al. [16], although the intuitionistic fuzzy set (IFS) model is efficient in dealing with FMEA [18]. However, existing methods do not take the uncertainty into consideration for the relative importance of team members. In recent years, related concepts of intelligence have received great attention owing to the simulation of human intelligence [19,20]. As a result, it is reasonable to model experts' uncertainty in the process of decision-making in FMEA, which is important for improving the intelligent degree of the evaluation system. Thus, the measurement of uncertainty should also be regarded as content worth exploring. Research on uncertainty metrics has been discussed extensively [21][22][23][24]. For probability distributions, Shannon entropy is efficient for handling uncertainty [25]. However, it cannot deal with the uncertainty of basic probability assignments (BPAs) in Dempster-Shafer evidence theory [26]. To address this issue, a new belief entropy, named Deng entropy, was presented [26]. In recent years, the belief entropy has been widely used in many fields [27,28]. In this paper, a hybrid weight determination of team members in the FMEA model is proposed based on the evidence distance [29] and the belief entropy [26].
The evidence distance is used to measure the degree of conflict among team members, and the belief entropy is used to model the domain experts' uncertainty in FMEA. With the combination of the evidence distance, new weights of team members are obtained, which makes the final ranking of failure modes more effective and reasonable. The rest of this paper is organized as follows. In Section 2, some basic definitions of evidence theory, IFSs, and belief entropy are briefly introduced. In Section 3, the new method to determine the weights of team members is proposed. In Section 4, a numerical example and the computational process are illustrated; comparisons and a discussion are also presented. In Section 5, some conclusions of the proposed method are drawn. Preliminaries In this section, some basic concepts, including evidence theory, intuitionistic fuzzy sets and belief entropy, will be introduced. In evidence theory [30], there is a fixed set of N mutually exclusive and exhaustive elements, called the frame of discernment, which is symbolized by $\Omega = \{H_1, H_2, H_3, \dots, H_N\}$. $P(\Omega)$ denotes the power set of $\Omega$, composed of $2^N$ elements. Each element of $P(\Omega)$ represents a proposition [48]. Definition 1. A basic probability assignment (BPA) is a function from $P(\Omega)$ to [0,1], defined by [30] $m: P(\Omega) \to [0,1]$, and it must satisfy the following conditions: $m(\emptyset) = 0$ and $\sum_{A \subseteq \Omega} m(A) = 1$. The mass m(A) indicates the strength of the evidence's support for A, while m(Ω) represents the uncertainty of the evidence. If m(Ω) = 1, no useful information from the evidence exists. The basic probability assignment (BPA) function, plausibility function (PF), belief function (BF) and other trust quantization functions are described as follows. Each function has a clear definition with physical meaning, and there are corresponding relationships among them. Definition 2. Given a BPA m, for a proposition A ⊆ Ω, the belief function Bel: $2^\Omega \to [0,1]$ is defined as [30] $Bel(A) = \sum_{B \subseteq A} m(B)$. (3) The plausibility function Pl: $2^\Omega \to [0,1]$ is defined as $Pl(A) = 1 - Bel(\bar{A}) = \sum_{B \cap A \neq \emptyset} m(B)$, where $\bar{A} = \Omega - A$. The quantity Bel(A) can be seen as a measure of one's belief that the hypothesis A is true and can be viewed as a lower limit function on the probability of A. The plausibility Pl(A) can be interpreted as the degree to which A is not refuted by the evidence and can be seen as an upper limit function on the probability of A. Based on the classical evidence theory, the combination rules to aggregate multiple sources are defined as follows. Definition 3. Assume that there are two bodies of evidence m 1 and m 2 defined on Ω, respectively; m 1 and m 2 can be combined with Dempster's orthogonal rule as follows [48,49]: $m(A) = \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C)$ for $A \neq \emptyset$, with $m(\emptyset) = 0$, where $K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)$. Here K (the conflict coefficient) is called the degree of conflict, which measures the degree of conflict between m 1 and m 2 . If K = 0, it means that there is no conflict between m 1 and m 2 , and if K = 1, it means that m 1 and m 2 are in complete contradiction. In recent years, more and more scholars have paid attention to improving the combination rules [50,51]. Evidence Distance Evidence distance is regarded as an effective method to measure the conflict of evidence. Here are some of the basic concepts: Definition 4. Assume that m 1 and m 2 are two BPAs defined on the same frame of discernment Ω, which contains N mutually exclusive and exhaustive hypotheses. Namely, it can be expressed as Ω = {H 1 , H 2 , ..., H N }.
The basic definition is as follows [29]: $d_{BPA}(m_1, m_2) = \sqrt{\frac{1}{2}\,(\vec{m}_1 - \vec{m}_2)^T \, \underline{\underline{D}} \, (\vec{m}_1 - \vec{m}_2)}$, where $\vec{m}_1$ and $\vec{m}_2$ are the BPAs and $\underline{\underline{D}}$ is a $2^N \times 2^N$ matrix whose elements are $D(A, B) = \frac{|A \cap B|}{|A \cup B|}$, with A, B ⊆ Ω. Another way to represent $d_{BPA}$ is $d_{BPA}(m_1, m_2) = \sqrt{\frac{1}{2}\left(\|\vec{m}_1\|^2 + \|\vec{m}_2\|^2 - 2\,\langle \vec{m}_1, \vec{m}_2 \rangle\right)}$, where $\langle \vec{m}_1, \vec{m}_2 \rangle = \sum_{i}\sum_{j} m_1(A_i)\, m_2(A_j)\, \frac{|A_i \cap A_j|}{|A_i \cup A_j|}$ with $A_i, A_j \in P(\Omega)$, i, j = 1, 2, ..., $2^N$. To combine multiple sources of evaluation and better solve the combination issues of highly conflicting evidence, a weighted average method was proposed by Deng et al. [52]. Intuitionistic Fuzzy Set Since the intuitionistic fuzzy set (IFS) was proposed by Atanassov as a generalization of fuzzy sets in 1986, the aggregation of fuzzy sets and IFS theory have received a lot of attention in the past few years [53,54]. In classical set theory, the relationship between an element and a set is only Belongs to or Does not belong to. The central idea of the traditional fuzzy set is to expand the characteristic function to the closed interval [0, 1]. Based on this, intuitionistic fuzzy sets were introduced to express uncertain information better. Here are some of the basic definitions: Definition 5. An intuitionistic fuzzy set (IFS) A on the space X is defined by two functions, $A = \langle A^+, A^- \rangle$, where $A^+(x)$ represents the degree of membership of x in A and $A^-(x)$ represents the degree of nonmembership of x in A. Furthermore, it satisfies the condition that [55] $0 \leq A^+(x) + A^-(x) \leq 1$ for all $x \in X$. The degree of hesitancy of x is defined as $\pi_A(x) = 1 - A^+(x) - A^-(x)$. Thus, the membership grade of x in the IFS A can be expressed by the tuple $A(x) = \langle A^+(x), A^-(x), \pi_A(x) \rangle$. With the development of IFS and evidence theory, the relationship between these two mathematical models has been investigated more and more. Here is a brief introduction. Assume that there exists an IFS A. Three kinds of situations are differentiated, denoted x ∈ A, x ∉ A, and the situation of hesitation, when neither of the two hypotheses can be approved or rejected. In this case, the relationship between the IFS and evidence theory can be found by mathematical modelling, which can be expressed as [56] $m(\{x \in A\}) = A^+(x)$, $m(\{x \notin A\}) = A^-(x)$ and $m(\{x \in A, x \notin A\}) = \pi_A(x)$. Recalling evidence theory, the IFS A can also be expressed in another form [56], namely as the belief interval $[A^+(x),\, 1 - A^-(x)]$. Thus, the belief interval of a proposition A is defined as $[Bel(A), Pl(A)]$. Definition 6. Assume that there exist two alternatives x i and x j . Based on the conversion process, the belief intervals of these two alternatives can be compared as in [57], where P(x i > x j ) expresses the degree of possibility of x i > x j . Belief Entropy In classical information science, Shannon entropy has been used in many applications [58]. Here are some of the basic definitions: Definition 7. Shannon entropy is defined as [25] $H = -\sum_{i=1}^{N} p_i \log_b p_i$, where N is the number of basic states, p i denotes the probability of state i, and p i satisfies $\sum_{i=1}^{N} p_i = 1$. If the unit of information is the bit, then b = 2, and Shannon entropy is expressed as $H = -\sum_{i=1}^{N} p_i \log_2 p_i$. However, since Dempster-Shafer evidence theory has been widely used in many fields, how to measure uncertainty in evidence theory is still an issue worth exploring. To measure uncertain information better, a belief entropy, named Deng entropy, was presented to deal with the uncertainty measure of BPAs based on Shannon entropy [25]. Here are some of the basic definitions: Definition 8. In a frame of discernment X, the belief entropy is defined as [26] $E_d(m) = -\sum_{A \subseteq X} m(A) \log_2 \frac{m(A)}{2^{|A|} - 1}$, where |A| is the cardinality of the proposition A and E d (m) expresses the belief entropy of the basic probability assignment. In particular, the belief entropy degenerates to the Shannon entropy if the belief is only assigned to single elements.
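Since both quantities are central to the weighting scheme developed below, a small self-contained Python sketch may help fix ideas. The BPAs over a two-element frame {Yes, No} are invented for illustration and do not come from the paper's case study; the last check also verifies the degenerate case noted above, that Deng entropy reduces to Shannon entropy when mass sits only on singletons.

```python
from math import log2, sqrt

def deng_entropy(m):
    """E_d(m) = -sum_A m(A) * log2( m(A) / (2^|A| - 1) ) for a BPA m."""
    return -sum(v * log2(v / (2 ** len(A) - 1)) for A, v in m.items() if v > 0)

def shannon_entropy(p):
    return -sum(x * log2(x) for x in p if x > 0)

def jousselme_distance(m1, m2):
    """d_BPA = sqrt(0.5 * (m1 - m2)^T D (m1 - m2)) with D(A, B) = |A&B| / |A|B|."""
    focal = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    diff = [m1.get(A, 0.0) - m2.get(A, 0.0) for A in focal]
    return sqrt(0.5 * sum(diff[i] * diff[j] * (len(A & B) / len(A | B))
                          for i, A in enumerate(focal)
                          for j, B in enumerate(focal)))

Y, N = frozenset({"Yes"}), frozenset({"No"})

# Degenerate case: mass only on singletons -> Deng entropy equals Shannon entropy.
m_singletons = {Y: 0.7, N: 0.3}
assert abs(deng_entropy(m_singletons) - shannon_entropy([0.7, 0.3])) < 1e-12

# Two conflicting assessments of the same proposition and their distance.
m1 = {Y: 0.6, N: 0.2, Y | N: 0.2}
m2 = {Y: 0.2, N: 0.6, Y | N: 0.2}
print("E_d(m1) =", round(deng_entropy(m1), 4))
print("d_BPA(m1, m2) =", round(jousselme_distance(m1, m2), 4))
```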
With the development of evidence theory, the belief entropy has been researched more and more [59,60]. The Proposed Method In this section, a new method to determine the weights of team members based on evidence theory, intuitionistic fuzzy sets and belief entropy is proposed to rank the failure modes. The function of Failure Mode and Effects Analysis (FMEA) team members is to assess the risk factors with linguistic variables, such as very low, low, medium, high, very high and so on. Assume that there are p cross-functional team members TM k (k = 1, ..., p) in an FMEA team; after discussion, the experts prioritize m potential failure modes FM i (i = 1, ..., m). Each failure mode is evaluated on the risk factors RF j (j = 1, 2, 3). λ k ij is the weight of decision makers, which reflects the relative importance of the kth decision maker with respect to the jth risk factor for the ith potential failure mode. In addition, intuitionistic fuzzy numbers [61], which are represented by ordered pairs of membership and non-membership degrees corresponding to the intuitionistic fuzzy sets, are used to express the relevant conversion process simply. Assume that the IFN α k ij = (µ k ij , υ k ij ) is provided by TM k on the assessment of FM i for RF j . The proposed method consists of eleven steps, and the flowchart of the proposed approach is shown in Figure 1. Step 1: Determine the linguistic terms for each failure mode and transform them into IFNs. The specific judgement levels are divided into ten linguistic parts (see Tables 1-3). Step 2: Evaluate the linguistic terms of relative importance for each risk factor and transform them into IFNs. Similarly, the specific judgement levels are divided into five parts (see Table 4), which contain Very important, Important, Medium, Unimportant and Very unimportant. Step 3: Convert all IFNs into BPAs for all failure modes. Following the correspondence between IFSs and evidence theory given above, the concrete form is $m^k_{ij}(\text{Yes}) = \mu^k_{ij}$, $m^k_{ij}(\text{No}) = \upsilon^k_{ij}$, and $m^k_{ij}(\text{Yes, No}) = 1 - \mu^k_{ij} - \upsilon^k_{ij}$. Step 4: Determine the weights of risk factors. For the three judgement models S, O and D, each model can be transformed into an IFS to represent the information value. Based on the weight calculation proposed by Boran et al. [16], the weight w j can be obtained; the resulting weights satisfy the condition that $\sum_{j=1}^{n} w_j = 1$. Step 5: Determine the weights of team members λ k ij by using the evidence distance introduced by Jousselme et al. [29]. Assume two groups of BPAs m q ij (Yes), m q ij (No), m q ij (Yes, No) and m t ij (Yes), m t ij (No), m t ij (Yes, No), (q, t = 1, 2, ..., p), are two bodies of evidence (BOE) obtained by two different team members. In this paper, a similarity function based on the evidence distance d(m q ij , m t ij ) between m q ij and m t ij is used, $Sim(m^q_{ij}, m^t_{ij}) = 1 - d(m^q_{ij}, m^t_{ij})$. The support degree $Sup(m^k_{ij}) = \sum_{t \neq k} Sim(m^k_{ij}, m^t_{ij})$ represents the degree to which m k ij is supported by the other bodies of evidence, and the reliability (credibility) degree is defined as $Crd(m^k_{ij}) = Sup(m^k_{ij}) \big/ \sum_{q=1}^{p} Sup(m^q_{ij})$. The Crd(m k ij ) defines the weight λ k ij . Step 6: Determine the weights of team members E d k ij by using the belief entropy [26]. Based on the previous steps, the information value of all team members has been transformed into IFSs. Then, another weight E d k ij is calculated, which expresses the amount of uncertainty for all propositions.
The specific equation applies the belief entropy of Definition 8 to each team member's BPA, $E_d{}^k_{ij} = E_d(m^k_{ij})$. Step 7: Calculate the total weights of team members w k ij by combining λ k ij and E d k ij . After obtaining the two weights, the total weights of FMEA team members can be calculated in multiplicative form, $w^k_{ij} = \lambda^k_{ij}\, E_d{}^k_{ij} \big/ \sum_{k=1}^{p} \lambda^k_{ij}\, E_d{}^k_{ij}$, where p is the total number of team members for each risk factor. Step 8: Calculate the weighted average of evidence considering the team members' effect in the FMEA model. For each failure mode FM i , there exists a group of basic probability assignment functions to express the degree of importance, which can be denoted as m(Yes), m(No) and m(Yes, No). Thus, after obtaining the w k ij weights, the weighted average can be obtained as $\tilde{m}_{ij}(\cdot) = \sum_{k=1}^{p} w^k_{ij}\, m^k_{ij}(\cdot)$. Step 9: Calculate the weighted average of evidence considering the risk factor weights w j together with the team members. Step 10: Calculate the belief intervals. After obtaining the weighted average of evidence in Step 9, the belief interval [Bel(FM i ), Pl(FM i )], which is used to show the degree of support and opposition, can be determined as $Bel(FM_i) = m_i(\text{Yes})$ and $Pl(FM_i) = m_i(\text{Yes}) + m_i(\text{Yes, No})$. Step 11: Rank all kinds of failure modes. Based on the belief intervals, the risk of different failure modes can be compared with one another by using Equation (19). After the process of comparison, the ranking list in FMEA can be obtained. Application In this section, an example is used to illustrate the complete procedure of the proposed method. The risk evaluation process has a great impact in many fields, such as multi-criteria decision-making (MCDM) [62][63][64][65] and other works [66,67]. In most situations, the weights for each risk factor may change the final result and lead the decision maker to make an unwarranted judgement. As an easier and lower-cost way to improve the production process, Failure Mode and Effects Analysis plays an increasingly important role in modern society. Thus, an FMEA team consisting of five functional team members identifies potential failure modes in an electronics manufacturing project and wants to prioritize them in terms of their risk factors such as S (Severity), O (Occurrence) and D (Detection). In addition, twelve failure modes are identified. Owing to the difficulty of evaluating the risk factors, the FMEA team members in this numerical example are supposed to assess them employing the linguistic terms. The specific transformation process is shown in Tables 1-3. Step 1: The assessment information of the twelve failure modes on each risk factor, which was provided by the five team members, is illustrated in Table 5. Each team member comes from a different department, such as manufacturing, engineering, design and technique. Considering their different specialties and functions, the weights are determined by their degree of importance. Step 2: Evaluate the linguistic terms of relative importance for each risk factor and transform them into IFNs (see Table 6). Step 3: Convert those IFNs into BPAs for all failure modes. The specific transforming equation is shown in Equations (12)-(14). Step 4: Determine the weights of risk factors. In this paper, the weight calculation of risk factors in this example is shown in Equations (27) and (28). In addition, the results are shown in Table 7. Step 5: Determine the first weights of team members λ k ij by using the evidence distance [29]. The specific value for each mode is shown in Table 8. Step 6: Determine the second weights of team members E d k ij by using the belief entropy. In addition, the results are shown in Table 9.
Step 7: Calculate the total weights of team members w k ij by combining λ k ij and E d k ij . After the process of normalization, the specific values of the weights are shown in Table 10. Step 8: Calculate the weighted average of evidence considering the team members' effect in the FMEA model. The results are shown in Table 11. Step 9: Calculate the weighted average of evidence considering the risk factors with the team members. By introducing the consideration of risk factors, the weighted average of evidence is calculated. In addition, the results are shown in Table 12. Step 10: Calculate the belief intervals. With Equations (41) and (42), the final results are also shown in Table 12. Step 11: Rank all kinds of failure modes. After the process of comparison, the final ranking can be obtained (see Table 13). Here, we present some discussion about the proposed method. In previous related research, many scholars have tried to enhance the effectiveness and applicability of FMEA based on intuitionistic fuzzy sets, evidence theory and so on. The ranking comparisons of all the related works are shown in Table 13. There are some ranking differences among those methods. In general, the higher-ranked modes are FM 1 , FM 2 , and the lowest-ranked mode is FM 12 , which is consistent with the previous three methods. For other failure modes, it can be seen that, in the evaluation of the proposed method, the overall ordering of FM 3 and FM 4 is slightly higher than in the previous methods, while the remaining rankings are generally consistent. The main reasons are summarized as follows: The relative importance of team members is treated differently. In the method proposed by Liu et al. [15], the relative weights were assumed in advance, namely 0.10, 0.15, 0.20, 0.25 and 0.30. In addition, in the intuitionistic fuzzy TOPSIS method, the impacts of team members are not considered. Furthermore, the method proposed by Guo [17] only considered the conflict among team members in a simple way. In our proposed method, the weights of team members are defined by using both the evidence distance [29] and the belief entropy [26]. The evidence distance is used to show the degree of conflict among team members, while the belief entropy is used to reflect the uncertainty of the information of each team member. The combination of the two can express the evaluated information completely and effectively. To be specific, since the uncertainty contained in the results of the expert evaluations for FM 1 , FM 2 and FM 12 is relatively low in this application, the weights obtained by considering the entropy factor have little effect on the final evaluation. Moreover, for FM 3 and FM 4 , the overall uncertainty of the expert evaluation is relatively high, so the second weights obtained by calculating the belief entropy have a relatively large influence on the overall evaluation result, which leads to the final result shown in Table 13. Thus, with the differences mentioned above, the aggregation approaches of the various methods are different. As a comparison, the process of our proposed method to determine the weights of team members is particularly scientific and effective, with strong practical significance and good performance.
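To make Steps 5-8 concrete, the following self-contained sketch walks one failure mode and one risk factor through distance-based credibility weights, entropy-based weights, their normalized product, and the weighted-average BPA. The five expert BPAs are illustrative numbers only, not the values in Tables 5-12, and the similarity function Sim = 1 - d_BPA is the standard choice assumed here.

```python
from math import log2, sqrt

Y, N = frozenset({"Yes"}), frozenset({"No"})
FOCAL = (Y, N, Y | N)                      # focal elements used for each assessment

def d_bpa(m1, m2):
    """Jousselme distance restricted to the three focal elements used here."""
    jac = lambda a, b: len(a & b) / len(a | b)
    diff = [m1.get(A, 0.0) - m2.get(A, 0.0) for A in FOCAL]
    return sqrt(0.5 * sum(diff[i] * diff[j] * jac(A, B)
                          for i, A in enumerate(FOCAL)
                          for j, B in enumerate(FOCAL)))

def deng_entropy(m):
    """Belief (Deng) entropy of a BPA."""
    return -sum(v * log2(v / (2 ** len(A) - 1)) for A, v in m.items() if v > 0)

# Illustrative BPAs from five hypothetical team members for one failure mode / risk factor.
experts = [
    {Y: 0.75, N: 0.10, Y | N: 0.15},
    {Y: 0.60, N: 0.25, Y | N: 0.15},
    {Y: 0.50, N: 0.30, Y | N: 0.20},
    {Y: 0.80, N: 0.10, Y | N: 0.10},
    {Y: 0.35, N: 0.40, Y | N: 0.25},
]

# Step 5: distance-based support and credibility weights (lambda).
sup = [sum(1.0 - d_bpa(mk, mt) for t, mt in enumerate(experts) if t != k)
       for k, mk in enumerate(experts)]
lam = [s / sum(sup) for s in sup]

# Step 6: entropy-based weights, as described in the text.
ent = [deng_entropy(mk) for mk in experts]

# Step 7: combine multiplicatively and normalize.
raw = [l * e for l, e in zip(lam, ent)]
w = [r / sum(raw) for r in raw]

# Step 8: weighted-average BPA over the team, and the resulting belief interval.
avg = {A: sum(w[k] * experts[k][A] for k in range(len(experts))) for A in FOCAL}
print("combined weights:", [round(x, 3) for x in w])
print("belief interval [Bel, Pl]:", (round(avg[Y], 3), round(avg[Y] + avg[Y | N], 3)))
```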
Conclusions FMEA has been regarded as an effective analysis approach to identify and rank the potential failure modes in many applications. However, the uncertainty of the experts' decision process is usually not taken into consideration, even though it is an essential factor in decision-making. In this paper, the impact of the experts' uncertainty is modelled. A hybrid method to determine the weights of team members is proposed based on the belief entropy. The application in FMEA illustrates the efficiency of the proposed method. Funding: The authors greatly appreciate the reviewers' suggestions and the editor's encouragement. This research is supported by the Chongqing Overseas Scholars Innovation Program (No. cx2018077). Conflicts of Interest: The authors declare no conflicts of interest.
5,171.6
2019-02-01T00:00:00.000
[ "Computer Science" ]
Student sense-making on homework in a sophomore mechanics course When students solve physics problems, physics instructors hope that they use and interpret algebraic symbols in coordination with their conceptual understanding, their understanding of geometric relationships, and their intuitions about the physical world. We call this process physics sense-making. “Plug-and-chug” and “template” problem solving strategies, which are common for many students, exclude sense-making. We have designed a mechanics course for sophomore, undergraduate students that emphasizes sense-making and traditional physics content in equal measure. Sense-making is supported in all aspects of the course: during in-class activities, on augmented homework assignments, and on exams. While sense-making prompts on homework assignments are strongly scaffolded at the beginning of the course, these supports fade as the course progresses. In this paper, we discuss an analysis of students’ homework responses to open-ended sense-making prompts throughout the course. Physics instruction seeks to support the development of future physicists by cultivating expert-like problem-solving skills in students. Experts commonly use various sense-making strategies when solving physics problems but students rarely do [1]. Sense-making is the use and interpretation of algebraic symbols in coordination with one's conceptual understanding, their understanding of geometric relationships, and their intuitions about the physical world. "Plug-and-chug" and "template" problem-solving strategies, which are common for many students, exclude sense-making [2,3]. Lenz and Gire have found that while professors believe reflection-based sense-making abilities are important skills for students to develop, they do not explicitly teach these strategies in their courses [4]. Little is known about how explicit instruction of sense-making strategies affects student performance of these strategies; Warren's work with algebra-based physics and evaluation strategies is one example [5]. This paper bridges the gap between instructional goals and student development of these sense-making skills. We do this by putting forth an analysis of data collected from a middle-division physics course that specifically augments traditional homework (focused on physics content) with explicit sense-making prompts to help students mature their sense-making abilities. II. SENSE-MAKING IN THE TECHNIQUES OF THEORETICAL MECHANICS COURSE The data for this study were collected from a new physics course, Techniques of Theoretical Mechanics, at Oregon State University, a large, public, research-intensive institution. This course was created and taught by author EG to help students make the transition between introductory and upper-division physics through an emphasis on sense-making. The physics topics of the course are: finding equations of motion with Newton's Laws by solving differential equations, velocity-dependent drag forces, rockets with variable mass, Lagrangian and Hamiltonian techniques, and special relativity. The class met for 50 minutes 3 times per week and small-group problem-solving activities happened at least once per week.
The prerequisite courses are multivariable calculus and the first 2 quarters of introductory physics (which include Newtonian mechanics for translational and rotational motion and waves). Of the 27 students who completed the course, 3 were co-enrolled in the last quarter of the introductory physics sequence, 11 students had completed the introductory sequence the previous quarter, and 5 students had already taken most of the junior-year Paradigms courses [6]. The theme of this course is physics sense-making. The sense-making goal was treated on an equal footing with the physics content goals of the course: it appeared on the syllabus, was discussed in almost every class meeting, appeared in every homework problem, and was included on exams. During in-class small-group problem-solving activities, the instructor asked for several volunteers to come to the board to demonstrate a sense-making strategy they used for the problem. During the second week of class, while considering a Newtonian mechanics problem, the class brainstormed a list of strategies that could be used to check the correctness of the answer of the problem (Table I). This list was posted on the course website for reference. We describe how sense-making was included on homework in Section III. Sense-making was primarily discussed in terms of evaluation strategies: strategies for evaluating the correctness of an answer at the end of a solution (and at intermediate steps). Some discussion in the course focused on sense-making at the beginning of a problem to orient oneself, but this discussion was less formalized than the evaluation strategies. We intend to make this orientation aspect of sense-making more formal in the future. III. METHODS Students were assigned 10 weekly homework assignments during the 10-week term. The first 8 assignments contained classical mechanics problems and the last 2 assignments contained special relativity problems. The coursework was structured to support students' sense-making development as well as provide multiple lenses into students' application of these strategies. Each assignment included 2-4 problems with 2-10 explicit sense-making prompts embedded in the problems. This course was designed to implement Rosenshine's scaffolding and fading approach to teaching higher-level cognitive strategies as a way to aid students in learning to utilize sense-making strategies [7]. Thus, the first 3 homework assignments had sense-making prompts that directed students to use specific sense-making strategies (e.g.
after finding an equation for the range of a projectile on an incline: "Sense-Making: Consider Special Cases Does your result for the maximum range make sense if the ground is horizontal?If the ground is vertical (like right up against a cliff)?").On Homework 4-7, the prompts for sense-making were intentionally faded; they became less prescriptive and more open-ended (e.g."Find the equation of motion (acceleration) of the bead.Use at least two sense-making strategies to make sense out of this equation.").On Homework 8, the sense-making prompts were faded yet again; they did not specify a particular number of strategies (e.g."Be sure to do some sense-making around your result").The sense-making prompts became more specific again for the last 2 homework assignments which featured special relativity problems.Twenty-nine students turned in homework during the term.Student solutions were scanned twice: before grading (for a clean copy of the students' work) and after grading (so we could record the feedback students were receiving about their sense-making performance). Students were provided with written feedback on the content of both their solutions to the physics problems and their responses to the sense-making prompts.The feedback took the form of short questions aimed at drawing the student's attention to places where errors occurred and asking them to consider what changes they might have made to their solution or how they displayed their work and their reasoning.Homework was promptly graded and returned to students, typically within 1-2 class days.Often, the grader gave a brief announcement to the course as a whole identifying common errors on the assignment and suggesting how students might improve for future assignments.Detailed solutions to the homework, including responses to sense-making prompts, were made available on the course website. In this paper, we report on the sense-making strategies students used on Homework 4-8 where they were asked to use sense-making strategies but these strategies were not prescribed.These assignments contained a total of 15 sense-making prompts: 12 that specified the number of strategies to use and 3 that did not.We received assignments from 27 study participants, although not all students responded to every sense-making prompt. Many responses contained several sense-making strategies, and each strategy was individually coded using an emergent coding scheme [8].Although it was generated independently, unsurprisingly the strategies found were similar to the student generated list found in Table I.Often, students labeled which strategy they thought they were using.We coded the students' work based on what the student actually did, thus our codes sometimes differed from how the student labeled the strategy. IV. RESULTS The codes of student work and their accompanying descriptions are presented in Table II.In the end, the data set contains 333 responses to sense-making prompts; out of those came 825 coded sense-making strategies.During coding, 18 unique sense-making strategies were identified (Table II).These strategies were broken into 3 categories: dimensions, cases, and other strategies.Many of these strategies aligned with those identified by the students (Table I) though not all did.Examples of students' work demonstrating some of the most frequently used sensemaking strategies can be found in Fig. 1-6. 
The most common sense-making strategy was to check the units or dimensions of an answer.Students performed this strategy in a number of different ways.By far the most common way was to substitute the fundamental dimensions of quantities (e.g.length, mass, and time) into the answer equation and then check that the dimensions on both sides of the equals sign were the same (Fig. 1). FIG. 1. Student example of using fundamental dimensions to analyze the Lagrangian of a free particle. We also observed students perform this process with (1) units (e.g.meters, grams, and seconds) instead of fundamental dimensions or (2) compound dimensions (e.g.acceleration, force, and energy) (Fig. 2).The strategy of using compound dimensions was advocated for by the instructor on the second day of class as having 2 advantages: (1) it can be faster than breaking everything down into fundamental dimensions, and (2) it fosters deeper understanding of the connections between quantities.However, we found that students infrequently used this compound dimension strategy when checking dimensions on their homework solutions. The next most common sense-making strategy was to check special or limiting cases.The special case strategy is when a student evaluates their equation-answer with precise values that allow for meaningful interpretation/comparison (Fig. 3).A limiting case strategy is similar to special case but requires a student to take the limit of the equationanswer as a particular variable goes to a specific value, typically 0 or ∞ (Fig. 4).When labeling their homework solutions, students often did not distinguish between these 2 strategies and often called a limiting case a "Special Case."This led to many instances where our codes differed from the students' labels.Another common sense-making strategy was conceptual connection (Fig. 5).When using this strategy, students explained why an answer made sense in terms of their conceptual understandings of the physical situation.Another frequently used strategy was the functional dependence strategy.A student using this strategy com-mented on whether the behavior of the function in an equation-answer makes sense for the physical situation.The students often called this strategy "Proportionality" from their list of strategies (Table I).Using this strategy, a student might comment on whether (a) the answer is expected to depend on a particular physical quantity, (b) the answer should increase or decrease when a physical quantity is varied, or (c) if the qualitative behavior of the equation-answer matches the expected behavior of the physics system, such as having a maximum value or oscillatory behavior.Students often confused this strategy with limiting case, mislabeling one as the other. V. 
LIMITATIONS The students analyzed are primarily physics majors at a large, public, research institution.While students are required to turn in individual work they are encouraged to work together.An individual's assignment may not be entirely representative of individual thought; due to the nature of homework and this encouragement to work together the collected data is a polished version of student reasoning.While this study was conducted in a course with explicit attention to sense-making strategies, the differences in teaching techniques from traditional teaching techniques are not fully documented.We make no new claims about student reasoning in a course that does not emphasize sense-making.This paper focuses solely on analysis of written work, therefore the coding of strategies does not reflect students' evaluation process. VI. DISCUSSION AND IMPLICATIONS Students were found to gravitate towards fundamental dimensions, special case, conceptual connection, and functional dependence as their primary sense-making strategies.Not all of these strategies were intended to be emphasized as the best strategies in course instruction.Specifically, checking units/dimensions was considered to be "low-hanging fruit" and students were told that while it is always good to do, it alone is not enough.Thus it is not surprising that students heeded this advice and checked units/dimensions on the majority of their work, often as a first step before further sense-making. Furthermore it is unsurprising that special case and conceptual connection were prevalent strategies, especially due to student's comfort with the physical connection of the classical mechanics material.Students were most likely comfortable interpreting the results of special case analyses and able to draw on their experience with introductory physics and the physical world to make conceptual connections with their solutions. Rosenshine's scaffolding technique was implemented to cultivate a course-wide norm of sense-making [7].The effectiveness of this scaffolding was analyzed through students' choices of sense-making strategies on the homework.While this scaffolding worked well for many strategies, students did not choose to use some of the strategies that were emphasized in the course.Two such strategies are: using power series expansions to understand a solution and predicting the form of the solution on the beginning of problem solving.While these strategies do not possess the widespread applicability of fundamental dimensions they are useful strategies that are applicable to the homework prompts analyzed here.We value the use of these strategies and future instruction will be tailored to increase their emphasis. 
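For readers unfamiliar with the fundamental-dimensions strategy discussed above, here is a minimal sketch, not part of the course materials, that represents each quantity by its exponents of mass, length and time and verifies that a projectile-range expression, R = v² sin(2θ)/g, indeed has dimensions of length.

```python
# Dimensions are tuples of exponents (M, L, T); multiplication adds exponents,
# division subtracts them. This mirrors the "fundamental dimensions" strategy.
def mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    return tuple(x - y for x, y in zip(a, b))

def power(a, n):
    return tuple(n * x for x in a)

DIMENSIONLESS = (0, 0, 0)
LENGTH = (0, 1, 0)
TIME = (0, 0, 1)
VELOCITY = div(LENGTH, TIME)          # L T^-1
ACCELERATION = div(VELOCITY, TIME)    # L T^-2

# Range of a projectile: R = v^2 sin(2*theta) / g  (the sine factor is dimensionless)
range_dim = div(mul(power(VELOCITY, 2), DIMENSIONLESS), ACCELERATION)
assert range_dim == LENGTH, f"dimension check failed: {range_dim}"
print("R = v^2 sin(2θ)/g has dimensions", range_dim, "= length, as expected")
```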
Other changes that will be made for future instruction are to emphasize the distinction between a special case and a limiting case, and how to choose advantageous cases to analyze. While students often used these strategies, they did not always choose cases that would yield the most insight into the problem. Lastly, future instruction will expand the sense-making emphasis from a reflection-centered approach to include using sense-making as a means of orienting oneself to the problem. Despite these intended modifications, this approach of having the sense-making goal treated on an equal footing with the physics content goals of the course is a promising approach to support middle-division students in making sense of physics problems like experts. For instructors who believe that sense-making goals should be more explicit, we found our scaffolded approach to be highly successful. In particular, the combination of an interactive course environment and the comprehensive inclusion of sense-making in class meetings, homeworks, and exams proved highly successful at promoting sense-making.
Published in the 2017 PERC Proceedings, edited by Ding, Traxler, and Cao; peer-reviewed, doi:10.1119/perc.2017.pr.035; published by the American Association of Physics Teachers under a Creative Commons Attribution 3.0 license.
FIG. 2. Student example of using compound dimensions to analyze the Lagrangian of a free particle.
FIG. 3. Student example of using special case to analyze an equation of the velocity as a function of mass for a rocket with linear air resistance, as seen by them setting m = 0.
FIG. 4. Student example of using limiting case to analyze an equation of the velocity as a function of mass for a rocket with linear air resistance, as seen by them taking m → 0.
FIG. 5. Student example of using conceptual connection to analyze the constraint force (λ), found through undetermined Lagrange multipliers, of a particle confined to the surface of a cylinder.
FIG. 6. (a) Equations of motion. (b) Student example of using functional dependence to analyze the equations of motion seen in part (a), of a spherical pendulum.
TABLE I. List of sense-making strategies generated by students during Week 2.
TABLE II. Sense-making strategy codes, descriptions, and frequency (% of 825 total code applications), including: compares answer to real-world experiences or knowledge from a previous course (7%); Sign: checks that the sign of the answer makes sense with their coordinate system (3%); Visualization: understands the solution through figures, diagrams, graphs, etc. (2%); 2nd Way: compares answers using 2 solution methods (2%); Algebra: states that the answer is correct because the algebra was done correctly (1%); Assumptions: checks for consistency between the answer and assumptions made at the beginning of the solution (<1%); Reasonable Magnitude: states or argues why the magnitude of the answer is reasonable (<1%); Authority (<1%); Strategy Identification: identifies potential sense-making strategies but doesn't implement them (<1%); No Sense-Making: does the problem but does not answer the sense-making prompt (3%).
3,609.2
2018-03-01T00:00:00.000
[ "Physics", "Education" ]
Simulation of Low Impact Development ( LID ) Practices and Comparison with Conventional Drainage Solutions † The present work aims at quantifying the benefit of Low Impact Development (LID) practices in reducing peak runoff and runoff volume, and at comparing LID practices to conventional stormwater solutions. The hydrologic-hydraulic model used was the Storm Water Management Model (SWMM5.1). The LID practices modeled were: (i) Green roofs; and (ii) Permeable pavements. Each LID was tested independently and compared to two different conventional practices, i.e., sewer enlargement and detention pond design. Results showed that for small storm events LID practices are comparable to conventional measures, in reducing flooding. Overall, smaller storms should be included in the design process. Introduction Urban drainage networks contribute to human well-being by preventing flooding, and so are considered as essential infrastructure [1][2][3][4].As urban areas become larger, denser and more impervious, usually expanding faster than their storm drainage systems, floods become more frequent and more devastating than in the past [5][6][7][8].During the last years, researchers have posed their concerns and criticism regarding the limited capacity and the flexibility of conventional drainage solutions to flooding (i.e., sewer enlargement; detention pond design etc.), especially for their ability to cope with climate variability and urbanization [9][10][11].New technologies (i.e., sustainable drainage solutions) have emerged that take into account other aspects of urban stormwater management, such as [1,2,[12][13][14]: (i) runoff quality; (ii) visual amenity; (iii) recreational value; and (iv) ecological protection.Nowadays sustainable drainage solutions are widely recommended and applied in different parts of the world [12,[15][16][17][18][19][20][21].They are called Low Impact Development (LID) in the United States and Sustainable Urban Drainage solutions (SUDs) in Europe.As LID practices present high interest in the last years, researchers have focused on evaluating LID hydrological performance and hydraulic behavior on flooding [16][17][18][19][20].However, little is known regarding LID ability in reducing hydrologic impacts at the watershed scale [12,13,15]. The present work aims to quantify the benefit of LID practices in reducing peak runoff and runoff volume, and to compare their efficiency in reducing flooding with those of two traditional stormwater practices [22].LID and conventional practices were simulated for 2, 5, 10, and 100-year synthetic design storms.The LID practices tested included Green Roofs (GR) and Permeable Pavements (PP), while the conventional measures included Sewer Enlargement (SE), and Detention Ponds (DP) placed parallel to the drainage network. Study Site The study site is located in Athens, Greece, and covers an area of 0.89 km 2 (89 ha).Most of the drainage area is densely developed (Figure 1).The portion of the combined drainage network corresponding to the catchment consisted of 79 combined pipes and 112 junctions with a total length of 5.34 km.The combined drainage system comprised either egg-shaped sewers with depths ranging from 0.9 m to 2.4 m, or pipes with diameters ranging from 0.3 m to 0.6 m.A full description of the study area is given by Kourtis et al. [22]. SWMM Model The software used in modelling conventional measures and LID practices was SWMM5.1 of the U.S. 
Environmental Protection Agency.SWMM is a fully dynamic rainfall-runoff model used for the simulation of water quantity, quality and LID controls in urban areas [22][23][24][25][26][27].Infiltration computations for the entire study area were based on the Curve Number method.In hydraulic calculations, the Dynamic Wave model was used with time step fixed at 0.1 s.SWMM software parameters and their variation ranges are presented in Table 1.Subcatchment information, such as area, slope, percent imperviousness and curve number values, were estimated based on the Digital Elevation Model (DEM) and the land uses of the study area using GIS techniques, while typical values from the literature were used for [26,[28][29][30]: (i) width; (ii) Manning's roughness coefficient for overland flow in pervious and impervious surfaces; (iii) depression storage for pervious and imperious areas; and (iv) Manning's roughness coefficient for storm sewers.As the study site is ungauged, the following IDF curves were used in simulating existing condition, conventional measures and LID practices [31]: where i is the rainfall intensity (mm/h), T is the return period (years), and t is the rainfall duration (h). LID Practices and Conventional Measures The main objective both of LID and conventional measures was to improve drainage conditions, so that the combined drainage network would be able to handle the runoff of return periods of up to ten years without surface flooding.Conventional measures and LID practices were simulated and compared with the existing condition using EPA SWMM5.1 software. GR mainly retain part of the rainfall but also lengthen flow paths, thus reducing runoff from impervious surfaces.PP can be used to replace impervious concrete or asphalt pavements covering sidewalks, parking lots, secondary roads etc.The main drawback is that only a few studies in the recent literature have compared observed flow from LID structures to simulated flow, and as a result, parameters need to be estimated in a relative coarse manner [33][34][35][36][37].In the present study, there are no flow measurements available in the study area, in order for the calibration-validation procedure to take place, and as a result, parameters are estimated from previous studies [17,18,[37][38][39][40]. Parameters for the two LID practices simulated in the present study are presented in Table 2.The GR scenario converted the commercial and residential rooftops into green roofs, while under the PP scenario sidewalks, parking places and secondary roads were converted to permeable surfaces.For determining the available space of the study area to be converted, for both scenarios, the ArcGIS software was utilized.For each subcatchment, total rooftop area, sidewalk area, parking lot area and road area were calculated using aerial imagery.In total, the area converted to green roofs was calculated at about 0.23 km 2 , covering 35% of the impervious area and 31% of the total study area.Finally, the area that must be replaced in order to become permeable was estimated at about 0.19 km 2 , covering 18% of the study area and 20% of the impervious area.The details about the setup of the model parameters in modelling the conventional measures (SE and DP) are described by Kourtis et al. 
[22].For the SE scenario, a total of 60 sewers were selected for enlargement with the diameter of the new pipes ranging from 0.4 m to 1.2 m, while the height of the sewers ranged from 1.05 to 2.4 m and their width ranged from 1.2 m to 3.0 m.Finally, 29 detention ponds were designed with maximum depth up to 3.0 m and maximum volume capacity ranging up to 1042 m 3 . Results and Discussion Even for duration of 1-h and return periods of 2 and 5 years, flooding occurs at two nodes of the system.The total flooding volume was calculated at 28 m 3 and 355 m 3 for 2-, and 5-year return periods, respectively.All flooding mitigation measures examined herein increased the drainage system capacity, and as a result there was not surface flooding at any node of the system.SE and DP upgraded the system capacity, while on the other hand GR and PP reduced the runoff volume from the subcatchments.Figure 2a,b present the hydrographs at the outlet of the study area for all the scenarios simulated.For SE, a slight increase for the 2-year flood and a significant increase for the 5-year flood of the hydrograph peak are shown, resulting from the additional water entering the storm sewer, which otherwise would end on the street surface.DP, GR and PP give comparable peaks at the exit but the flood volumes for both GR and PP are significantly reduced. Hydrographs for the existing condition, and all scenarios tested, for duration of 1-h and return period of 10-years, are displayed in Figure 2c. Figure 2c shows that both the peak flow and the total volume are reduced at the outlet due to the implementation of the DP, the GR and the PP.The reduction was in the range of 13.4-28.2%for the peak flow, and 24.5-29% for the total runoff volume.However, in case of DP, the total volume increased about 54%, since additional water, which otherwise would flood the streets, was temporarily stored in the DPs, and then was released back and slowly drained through the storm sewers.GR, PP and DP decreased the peak of the flood hydrograph and the occurrence time of peaks was slightly affected.On the other hand, the flow peak and the total volume increased in case of SE by about 49.7% and by about 15.8%, respectively. 
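Since the infiltration computations in the model are based on the Curve Number method, a small worked example may help. The sketch below uses the standard SCS-CN runoff-depth relation (with the usual initial abstraction Ia = 0.2S) and illustrative CN and rainfall values; these are not the calibrated parameters of the Athens catchment.

```python
def scs_runoff_depth(p_mm: float, cn: float) -> float:
    """Direct runoff depth Q (mm) from rainfall depth P (mm) via the SCS-CN method."""
    s = 25400.0 / cn - 254.0        # potential maximum retention S (mm)
    ia = 0.2 * s                    # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Illustrative comparison: a highly impervious surface vs. a permeable pavement
storm_mm = 30.0                      # hypothetical 1-h storm depth
for label, cn in [("impervious (CN=95)", 95), ("permeable pavement (CN=75)", 75)]:
    q = scs_runoff_depth(storm_mm, cn)
    print(f"{label}: runoff = {q:.1f} mm of the {storm_mm:.0f} mm storm")
```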
Finally, Figure 2d presents the results for the existing condition and after the implementation of flooding mitigation measures for a large storm event (i.e., return period of 100 years and duration of 1 h). The total surface flooding volume, before the implementation of mitigation measures, was calculated at 12,589 m3, and 44% of the nodes of the combined drainage network flooded. The SE scenario upgraded the system drainage capacity by 79.6%, but even in this case there was flooding in the area. The volume of surface flooding was computed at 4640 m3 after SE, a reduction of about 63%. The DP mitigation scenario upgraded the system capacity by about 13% and the volume of flooding was reduced by 100%. Finally, GR and PP reduced the peak flow at the outlet of the study area by about 4%, while the flooding volume, for the whole study area, was reduced by about 70%. We also have to mention that, for all return periods, the SE scenario caused increases in flow peaks at the outlet of the drainage network, which may negatively affect conditions at the receiving river. In order for engineers, practitioners and stakeholders to be able to effectively manage highly urbanized basins, and moreover achieve sustainability goals, more frequent storms must be incorporated in the design process, as they may have a significant impact on water quantity and quality, especially in urban areas where combined drainage networks are still in use. The analysis conducted herein indicated that LID practices, such as GR and PP, can operate as effectively as conventional measures, especially for small storm events, while traditional approaches, such as sewer enlargement and detention ponds, are more effective in managing runoff from storm events with lower probability of occurrence. LID and conventional practices must be examined in combination in order to achieve both flood mitigation and sustainability goals. The methodology proposed herein is relatively easy to transfer and apply in cities with different characteristics. However, one must be careful in choosing the parameters related to the drainage system and the parameters of the LID and conventional solutions explored. Moreover, in the absence of rainfall-runoff measurements, the hydrologist could transfer and use parameters from adjacent calibrated-validated areas with similar characteristics or use a detailed hydrodynamic 1D-2D model for calibration.
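The scenario comparisons above were produced with EPA SWMM 5.1 directly; for readers who want to script similar comparisons, the pyswmm Python wrapper is one option. The sketch below is illustrative only: the input file names and the outfall node ID are hypothetical, not those of the Athens model.

```python
from pyswmm import Simulation, Nodes

def peak_outfall_inflow(inp_file: str, outfall_id: str) -> float:
    """Run a SWMM model and return the peak total inflow at the chosen outfall node."""
    peak = 0.0
    with Simulation(inp_file) as sim:
        outfall = Nodes(sim)[outfall_id]
        for _ in sim:                      # step through the simulation
            peak = max(peak, outfall.total_inflow)
    return peak

# Hypothetical scenario files for the existing network and two mitigation options
for scenario in ["existing.inp", "green_roofs.inp", "permeable_pavement.inp"]:
    print(scenario, "peak outfall inflow:", peak_outfall_inflow(scenario, "OUT1"))
```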
Conclusions and Recommendations for Future Research The present paper analyzed the impacts of LID designs on urban flooding in a highly urbanized catchment in Athens, Greece, where two LID practices were modeled and compared with conventional drainage solutions (i.e., sewer enlargement and detention pond design) for stormwater management. The main objective was to improve understanding of how the adoption of LID practices, and in particular green roofs and permeable surfaces, in an already urbanized basin might impact runoff and, therefore, flood risk. It is essential to understand the main conditions under which sustainable stormwater solutions (i.e., LID) could mitigate flooding problems. Mitigation measures must be studied in combination, in order to address runoff volumes and discharges in urbanized basins, and so these modeling scenarios are primarily meant for providing bounds on LID practices as flooding mitigation measures. Results demonstrated that LID practices are highly effective for small storm events. However, as the probability of the rainfall event decreases, the LID solutions tend to become less effective in reducing runoff volume and peak flow. Green roofs and permeable surfaces delay and attenuate stormwater runoff at the source, and so they reduce stormwater volume discharges and flooding phenomena in urban regions. The two LID practices examined herein demonstrated that LID practices are more effective for lower-intensity storm events; however, their effect tends to diminish as the magnitude of the rainfall event increases. Overall, it is proposed that smaller storms should be included in the design process in order for stakeholders to be able to evaluate sustainability. Finally, SWMM was found to be a very useful tool for modelling and testing the ability of conventional and LID practices to reduce flooding in a highly urbanized basin. Future research is needed regarding the modelling parameters of LID practices and the optimum combinations between sustainable urban drainage solutions and conventional measures. Moreover, cost-benefit studies must be included in the design process in order to determine the feasibility of conventional and LID solutions regarding the achievement of sustainability goals. Extension of such analyses to larger areas could provide clearer insight into the impact of both LIDs and conventional measures on the urban drainage network. All the methods adopted and implemented in the present work are independent and could easily be applied to urban basins with different sizes and characteristics in order to determine the feasibility of sustainable drainage solutions.
Figure 1. Aerial view of the study area (Google Earth).
Figure 2. Hydrographs of 1-h duration storm at the outlet of the study area for all the scenarios tested, and for return periods of: (a) 2 years; (b) 5 years; (c) 10 years; and (d) 100 years.
Table 1. Key Model Parameters.
Table 2. Parameters of LID practices.
3,082.8
2018-08-03T00:00:00.000
[ "Engineering" ]
Proteomic Analysis of Stage-II Breast Cancer from Formalin-Fixed Paraffin-Embedded Tissues Breast cancer is the most frequently occurring cancer among women worldwide. Identification of breast cancer at an early stage is the key challenge in cancer control and prevention procedures. Although gene expression profiling helps to understand the molecular mechanisms of diseases or disorders in living systems, the gene expression pattern alone is not sufficient to predict the exact mechanisms. Current proteomics tools hold great promise for the analysis of cancerous conditions. Hence, the generation of differential protein expression profiles has been optimized for breast cancer and normal tissue samples in our organization. Normal and tumor tissues were collected from 20 people at a local hospital. Proteins from the diseased and normal tissues were investigated by 2D gel electrophoresis and MALDI-TOF-MS. The peptide mass fingerprint data were fed into public-domain search tools such as Mascot, MS-Fit, and PeptIdent against the Swiss-Prot protein database, and the proteins of interest were identified. Some of the differentially expressed proteins identified were human annexin, glutathione S-transferase, vimentin, enolase-1, dihydrolipoamide dehydrogenase, glutamate dehydrogenase, Cyclin A1, hormone-sensitive lipase, beta catenin, and so forth. The identification of these proteins is a fundamental step toward developing molecular markers for the diagnosis of human breast cancer, as well as toward building a new proteomic database for future research. Introduction Breast cancer is a common and lethal malignancy among women in all countries. Early detection of breast cancer facilitates diagnosis and treatment prior to metastasis [1]. Despite remarkable developments in new drugs and therapies for breast cancer during previous decades, no effective treatment options are available for patients with invasive and metastatic breast cancer. At this stage, patients respond less to cancer therapy owing to the recurrent nature of the disease [2]. The incidence of breast cancer is increasing in India, and it is the second most common cancer in rural Indian females [3]. Furthermore, Indian women with breast cancer do not get medical assistance at an early stage due to financial constraints, illiteracy, and lack of awareness. It is hardly surprising that a large proportion of breast cancer patients in India do not get proper treatment at different stages of tumor progression [4]. Developed countries, with a small share of the global population, account for approximately 50% of breast cancers diagnosed worldwide [5]. Far eastern and southeast Asian countries report the lowest breast cancer incidence [4]. Developing countries in Asia account for a steadily increasing breast cancer population, and because of their health care burdens they are expected to have the highest rate of breast cancer incidence in the coming decades. In particular, over 100,000 breast cancer cases are diagnosed every year in India [6,7]. Mammography plays a significant role in diagnosing breast cancer; however, tumors smaller than 0.5 cm remain undetectable by this modality. The survival rate of breast cancer patients is largely associated with the tumor stage. Tumors identified at early stage-I have a 98% 5-year survival rate; stage-II tumors have 85%, stage-III tumors 60%, and stage-IV around 20%.
In general, breast cancer has 5-year survival rate of 80% approximately with 207,090 cases and 39,840 deaths happening in women in America in the year of 2010. In 2015, approximately 231840 cases had breast cancer; among them 40290 people are going to die in the United States [8]. Breast cancer detection at early stage has treatment such as surgical resection with removal of axillary lymph nodes, radiation therapy, chemotherapy [3], and hormone therapy [9]. Though there is some notable improvement in treatment of breast cancer, the absence of biomarkers in serum/plasma causes delays in early identification of breast cancer [10]. This information signifies the requirement of new techniques in early diagnosis of breast cancer researches to the society. Stage-I patients have nine times more likelihood of staying healthy for ten years as compared with advanced periods [11]. This cancer becomes complex through invasive stage due to many changes in molecular level; it stimulates cell proliferation and genetic instability. This heterogeneity creates different subgroups that cause different clinical and therapeutical responses. Hence, it is necessary to determine the molecular structure including protein markers which are responsible for the diseases. These increase the rate of advanced stage analysis of the disease, therapeutical response, and the relapses after the therapy and the differences that are atypical to disease and person [12]. Hence, it is very urgent to make novel diagnostic methods for early stage detection of this cancer, which provide a new way to reduce this cancer related mortality [13,14]. The sequencing of human genome has given a new way for the tremendous revolution in biology and medical field today [15]. The number of emerging and powerful technologies within functional genomics and proteomics combined with bioinformatics tools accelerates the application of basic discoveries in clinical practice [16]. Recent improvement in the field of molecular genetics, particularly proteomics, paved the way for improving the drug development and clinical trial procedure [17]. In addition, proteomics provides tools for investigating abnormal molecular changes in cancer tissues and it gives new insights into developing new reagents of understanding all the stages of cancer conditions [18]. Furthermore, proteomics also provides tools for drug discovery [19]. Although proteomics studies have been initiated in the area of breast cancer research in other world populations, limited studies are reported till date in Indian population. Keeping this in view, we at Dr. Reddy's Laboratories initiated breast cancer proteomics to understand the subtle changes in protein patterns in cancer patients using the proteomics technology. The ultimate aim of the project is to identify breast cancer biomarkers. Materials and Methods Breast cancer tissues were received from patients undergoing mastectomy at Mehdi Nawaz Jung (MNJ) cancer hospital, Hyderabad, India. All patients were found to be serologically negative for HBS Ag and HIV. The permission was gotten from all patients (20) in the age group of 24-60 years. All selected patients were suffering from infiltrating ductal cell carcinoma at the stage-II progression. Small pieces of samples were sliced and used for histology analysis [20]. Histopathological analysis of all the samples was performed at L. V. Prasad Eye Institute, Hyderabad, India, and the cancerous tissue was separated from normal tissue. Sample Preparation. 
Histopathological separated normal and tumor tissues were homogenized in Bio-Rad's sample extraction reagent 2 buffers. The protein was estimated by Bio-Rad's RCDC method. Equal amount of protein was dissolved in 300 L Bio-Rad's rehydration buffer and loaded into 17 cm IPG strips (pH ranges from 4 to 7). Then the strips were focused using the PROTEAN IEF (Isoelectric Focusing) cell kit. Image Analysis. The gels were scanned on Bio-Rad's G800 densitometry scanner. The images were analyzed for differential expression between normal and tumor gels by Bio-Rad's PDQuest software. In-Gel Digestion/Mass Spectrometry (MALDI-TOF) Analysis. The excised silver stained in-gel protein band was chopped into small pieces and transferred into Eppendorf tubes. A piece of protein-free acrylamide gel was taken in parallel as a negative control. Mass spectrometry (MALDI-TOF) analysis was performed [21]. 2D Gel Analysis. The proteins of (400 g) histopathologically segregated normal and tumor breast tissue lysate were separated using 2D gel electrophoresis (17 × 20 cm) as represented in Figure 1. The images results exhibited many differentially expressed proteins in the breast cancer sample as compared with control tissues. The proteins with more than twofold average quantitative expressions between cancer and control tissues were considered as statistically regulated proteins. Among these, 29 spots and 6 spots were upregulated and downregulated in cancer condition, respectively, as compared with control tissues. Differential proteins were chosen for peptide mass fingerprinting analysis using MALDI-TOF. were cut from the gels after image analysis (Figure 2 with SSP numbers). The differentially expressed proteins were identified based on the peptide mass fingerprints in cancer tissues as well as in control tissues ( Table 1). The identified proteins were categorized with their cellular component. Maximum numbers of proteins were found in the cytoplasm followed by the membrane, nucleus, mitochondria, and others (Figure 3(a)). Furthermore, these proteins were classified again based on the biological functions (Figure 3(b)). Discussion The rate of incidence and problems associated with cancer has been increasing over the last fifteen years for both men and women. Particularly in developed areas, the uterus and cervical cancers had the highest incidence in the last fifteen years among women populations. However, nowadays they are replaced by breast cancer. 
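To illustrate the two-fold differential-expression criterion described above, the sketch below flags up- and down-regulated spots from paired normal/tumor spot intensities. The spot identifiers and intensity values are hypothetical placeholders; in the study this step was performed with PDQuest on the scanned gels, so this is only a schematic restatement of the selection rule.

```python
def classify_spots(spot_intensities, fold_threshold=2.0):
    """Flag spots whose tumor/normal intensity ratio exceeds the fold threshold.

    spot_intensities: dict mapping SSP spot number -> (normal_intensity, tumor_intensity)
    Returns (upregulated, downregulated) lists of spot numbers.
    """
    up, down = [], []
    for ssp, (normal, tumor) in spot_intensities.items():
        if normal <= 0 or tumor <= 0:
            continue                      # skip spots missing in either gel
        ratio = tumor / normal
        if ratio >= fold_threshold:
            up.append(ssp)
        elif ratio <= 1.0 / fold_threshold:
            down.append(ssp)
    return up, down

# Hypothetical densitometry intensities (arbitrary units), for illustration only.
spots = {6507: (1200.0, 3100.0),   # a spot elevated in tumor tissue
         8704: (800.0, 2050.0),
         7302: (2400.0, 900.0)}    # a spot reduced in tumor tissue
up, down = classify_spots(spots)
print("Upregulated spots:", up, "Downregulated spots:", down)
```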
Table 1 (identified proteins, listed by SSP spot number): 310 P50336-00-01-00/splice isoform displayed; (4) 402 Human GST; (5) 404 Q9Y5H9 splice isoform; (6) 613 MUTS2 protein; (7) 704 Vimentin; (8) 1201 Q9Y276 splice isoform displayed; (9) 1309 Dehydroquinate synthase; (10) 1507 Q9NRC8 splice isoform displayed; (11) 1709 O32720 anti-sigma-F factor antagonist; (12) 1710 P51587 human splice isoform II displayed; (13) 2712 Kinesin-like protein; (14) 3303 AAA52735 immunoglobulin alpha-1 chain fragment; (15) 3410 AX879017 NID Homo sapiens; (16) 4406 Q860R0 MHC class Ib antigen; (17) 4611 Q13085 human acetyl-CoA carboxylase; (18) 4613 Splice isoform displayed; (19) 4613 O94805 BRG1-associated factor; (20) 4701 Epithelial-cadherin precursor; (21) 4911 CUL5 protein; (22) 6216 Q92817 envoplakin; (23) 6507 Annexin A1 (annexin I); (24) 6608 Glutamate dehydrogenase (GDH); (25) 7302 Q15828 cystatin M precursor (tumor suppressor); (26) 7408 Q9VPI8 DNA-binding transcription factor; (27) 8317 Q9Y6N7 human splice isoform I; (28) 8510 Q05469 hormone-sensitive lipase; (29) 8703 Beta-catenin; (30) 8704 Enolase-1; (31) 8705 P78396 Cyclin A1; (32) 8906 Dihydrolipoamide dehydrogenase, mitochondrial precursor; (33) 9506 GTP-binding protein; (34) 9601 Lipid phosphate phosphohydrolase 3; (35) 9901 Cathepsin L2 precursor (cathepsin V). Classification of molecular events creates the main challenge in human breast cancer research. Achieving this goal is hindered by the practical difficulty of applying improved methods to the microscopic premalignant and preinvasive stages of cancer [22]. These factors are supported by several lines of evidence proposing that environmental pollution is a well-known etiological factor for breast cancers [23]. The present study focuses on differential profiling of the breast cancer proteome by comparing proteins, using two-dimensional gel electrophoresis, of both normal breast tissue and the tumor tissue. Some of the proteins identified play a crucial role in disease progression, and some are reported in the literature as tumor suppressor proteins. Human annexin significantly regulates tumor progression by stimulating cell proliferation and differentiation [24]. The expression of annexin in tumor tissue observed here is consistent with its reported role in tumor progression and in the clinical features of breast cancer [25]. Similarly, human glutathione S-transferase is also known to be expressed in a variety of tumor tissues, breast cancer among them. This enzyme carries out the detoxification of cancer-promoting reactive metabolites, and genetic polymorphisms of the glutathione S-transferase (GST) enzymes T1, M1, P1, and A1 in Thai breast cancer patients have been related to the progression of breast cancer. Therefore, the increased expression of GST in this study supports its role in tumor prognosis and agrees with previous reports [26,27]. Vimentin is another protein identified in this study. Vimentin is commonly more highly expressed in myoepithelial cells, with low-molecular-weight keratins as the trademark of glandular breast cells [28,29]. The expression of vimentin in this study correlates with its possible role in cancer invasiveness and therefore supports the hypothesis that vimentin in breast cancer could derive from breast progenitor cells with bilinear differentiation potential [30].
Enolase-1 is known to be a multifunctional enzyme and has a main role in glycolysis and it plays a part in different processes like growth control, hypoxia tolerance, and allergic responses [31]. Enolase-1 expressions in glycolytic process at hypoxia favor the tumor cells to solve their energy requirements. Consequently enolase-1 increases the survival rate, proliferation, and the invasive and metastatic ability of the tumor cells [32]. Hence, the expression of enolase-1 in our study validates its role in breast cancer cell energy requirements. Furthermore, downregulation of enolase-1 improves the cellular sensitivity to the radiation therapy and it might be the target for drug development for breast cancer [33]. Hormone sensitive lipase (HSL) plays potential role in lipogenesis, degradation, and its catabolism [34]. The expression of HSL in breast tissue proves its differential expression in breast cancer and its survival rate [35]. Conversely -catenin is involved in transcriptional regulation of Wnt signaling cascade [36]. In our study, this protein's expression in breast tissue is related to invasive lobular breast carcinogenesis. The activation of -catenin/Wnt signaling pathway is associated with low clinical output and is unlikely to be regulated by -catenin encoding gene mutations in breast cancer [37]. On the other hand, cell cycle regulatory pathways play an important role in estrogen related breast cancer cell growth, in which Cyclin A1 plays important role in tumor development. Alterations in vascular endothelial growth factor related cellular pathways resulted in high expression of Cyclin A1 in primary and metastatic breast cancer specimens [38]. Hence, our finding indicates that the expression of Cyclin A1 in tumor sample correlates with the involvement of various cell signaling pathways like cell cycle regulators and estrogen receptor signaling in breast cancer progression. Likewise other proteins listed in Table 1 have important role in tumor progression. Conclusions Analyzing cancer using proteomic approach shed significant light on the underlying mechanism that leads to cancer development. The results discussed in the present study relate the expressed proteins involvement in tumor and cancer development related cellular pathways like cell cycle, angiogenesis, and metastasis. Based on this research outcome we propose that differentially expressed proteins in cancerous condition could be fundamental steps for developing the markers and proteomic database for breast cancer diagnosis.
3,164
2016-03-24T00:00:00.000
[ "Biology" ]
Cities and villages in the religious conflict circle: Socio-demographic factors of communal and sectarian conflict in West Java, Indonesia Java. It can be used as an illustration for other regions in Indonesia and Southeast Asia. Introduction status, power and resources involving religious issues or issues framed in religious slogans or expressions (Ali Fauzi, Alam & Panggabean 2009:7). It is related to sociodemographic factors in which differences in urban and rural sociological characteristics cannot be separated from economic determination factors. The socio-demographic conditions greatly influence the tendency of different forms of conflict in each region. The urban social conditions (urban), which are heterogeneous, multi-religious and ethnic, for example, have an impact on the form of communal conflicts amongst religious groups between Muslims and Christians such as in the Ambon and Poso cases in 2000-2002(Dandirwalu & Rehy 2020Iqbal 2015:237). On the other hand, the homogeneous rural social condition (rural) affects the form of internal sectarian conflict amongst religious communities or fellow Muslims, as seen in the case of Ahmadiyah in Cikeusik in 2011 (Scherpen 2011). This study is important to show that various religious conflicts in different regions in Indonesia are closely related to the sociodemographic conditions of each region. Based on Toennis' category, strong individuality amid high immigrants causes urban communities (gesellchaft) to have high levels of contact with interfaith adherents. It is in contrast to rural communities (gemeinschaft), which have intimate family relationships. The unity of morality and religion makes contact with various religions less prominent than internal religious adherents (Stolley 2005:169;Toennies 1963). So, the tendency of the conflict in each case is different. The socio-demographic conditions of religious adherents influence it. Thus, knowledge of differences in sociodemographic conditions in each region is very important because it will determine the form, causes and conflicts handling in each region. This study focuses on the socio-demographic aspect as a point of view in analysing religious conflict in West Java. The 11 districts or cities which are the locus of this research include: Bekasi Regency, Bekasi City, Bogor Regency, Bogor City, Cianjur Regency, Bandung regency, Bandung City, Cimahi City, Garut Regency, Tasikmalaya Regency and Kuningan Regency. These 11 regions have a high level of religious conflict. A series of observations and interviews for data collection were carried out, especially with the Forum Kerukunan Umat Beragama (Religious Harmony Forum) at the provincial and regency/city levels in Indonesia. The data analysis used is a qualitative approach through data display, data reduction, interpretation and conclusion. Several previous studies did not consider socio-demographic factors in analysing religious conflict in West Java. Marshall's study, for example, asked why this area became home to some of the greatest acts of intolerance in Indonesia. Does it reflect a different cultural pattern from the Sundanese people who mostly live there or show the tensions produced by modernisation that have created many conflicts in the world? (Marshall 2018). Other previous studies (Sulistio 2018) on intolerance acts in church construction cases in Bekasi also did not consider this socio-demographic factor. 
So did the scholars examining attacks on Ahmadiyah, Shia, Gafatar and other minority groups in West Java (Burhani 2020;Makin 2019;Nurdin & Kharlie 2019;Zulkifli 2009). Religious conflict in West Java Lewis Coser referred to conflict as 'a struggle over values and claims to secure status, power and resources, a struggle in which the main aims of opponents are to neutralize, injure or eliminate rivals' (Coser 1956). For the purpose this study, religious conflict is defined as a feud concerning status, power and resources involving religious issues or issues framed in religious slogans or expressions (Ali Fauzi et al. 2009:7). It is related to socio-demographic factors in which differences in urban and rural sociological characteristics are strongly influenced by economic determination factors. The city as an economic and trade centre affects the diversity of its people so that it has an impact on forms of communal conflict amongst religious people. In contrast, the rural areas tend to be homogeneous and affect the sectarian internal conflict amongst the religious people or fellow Muslims. Based on these above mentioned reports, West Java Province ranks the highest with the highest number of violations, becoming the most intolerant area in Indonesia. The Setara Institute data mention that out of the 62 cases in Indonesia, 13 cases of violations were recorded in West Java. The rest happened in a number of other provinces. Whilst the Wahid Institute data show that out of the 190 cases of violations, 46 cases were from West Java (The Wahid Institute 2015). The rest are spread over in other provinces, such as Aceh (36 cases), DKI Jakarta (23), Yogyakarta (10), East Java (9), Lampung (8), Banten (7) and Central Java (7). Some cases of such violations occurred in the attack on the Ahmadiyah community in Manis Lor, Tasikmalaya, Garut, Cianjur and others. The question that arises is why does West Java province ranks the highest in cases of violations of religious freedom and belief in Indonesia, whereas inhabited by Sundanese it is known as one of the areas rich in traditional values and local wisdom in Indonesia. Many Sociologically, the increase in cases of such violations is not only related to the religious doctrine but also the situation of social change in West Java. The shift of culture and social structure from village to city and from agrarian to industrial, impacts the lives of its increasingly heterogeneous people. As an area adjacent to the nation's capital, DKI Jakarta, the presence of migrants from various regions in Indonesia and abroad is inevitable. As an ethnically and religiously heterogeneous region, West Java is known for a variety of potential conflicts. Uncontrolled displacement has been one of the triggering factors for internal and interfaith conflicts. The high level of migration germinates competition and friction that leads to the dominance of immigrant groups in the economic field. It creates an inequal social relationship between migrants and local residents, thus triggering potential social conflicts (Iqbal 2015:237). There have been a number of cases of social conflict in West Java over the last two decades since the reform era in 1998. A number of cases of socio-religious conflicts, either internal or interfaith, are in the case of houses of worship, religious understanding, blasphemy, conflicts of interest of religious leaders and demonstrations of religious attributes (Ali 2013:248-249). 
Cases of houses of worship appear evenly in almost all areas of the city/regency in West Java in the form of the use of residences or businesses as houses of worship and rejection of citizens to other religious houses of worship and the issue of Christianisation. Certain religious issues develop in the internal environment of mainstream religion. Blasphemy occurs in the case of banning prayers, wearing of headscarves, destruction of statues of Mother Mary, religious insults and others. Conflicts of interest of religious leaders occur amongst Muslims and Christians resulting in conflicts between both supporters. Whilst religious demonstrations involving the masses along with violent acts committed against entertainment venues, support for the enforcement of sharia local regulations, political competition in elections and others. Table 1 details some of the areas prone to religious conflict in West Java over the last two decades based on data from socio-religious research institutes. Table 1 shows a number of regencies/cities in West Java as the place where cases of religious conflict occur. There are two different locus tendencies, namely regencies and cities. The next section will be directed at the analysis of the spread of religious conflicts in West Java based on the differences between the two locales. The explanation is useful to clearly map the differences in conflict characteristics in the two areas so that a better understanding is obtained. Town and village characteristics in religious conflict There are two main areas in this article, namely regencies and cities in West Java. Both have different potential for religious conflict. Geographically, cities have a higher level of population heterogeneity than regencies. Cities tend to be more plural and heterogeneous ethnically and in custom. The attitude of urban people tends to be individualistic, rational, thus leading to secularity. Homogeneous religious ties are not particularly prominent. The position of religious leaders is also not too dominant. Therefore, in urban areas, religious homogeneity tends to decrease, and heterogeneity and religious freedom tend to be high. Religious adherents and types of worship places are also very diverse. Besides, the lifestyle and social system of the urban community have also undergone a shift. As a result of the high level of population density, there is a change in land use in urban areas that has resulted in smaller availability of land. Socio-anthropologically, urban areas in West Java have been transformed into cultural faults, where socio-religious groups from various religions meet because of their high level of urbanisation. In these areas, there are diverse elements of social class (strata), ethnicity, religion, culture, customs, language and gender, and people tend to live independently without mixing each other (Nasikun 1986:31). It is in contrast with rural areas, where the level of community homogeneity is very dominant. In rural areas, indigenous people are the dominant inhabitants because migration from (Burhani 2014). In addition to Ahmadiyah cases, religious conflicts also occurred between the Anti-Shia National Alliance, that represents Sunni Islam, and the followers of Shia Islam in Bogor, Bandung and several other areas in West Java (Syarif, Zulkarnain & Sofjan 2017). Conflicts that occur internally amongst a group of fellow Muslims or Ahmadiyah and Shia groups generally occur in rural areas which are predominately inhabited by homogeneous religious adherents. 
Internal religious conflicts in rural areas are different from religious conflicts that occur in urban areas. Geographically, the cities have a higher level of population heterogeneity compared to regencies. Urban areas tend to be more ethnically and religiously plural. Therefore, the prominent religious conflicts in urban areas are communal conflicts between religious communities such as in the case of the construction of the HKBP Filadelfia house of worship in Bekasi and Buddhist and Hindu houses of worship in 2019. Therefore, the socio-demographic factors of the village community and cities in West Java are one of the important factors that distinguish the characteristics of religious conflict in various regions in West Java. Communal and sectarian issues in villages and cities Generally, the characters of religious conflict in West Java cannot be separated from the issue of religious identity that is framed in religious slogans or expressions. The religious issues are indisputably related to the teachings or doctrines of a religion. Meanwhile, the issue framed in religious slogans or expressions are more general issues that are regarded to have a connection with their religious teachings or doctrines. Many religious issues have emerged during the various religious conflicts that have occurred in West Java, for example, moral, sectarian, communal, terrorist, politicoreligious and the mystical religious subculture issues such as witchcraft, divination and so on (Ali Fauzi et al. 2009:9-10). Out of these religious issues, sectarian and communal issues are the two most prominent issues in the 11 regencies and cities in West Java. Table 2). As previously explained, the characteristics of urban areas are high urbanisation and social, economic, cultural and religious heterogeneity. Religious diversity in urban areas occurs along with the heterogeneity of the population itself. 4. The cultural patterns and behaviour of newcomers vary. 5. The role of local community leaders, including clerics and other religious groups, still dominates the behaviour and attitudes of the community. 5. The community is more open, rational and high tolerance. -6. They are the economic centres and centres of trade and services. -7. The community is more focused on economic and business activities. Source : Nasikun, 1986, Sistem Sosial Indonesia, Rajawali Press, Jakarta Therefore, it is justifiable that it also affects the religious diversity and the intensity of the construction of worship places such as churches. For instance, the case of communal conflicts between Muslims and Christians related to the construction of churches such as GKI Yasmin Bogor. The Muslims who opposed the construction of the church questioned the permit of construction, the use of public facilities for worship places, the residents' protests and the revocation of the permit for the worship place. 
Religious conflicts in urban areas related to the construction of worship places continue to occur even though the Indonesian government has established SKB No.1/Ber/ MDN-MAG/1969 concerning the implementation of government apparatus duties in ensuring order and smooth implementation of religious development and worship by its adherents and the Joint Regulation of the Minister of Religion and the Minister of Home Affairs Number 8 of 2006 and Number 9 of 2006 concerning Guidelines for Implementation of Tasks Regional Heads/Deputy Regional Heads in maintaining Religious Harmony, Empowering Religious Harmony Forums and establishing houses of worship (Rumadi 2007:10-11). This is partly because the regulations have multiple interpretations regarding who the local government is, the government officials who are empowered to do so, and the religious organisations and the local ulama or clergy (Kustini 2009:2). Thus, the regulations made it more difficult for minorities to obtain rightful permission to build places of worship. The communal religious conflicts in urban areas are different from the internal sectarian conflicts amongst Muslims, which generally occur in rural areas. The villages are characterised by the dominance of the community pattern; the homogeneity of religion, customs and traditions that become the norm; highly respected community leaders or figures; and a tendency to reject new things. Therefore, the internal sectarian issues of Muslims have become very prominent in rural areas. It is a result of the interpretation or understanding of teachings in Islam as seen in the case of Ahmadiyah, Shia and other sects. According to Azyumardi Azra, the development of various sects or religious understandings that are different from the mainstream teachings in Islam occurs amidst socio-economic changes as a result of globalisation, which causes psychological disorientation or dislocation in the society (Azra 1999:10). Dissatisfaction with religious ideologies, movements or organisations that do not accommodate their spiritual needs also encourage the emergence of sectarianism in the Muslim community. There has also been a decline in the credibility of religious leaders amongst the people because of the influence of activities in the political world. Apart from this, one of the prominent symptoms in several movements is that their education and religious knowledge aspects are relatively minimal but they are balanced with a high religious spirit (Van Bruinessen 1999:242, 2004. It is understandable considering that low education and poverty are amongst the main problems of rural communities. and sub-district and village governments are trying to take the path of family deliberation. It is practicable because the village culture is friendly and highly respectful of religious leaders or leaders, which makes it easy for all parties in conflict to join dialogue through family friendly deliberations. The handling of these cases tends to be different from communal religious conflicts in urban areas. Resolution through power with the security apparatus approach dominates in urban areas such as communal conflicts in the case of the church construction. As there was a mass mobilisation in the form of Muslim demonstrations, legal and secure ways were selected to resolve the case. Finally, the government carried out mediation. The case occurred because of the construction of worship places in Bandung City, Cimahi City, Bekasi City and Bogor City. 
Thus, the given explanation shows that various religious conflicts in West Java cannot be separated from the sociodemographic conditions of each region. The rural sociodemographic conditions that tend to be homogeneous effect the form of religious conflict towards the internal sectarianism of Muslims. It is different from the urban socio-demographic conditions, which tend to be heterogeneous. So, communal conflicts between religions, especially Muslim-Christian dominated the religious conflicts in urban areas. This study emphasises the importance of differentiating the study of religious conflict in a society based on socio-demographic conditions, which in the discipline of sociology distinguish diametrically between rural and urban sociology (Lobao 2007;Sassen 2007). Conclusion Indonesia is a large country covering many ethnic groups with religious background. This diversity is spread in various regions, both in urban and rural areas. Therefore, observing the occurrence of various religious conflicts in Indonesia over the last two decades, it is important to consider not only the diversity of ethnic and religious backgrounds but also the place of residence of ethnic and religious groups in urban and rural areas. The research data from 11 regencies and cities in West Java shows that there are important differences in the characteristics of rural (kabupaten) and urban (kotamadya) areas that affect the different patterns of conflict that develop in both the regions. Regency areas generally have a pattern of internal religious conflict amongst Muslims, as seen in disputes between sects or religious understandings in Islam. Homogeneous rural social conditions affect forms of internal conflicts. They are different from the urban areas that are dominated by communal conflicts between Muslims and Christians, especially in the case of the construction of worship places. Heterogeneous urban social conditions tend to have an impact on the communal forms of conflict. These different conflict patterns also affect the way of handling the conflict. In rural areas, dialogue between religious leaders and local communities is the way to handle internal Muslim conflicts. In contrast, in urban areas, resolution through power line with the security approach of the security forces is the way to handle communal religious conflicts in dealing with cases of building worship places.
4,047.4
2021-01-01T00:00:00.000
[ "Economics", "Sociology" ]
Irradiation Investigation: Exploring the Molecular Gas in NGC 7293 Background: Many planetary nebulae retain significant quantities of molecular gas and dust despite their signature hostile radiation environments and energetic shocks. Photoionization and dissociation by extreme UV and (often) X-ray emission from their central stars drive the chemical processing of this material. Their well-defined geometries make planetary nebulae ideal testbeds for modeling the effects of radiation-driven heating and chemistry on molecular gas in photodissociation regions. Methods: We have carried out IRAM 30m/APEX 12m/ALMA radio studies of the Helix Nebula and its molecule-rich globules, exploiting the unique properties of the Helix to follow up our discovery of an anti-correlation between HNC/HCN line intensity ratio and central star UV Luminosity. Results: Analysis of HNC/HCN across the Helix Nebula reveals the line ratio increases with distance from the central star, and thus decreasing incident UV flux, indicative of the utility of the HNC/HCN ratio as a tracer of UV irradiation in photodissociation environments. However, modeling of the observed regions suggests HNC/HCN should decrease with greater distance, contrary to the observed trend. Conclusion: HNC/HCN acts as an effective tracer of UV irradiation of cold molecular gas. Further model studies are required. Introduction The molecular chemistries of planetary nebulae (PNe) provide ideal testbeds to explore the role that high-energy irradiation plays in photodissociation regions (PDRs). As the end-stage of ∼1-8 M stars, the ejected gas and dust of a PN leaves behind the hot core of the dying central star (CSPN), which is a copious source of far-UV and X-ray photons [1]. The irradiation of the nebular envelope then drives the chemistry observed across PNe of varying morphologies. Analysis of the molecular features in this gas provides a means to probe the evolution of the PN as it distributes the material into the interstellar medium. Two relatively abundant trace molecules, HCN and HNC, vary by several orders of magnitude across a range of PN morphologies and ages (i.e., [2][3][4]). Early studies suggest the intertwined production and destruction of HNC and HCN are highly temperature-dependant [4,5], a hypothesis that has been further supported by recent observations of a correlation between HNC/HNC and gas temperature in the Orion Bar [3]. Variation in the ratio has been observed across cold cloud core, protostellar, and protoplanetary environments in regions of 10 1 -10 2 K [6][7][8], demonstrating its potential utility as a probe of gas temperature in disparate astrophysical domains. Alternatively, however, the HNC/HCN ratio may be governed by selective photodissociation of the molecules, wherein HNC is more readily dissociated by bright UV sources to form HCN than the inverse reaction [9]. The potential value of observations of molecule-rich PNe to our understanding of the processes governing the HNC/HCN ratio has been overlooked in past works, though there is recent interest in the molecular ratio [2,10]. Driven by this recent progress, we seek a representative environment to establish whether and how UV emission from the CSPN and gas temperature affect the HNC/HCN ratio. The expansive envelope (0.46 pc radius) and intense CSPN emission of the highly evolved NGC 7293 (the Helix Nebula) provide such an environment. At a distance of 200 pc [11], the molecular envelope of the Helix covers 15-25 arcminutes on the sky. 
Its CSPN is a strong UV source (89 L , [12]), and displays point-like X-ray emission [1,13]. The Helix contains numerous dense, dusty, plasma-embedded neutral globules ∼0.15 pc from its CSPN, as well as an extended envelope of clumpy molecular gas [14,15]. When taken as a whole, the Helix presents a factor ∼20 gradient in UV flux from the inner globules to the edges of the molecular ring at ∼7.5 (Bublitz et al., in prep), and hence is the best PN in which to establish how HNC/HCN varies across an individual PN. In this proceedings paper, we present the current state of our radio molecular line observing campaign targeting the Helix Nebula and its globules, with the primary goal to gain insight into the mechanisms driving the HNC/HCN ratio. We employ radiative transfer modeling to interpret these observations so as to constrain the molecular abundances and gas physical conditions within the Helix. Single-Dish Observations: IRAM 30 m and APEX 12 m Six positions in the Helix Nebula have been observed with the Institut de Radioastronomie Millimétrique (IRAM) 30 m and Atacama Large Pathfinder Experiment (APEX) 12 m telescopes. At just 0.11-0.14 pc away from the CSPN are three dense molecular knots (Globules A, B, and C), previously the focus of IRAM 30 m 12 CO observations [15]. The Helix West position follows from a molecular line survey that observed multiple PNe [2], while the East position lies symmetrically opposite to the CSPN as a means to compare molecular and UV properties in the extended molecular envelope. The final "Helix Rim" position was derived from [10] Emission lines were fit with Gaussian functions to determine integrated line intensities. Computed HNC/HCN line ratios range from 0.46-1.11. Using emission line radial velocities from the 30 m observations and modeling of the nebular expansion (e.g., [16,17]), we estimated the deprojected distance of each position from the CSPN. With the radii and CSPN L UV ( [2], and references therein), a calculation of the incident UV flux has been made on a given region of the nebula. In the right panel of Figure 1, we present the HNC/HCN ratio for the positions in the Helix, as well as other PNe in the literature where it is possible to compute the UV flux incident on the gas sampled by those observations. In addition to the IRAM radio observations, the J = 2→1 transitions of HCN and HCO + (176-180 GHz) were observed across the same six Helix positions with the APEX 12 m telescope. These emission lines allowed us to better constrain gas models of the photodissociation regions in the nebula. ALMA Globules B and C were imaged in the 1 mm and 3 mm regimes with the Atacama Large Millimeter Array (ALMA) during Cycle 6. Data were reduced and preliminary analysis performed via CASA. Resulting images of 12 CO, HCN, HCO + , and HNC are presented in Figure 2. The globules appear highly filamentary in nature, with pronounced heads and 10-15 long tails trailing away from the central star. Distinct hotspots can be seen in HCN, HCO + , and HNC, though the emission generally traces that of 12 CO. The 12 CO image of Globule C was originally proposed by P. Huggins in 2014, with analysis and publication performed contemporaneously with the new observations [18]. A detailed analysis of the imaged molecules and their analysis is forthcoming (Bublitz et al., in prep). 
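As a sketch of the line-fitting and flux steps described in this section, the code below fits a single Gaussian to a synthetic spectrum to obtain an integrated line intensity, and scales the central star's UV luminosity by the inverse square of the deprojected distance. The synthetic spectrum, noise level, and sample radii are illustrative assumptions, not the measured Helix values; only the 89 L_sun UV luminosity and the quoted globule/envelope distance scales are taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

# Synthetic HCN-like line profile on a velocity grid (illustrative only).
v = np.linspace(-40.0, 0.0, 200)                     # km/s
rng = np.random.default_rng(0)
spectrum = gaussian(v, amp=0.8, v0=-21.0, sigma=1.5) + rng.normal(0, 0.03, v.size)

popt, _ = curve_fit(gaussian, v, spectrum, p0=[0.5, -20.0, 2.0])
amp, v0, sigma = popt
integrated_intensity = amp * sigma * np.sqrt(2.0 * np.pi)   # K km/s for a T_mb spectrum
print(f"Fitted line: v0 = {v0:.1f} km/s, integrated intensity = {integrated_intensity:.2f}")

# Incident UV flux falls off as 1/r^2 from the central star (deprojected radius r).
L_UV_SOLAR = 89.0                                    # CSPN UV luminosity quoted in the text (L_sun)
PC_IN_CM = 3.086e18

def incident_flux(r_pc, l_uv_solar=L_UV_SOLAR):
    l_cgs = l_uv_solar * 3.828e33                    # erg/s
    return l_cgs / (4.0 * np.pi * (r_pc * PC_IN_CM) ** 2)   # erg s^-1 cm^-2

for r in (0.12, 0.46):                               # roughly globule vs. outer envelope radii (pc)
    print(f"r = {r} pc -> F_UV ~ {incident_flux(r):.2e} erg s^-1 cm^-2")
```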
Modeling Modeling of Globule B was performed with RADEX, a publicly available, non-LTE radiative transfer code [19], where the line intensities of designated molecules are computed along a radiatively and collisionaly excited 1-D slab. All observed transitions of HCN and HNC are plotted as contours of the model output to identify solution spaces for a given molecule and temperature (Figure 3, left). The three transitions of HCN enabled a tight constraint on column density and the H 2 density of the gas below T kin = 60 K. HNC solutions remain degenerate for gas density, however the results indicate that HNC generally resides in a lower density region than HCN. N(HCN, HNC) for temperatures below 60 K, while H 2 density remains degenerate for HNC. Convergence of all HCN transitions is marked (blue triangle). (Right) Meudon PDR modeling for a slab of gas with fixed pressure (P = 10 7 K cm −3 ) confirms sub-unity HNC/HCN (purple) at increased depth and thus decreased UV penetration within the model globule. A noted increase in HCN compared to HNC (or decreasing HNC/HCN) at greatest depth of the slab contradicts the trend in Figure 1, suggesting an incomplete understanding of the chemical processes within the Helix globules. We have also plotted densities for additional species such as H and C + (green and light blue, respectively) that are commonly used as tracers of irradiation, density, etc. To better understand the structure of the PDR environment in the globules and molecular envelope, the radiative transfer Meudon code was implemented for a globule-like slab of gas irradiated by a model WD spectrum, scaled to the flux of the Helix CSPN [20]. The code then calculates the heating, cooling, and reaction rates for a catalog of atomic and molecular species. Abundances for relevant species are plotted across the 1 A V -scale region (Figure 3, right). Discussion Comparison of the HNC/HCN line ratio with the computed UV flux of the Helix Nebula reveals an anticorrelation that is consistent with the results of [2]. The addition of PNe with established UV flux from published literature sources further extends the trend of decreasing HNC/HCN into the regime where UV flux is strongest, suggesting that the ratio acts as a powerful tracer of UV irradiation in PDR environments. RADEX modeling of Globule B yields column density estimates for both molecules, but hints at an effect that is contrary to the observed results. That is, HCN appears to reside in regions of higher number density, hence perhaps closer to the cores of the globules, rather than along the globule surfaces where it is expected to be preferentially produced by UV emission. Similarly, the Meudon models predict that while both HCN and HNC rise in density towards the core of the globule where UV irradiation is weakest, the relative abundance HNC/HCN decreases along this path. Thus both modeling approaches suggest that within an individual region of gas, the expected anticorrelation between HNC/HCN and UV flux is inverted, with HCN preferentially residing deeper in the core than its isomer HNC. In forthcoming analysis (Bublitz et al., in prep), we will explore potential solutions to this conundrum, by more fully investigating the possible model parameter spaces that might describe molecular gas in the Helix nebula.
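The RADEX analysis described above effectively searches the (column density, H2 density) plane for the region where the modeled intensities of all observed transitions simultaneously match the measurements. The sketch below mimics that contour-overlap step on a hypothetical, precomputed model grid; it does not call RADEX itself, and the grid values, observed intensities, and uncertainties are placeholders chosen only to show the selection logic.

```python
import numpy as np

# Hypothetical model grid: modeled line intensities (K km/s) for each transition,
# tabulated over column density N (cm^-2) and H2 number density n (cm^-3).
logN = np.linspace(12.0, 15.0, 61)
logn = np.linspace(3.0, 7.0, 81)
NN, nn = np.meshgrid(logN, logn, indexing="ij")

def fake_model(transition_scale):
    """Stand-in for a precomputed radiative-transfer output table (illustrative only)."""
    return transition_scale * 10 ** (NN - 13.5) * (1.0 - np.exp(-(10 ** (nn - 4.5))))

models = {"HCN 1-0": fake_model(0.9), "HCN 2-1": fake_model(1.4), "HCN 3-2": fake_model(1.1)}
observed = {"HCN 1-0": 0.85, "HCN 2-1": 1.30, "HCN 3-2": 1.05}   # placeholder intensities
sigma = 0.15                                                      # placeholder 1-sigma error

# A grid point is an acceptable solution if every transition agrees within 1 sigma.
ok = np.ones_like(NN, dtype=bool)
for line, obs in observed.items():
    ok &= np.abs(models[line] - obs) < sigma

idx = np.argwhere(ok)
if idx.size:
    print(f"{idx.shape[0]} grid points consistent with all transitions")
    print("log N range:", logN[idx[:, 0]].min(), "-", logN[idx[:, 0]].max())
    print("log n range:", logn[idx[:, 1]].min(), "-", logn[idx[:, 1]].max())
else:
    print("No simultaneous solution on this grid")
```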
2,305.8
2020-04-08T00:00:00.000
[ "Physics" ]
Orientation-Based Control of Microfluidics Most microfluidic chips utilize off-chip hardware (syringe pumps, computer-controlled solenoid valves, pressure regulators, etc.) to control fluid flow on-chip. This expensive, bulky, and power-consuming hardware severely limits the utility of microfluidic instruments in resource-limited or point-of-care contexts, where the cost, size, and power consumption of the instrument must be limited. In this work, we present a technique for on-chip fluid control that requires no off-chip hardware. We accomplish this by using inert compounds to change the density of one fluid in the chip. If one fluid is made 2% more dense than a second fluid, when the fluids flow together under laminar flow the interface between the fluids quickly reorients to be orthogonal to Earth’s gravitational force. If the channel containing the fluids then splits into two channels, the amount of each fluid flowing into each channel is precisely determined by the angle of the channels relative to gravity. Thus, any fluid can be routed in any direction and mixed in any desired ratio on-chip simply by holding the chip at a certain angle. This approach allows for sophisticated control of on-chip fluids with no off-chip control hardware, significantly reducing the cost of microfluidic instruments in point-of-care or resource-limited settings. Introduction The advantages of microfluidics over conventional lab-scale techniques-reduced reagent consumption, faster reactions, smaller instrument size, enhanced automation, higher throughput, and so on-have enabled applications for microfluidic instruments in fields as diverse as health care, environmental monitoring, and space exploration. The ability to control fluid flow on a microfluidic chip is a fundamental need in these instruments. However, most existing techniques for controlling on-chip fluid flow rely on off-chip hardware. For example, to control the mixing ratio of two fluids in a microfluidic chip, two off-chip pumps (pressure regulators or syringe pumps) are typically required. Valves and pumps can be moved on-chip [1,2], but microfluidic valves and pumps also require off-chip hardware (usually computer-controlled solenoid valves and pressure or vacuum pumps). Electrical methods for controlling fluids, such as dielectrophoresis [3,4] and electrowetting, [5] require off-chip electrical power supplies and complicate the fabrication of the microfluidic chip. In each of these approaches, the offchip hardware required to control on-chip fluid flow can cost thousands of dollars, consume hundreds of watts of electrical power, and contribute significant bulk to the instrument. Consequently, many microfluidic instruments remain unsuitable for use in point-of-care or resource-limited settings, where instrument cost, power consumption, and size must be minimized. Here we describe a method for precisely controlling fluid flow inside a microfluidic chip that requires no off chip hardware. In this method, by simply orientating a microfluidic chip at a certain angle, on-chip fluids can be routed and mixed in any desired ratios. We accomplish this by adding inert compounds that slightly increase the density of certain fluids in the chip. When two fluids of different densities flow together under laminar flow on-chip, the interface between the two fluids quickly reorients itself to be orthogonal to Earth's gravitational force. 
If the channel containing the fluids then splits into two channels, the amount of each fluid flowing into each channel is a function of the angle of the channels relative to gravity. In this manner, different amounts of different fluids can be routed on-chip simply by changing the orientation of the microfluidic chip, as shown in Fig 1. As a proof-of-concept demonstration, we used the principle of orientation-based microfluidic control in a simple mixer chip that is capable of generating any desired mixing ratio of two fluids. We chose to create a mixer because of the importance of mixing in microfluidic devices. Microfluidic mixers automate the time-consuming process of preparing arbitrary concentrations and mixtures of solutions by hand. Most existing microfluidic mixers utilize either microfluidic valves and pumps [1,2] or arrays of split-and-combine operations [6,7]. These mixers have found a variety of uses in microfluidic chips for evolving novel ribozymes [8], cytotoxicity studies [9], estimation of drug efficiency, and optimization of biochemical reactions [10,11]. However, a chip that uses these existing mixers is capable of generating only certain fixed ratios of mixtures; it cannot be used to generate any desired concentration or mixing ratio without redesigning the chip. Additionally, mixers containing microfluidic valves and pumps still rely on off-chip hardware for controlling these valves and pumps. Finally, existing valve- and pump-based or split-and-combine mixers consume a relatively large amount of fluid to generate a relatively small amount of the desired mixture. In summary, there is an unmet need for simple, equipment-free methods for generating arbitrary mixtures of fluids in microfluidic chips. Theory of orientation-based microfluidics To explore the effects of chip orientation on fluid flow inside a microfluidic chip, we used finite element analysis software to simulate the behavior of a simple microfluidic chip held at different angles relative to gravity. Our model combines the Navier-Stokes equations, Fick's law of diffusion, and a function that describes the density of a solution as a function of the concentration of a solute [12]. COMSOL Multiphysics (Burlington, MA) was used to simulate the orientation-controlled mixer chip shown in Fig 2. The model simulated the experimental conditions in Fig 3: a microfluidic channel with a circular cross section and 1.0 mm diameter, a less-dense yellow fluid with density 1.00 g/mL, and a more-dense blue fluid with density 1.07 g/mL. The fluid phase in the model is governed by the continuity equation and the incompressible Navier-Stokes equations:

$$\nabla \cdot \mathbf{u} = 0 \qquad (1)$$

$$\rho\,(\mathbf{u} \cdot \nabla)\mathbf{u} = -\nabla P + \mu \nabla^{2}\mathbf{u} + \rho\,\mathbf{g} \qquad (2)$$

where u is the velocity vector, μ is the fluid viscosity, P is the pressure applied to the upstream end of the fluid, ρ is the density of the fluid, and g is the gravitational acceleration of objects on Earth (9.8 m s−2). The solute concentration follows the equation of conservation of mass and Fick's Law of Diffusion:

$$\mathbf{u} \cdot \nabla c = \nabla \cdot \left( D\,\nabla c \right) \qquad (3)$$

where D is the diffusivity of the solute and c is the concentration of the solute. Eqs 1-3 are coupled through the dependence of a solution's density on its solute concentration, which can be expressed as

$$\rho = \rho_{\mathrm{water}} + B\,c \qquad (4)$$

where ρ is the density of the solution, ρ_water is the density of pure water (1.00 g/mL), c is the solute concentration, and B is an experimentally obtained, solute-specific constant that correlates a solution's density with its solute concentration (in this study, B = 127 L/mol for sucrose).
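A small numerical sketch of the density relation just described: given a linear density law, one can estimate the solute concentration needed to make one stream a few percent denser than water. The linear form and the slope value used below (expressed in g/mL per mol/L) are assumptions consistent with the description in the text, not the study's calibration, so the numbers are illustrative only.

```python
RHO_WATER = 1.00          # g/mL, density of pure water
B_SUCROSE = 0.127         # assumed slope in (g/mL) per (mol/L) for sucrose; illustrative

def solution_density(c_mol_per_L, b=B_SUCROSE):
    """Linear density law rho = rho_water + B*c, as assumed for Eq 4."""
    return RHO_WATER + b * c_mol_per_L

def concentration_for_density(target_rho, b=B_SUCROSE):
    """Invert the linear law to find the concentration giving a target density."""
    return (target_rho - RHO_WATER) / b

# Concentrations needed to make the dense stream 2% and 7% denser than water.
for rho in (1.02, 1.07):
    c = concentration_for_density(rho)
    print(f"target {rho:.2f} g/mL -> ~{c:.2f} mol/L sucrose (assumed slope)")
```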
Eqs 1-4 are solved for a steady state flow with inlet solute concentrations of 0 mol/L in Inlet 1 and 1 mol/L in Inlet 2. The H-shaped microfluidic channel network simulated in Fig 2 has two inlets and two outlets. Inlet 1 contains a yellow fluid with density ρ = 1.00 g/mL, and Inlet 2 contains a slightlymore-dense blue fluid (ρ = 1.07 g/mL). In Fig 2A, the chip is oriented such that Inlet 2 is in the same direction as gravity (so the angle θ between Inlet 2 and gravity is 0°). In this orientation, the force of gravity keeps the more-dense blue fluid flowing along the bottom of the channel and the less-dense yellow fluid flowing along the top of the channel. Consequently, when the channel splits into two outlets, the fluids leave in the same directions they came from: the more-dense blue fluid exits through Outlet 2 in the direction of gravity, and the less-dense yellow fluid exits through Outlet 1. In Fig 2B, the chip has been rotated about the middle channel axis by 180°. In this configuration, the force of gravity causes the two fluids to quickly swap places in the middle channel so Using the orientation of a microfluidic chip to control the mixing ratio of fluids on-chip. Two fluids (yellow and blue) flow into the chip; the blue fluid includes an additive (sucrose) that makes the blue fluid 2% more dense than the yellow fluid. When the two fluids flow together in the chip, the fluids rotate to orient the more-dense blue fluid toward Earth's gravity. When the channel then splits, the amount of each fluid flowing in each direction is precisely controlled by the angle of the chip. By using this approach, any desired mixing ratio of the yellow and blue fluids can be obtained simply by holding the chip at a certain angle; no off-chip control hardware is needed. that the more-dense fluid flows along the bottom of the channel and the less-dense fluid flows along the top. When the channel splits, the fluids exit in the opposite direction they came from: the more-dense blue fluid entered from Inlet 2 but exits through Outlet 1, and the less-dense yellow fluid entered from Inlet 1 but exits through Outlet 2. In this manner two different fluids can be routed to two different destinations on-chip by orienting the chip at either 0°relative to gravity (Fig 2A) or 180°( Fig 2B). In Fig 2C, the chip has been rotated by 90°relative to gravity. The force of gravity again causes the fluids to reorient to place the more-dense blue fluid on the bottom and the lessdense yellow fluid on the top (a clockwise rotation of 90°). When the channel splits, both Outlet Principle of orientation-based control of microfluidics. These simulations show fluid flowing inside a simple microfluidic channel network containing two inlets and two outlets. Inlet 1 contains a less-dense yellow fluid and Inlet 2 contains a more-dense blue fluid. When the chip is oriented such that Inlet 2 is aligned with the Earth's gravitational force (A), the yellow and blue fluids remain unperturbed and exit the chip in the same directions from which they entered (yellow at Outlet 1 and blue at Outlet 2). However, when the chip is rotated 180°(B), the force of gravity causes the fluids to swap places in the horizontal channel, placing the more-dense blue fluid on the bottom of the channel and the less-dense yellow fluid on the top. Consequently, the two fluids exit in the opposite directions from which they entered: Outlet 1 contains blue fluid and Outlet 2 contains yellow fluid. 
When the chip is oriented at −90°relative to gravity (C), the fluids rotate 90°clockwise to orient the more-dense blue fluid on the bottom of the channel and the less-dense yellow fluid on the top. When the channel splits into two outlets, each outlet receives an identical mixture containing 50% yellow fluid and 50% blue fluid. Finally, when the chip is oriented at 90°(D), the fluids rotate 90°counterclockwise to once more orient the more-dense blue fluid on the bottom of the channel, and again both outlets contain identical mixtures containing 50% yellow and 50% blue. In this manner, the orientation of a microfluidic chip may be used to route fluids in different directions on-chip without using any off-chip control hardware. 1 and Outlet 2 have the same contents: the bottom-half of each channel is filled with moredense blue fluid, and the top half of each channel is filled with less-dense yellow fluid. After the contents of each exit channel becomes homogeneous by e.g. diffusional mixing, both outlets contain identical mixtures consisting of 50% blue and 50% yellow. Finally, in Fig 2D, the chip has been rotated by −90°relative to gravity. This case is similar to Fig 2C; gravity once more causes the two fluids to reorient to place the more-dense blue Photographs of a microfluidic mixer chip oriented at different angles θ relative to gravity. In each case Inlet 1 contains a less-dense yellow fluid (water; density ρ = 1.00 g/mL) and Inlet 2 contains a more-dense blue fluid (sucrose solution; ρ = 1.07 g/mL). When θ = 0°(A) the arrangement of yellow and blue fluids in the horizontal channel remains unchanged, and Outlet 1 contains yellow fluid and Outlet 2 contains blue fluid. However, at θ = 90°(B) gravity causes the more-dense blue fluid to move to the bottom of the horizontal channel and the less-dense yellow fluid to move to the top. This twists the contents of the horizontal channel by 90°and causes both outlets to contain identical mixtures containing *50% blue and *50% yellow. Finally, at θ = 180°(C) the gravity-induced repositioning of the fluids in the horizontal channel causes the fluids to twist by 180°, effectively swapping places in the channel. As a result, the two fluids exit the chip in directions opposite from where they entered, with Outlet 1 containing blue fluid and Outlet 2 containing yellow fluid. fluid on the bottom and the less-dense yellow fluid on top (a counterclockwise rotation in this case) and the exit channels each contain the same mixture (50% yellow and 50% blue). The results in Fig 2 can be generalized for any angle of orientation θ. If the less-dense fluid at Inlet 1 contains a solute A with a concentration [A 0 ], and the more-dense fluid at Inlet 2 contains a solute B with concentration [B 0 ], the concentrations [A] and [B] in each of the outlet channels are: Using the above equations, we can predict the concentrations [A] and [B] flowing in a chip held at any desired angle θ. These equations only apply to a microfluidic chip with circular cross-section channels, although similar equations may be derived for other channel shapes. Materials and Methods To demonstrate the principle of orientation-based control of fluid flow in a microfluidic chip, we fabricated microfluidic chips similar to the one simulated in Fig 2. Microfluidic chips were designed using SolidWorks (Dassault Systèmes, Vélizy-Villacoublay, France), exported as an. STL file, and printed using a 3D printer (Form 1+, Formlabs, Cambridge, MA). 
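Referring back to the angle-to-concentration relation (Eq 5) mentioned above: the equation itself is not reproduced in this excerpt, so the sketch below implements one plausible form consistent with the description. For a circular cross-section with the fluid interface through the channel center, the fraction of each inlet fluid reaching each outlet varies linearly with the rotation angle between 0° and 180°. Treat this as an assumed reconstruction, not the paper's exact expression.

```python
def outlet_concentrations(theta_deg, a0=1.0, b0=1.0):
    """Assumed linear angle-to-concentration relation for a circular channel.

    theta_deg : chip rotation angle (0-180 degrees)
    a0, b0    : inlet concentrations of solute A (less dense) and B (more dense)
    Returns ((A, B) at Outlet 1, (A, B) at Outlet 2) after each outlet homogenizes.
    """
    theta = max(0.0, min(180.0, theta_deg))
    frac = theta / 180.0                   # fraction of the dense stream routed to Outlet 1
    outlet1 = (a0 * (1.0 - frac), b0 * frac)
    outlet2 = (a0 * frac, b0 * (1.0 - frac))
    return outlet1, outlet2

for angle in (0, 45, 90, 135, 180):
    o1, o2 = outlet_concentrations(angle)
    print(f"theta={angle:3d}  Outlet1 (A,B)={o1}  Outlet2 (A,B)={o2}")
```

At 0° and 180° this assumed form predicts pure fluids at each outlet, and at 90° it predicts identical 50/50 mixtures, matching the behavior described for Figs 2 and 3.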
These chips contain a H-shaped microchannel 42 mm long and 1 mm in diameter. The Stereolithographybased printer uses a 405 nm Class 1 laser to polymerize a liquid resin into a solid part. The resin used in this study is a combination of methacrylated monomers and oligomers. Fig 3 shows our test chip in operation. Inlet 1 contains water, density ρ = 1.00 g/mL, and Inlet 2 contains more-dense sucrose solution, ρ = 1.07 g/mL. Sucrose solutions of precisely known densities were prepared using our software NaCl.py (available for download at http:// groverlab.org). FD&C Blue #1 and Yellow #6 were used specifically for visual characterizations. The inlets were connected by tubing to fluid reservoirs that were held 3 cm above the chip. Since the inlet reservoirs were higher than the outlet reservoirs, a head pressure P = ρgh developed that pumped fluid flow from the input reservoirs through the chip and to the output reservoirs (g = 9.8 m s −2 and h = height difference between input and output reservoirs). The inlet and outlet reservoirs were maintained at the same height regardless of the tilt of the chip. Consequently, the orientation of the chip does not affect the flow rate. No off-chip fluid control hardware like pumps or valves were used. The phenomenon of orientation-based control depends upon convection, not diffusion, being dominant in the microfluidic system. A dimensionless parameter called the Peclet number, Pe, is used to determine whether convection or diffusion is dominant in a system: where L is the characteristic length, u is the flow velocity, and D is the solute's diffusion coefficient. The calculated Peclet number for our flow rate (*11,000) is much greater than one, indicating that convection is dominant and diffusional mixing between the different fluid streams is negligible. Chips of different channel geometries were held at different angles. The average residence time of fluid in the chip was calculated to be 0.7 seconds. To make sure that the system reaches steady state, samples were collected from the chip after 1 minute of flow. Samples of 5 mL were collected from each outlet for each orientation and chip geometry. To quantify the resulting mixing ratios, fluids from both outlets were collected and analyzed using an UV-Vis-NIR spectrophotometer (V-670, Jasco, Easton, MD). Experimental Results When the chip is oriented on its edge relative to gravity in Fig 3A (θ = 0°), the more-dense blue fluid remains on the bottom of the horizontal channel and the less-dense yellow fluid remains on the top of the channel. Consequently, the fluids exit the mixer chip in the same directions from which they entered (yellow fluid at Outlet 1 and blue fluid at Outlet 2). This case is identical to the simulation shown in Fig 1A. When the chip is held flat relative to gravity in Fig 3B (θ = 90°), the two fluids reorient relative to gravity (rotating 90°clockwise) to place the moredense blue fluid at the bottom of the horizontal channel and the less-dense yellow fluid on the top of the channel. When the horizontal channel splits, both output channels have the same contents (*50% yellow and *50% blue, which appears as green in the outlets). This situation is identical to the simulation shown in Fig 2C. Finally, when the chip is oriented on its other edge in Fig 3C (θ = 180°), the yellow and blue fluids swap places in the horizontal channel to place the more-dense blue fluid on the bottom and the less-dense yellow fluid on the top. 
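The Peclet number referred to above follows the standard definition Pe = L·u/D; since the expression itself is missing from this excerpt, the sketch below simply evaluates that definition. The flow velocity and diffusion coefficient used here are assumed, order-of-magnitude values, not the ones used to obtain the quoted Pe of roughly 11,000.

```python
def peclet(length_m, velocity_m_s, diffusivity_m2_s):
    """Standard advection/diffusion Peclet number: Pe = L * u / D."""
    return length_m * velocity_m_s / diffusivity_m2_s

# Assumed, illustrative values: 1 mm channel, mm/s-scale flow, small-solute diffusivity in water.
L = 1.0e-3        # m, channel diameter (characteristic length)
u = 5.0e-3        # m/s, assumed mean flow velocity
D = 5.0e-10       # m^2/s, typical small-molecule diffusion coefficient

print(f"Pe ~ {peclet(L, u, D):.0f}  (>> 1, so convection dominates over diffusion)")
```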
Experimental Results
When the chip is oriented on its edge relative to gravity in Fig 3A (θ = 0°), the more-dense blue fluid remains on the bottom of the horizontal channel and the less-dense yellow fluid remains on the top of the channel. Consequently, the fluids exit the mixer chip in the same directions from which they entered (yellow fluid at Outlet 1 and blue fluid at Outlet 2). This case is identical to the simulation shown in Fig 1A. When the chip is held flat relative to gravity in Fig 3B (θ = 90°), the two fluids reorient relative to gravity (rotating 90° clockwise) to place the more-dense blue fluid at the bottom of the horizontal channel and the less-dense yellow fluid on the top of the channel. When the horizontal channel splits, both output channels have the same contents (~50% yellow and ~50% blue, which appears as green in the outlets). This situation is identical to the simulation shown in Fig 2C. Finally, when the chip is oriented on its other edge in Fig 3C (θ = 180°), the yellow and blue fluids swap places in the horizontal channel to place the more-dense blue fluid on the bottom and the less-dense yellow fluid on the top. Consequently, the fluids leave the outlet channels opposite from where they entered, with yellow fluid at Outlet 2 and blue fluid at Outlet 1.

To demonstrate orientation-based control of fluid flow over a wide range of angles (not just the three angles shown in Fig 3), the mixer chip was operated at angles from 0° to 180° in 15° increments. As before, the more-dense blue fluid was a sucrose solution (density = 1.07 g/mL) and the less-dense yellow fluid was water (density = 1.00 g/mL). Fig 4 shows the concentration of each dye in the two output channels.

Fig 4 caption (excerpt): Bars indicate ±1 standard deviation. These results show that mixture composition is a function of the angle of rotation of the chip, and any desired mixture can be generated simply by orienting the chip at the required angle.

As the mixer chip's angle of rotation θ is varied from 0° to 180°, the concentration of blue fluid rises and that of yellow fluid drops in Outlet 1, and the opposite trend is observed in Outlet 2. The dependence of concentration on angle of rotation is roughly linear, as predicted by Eq 5. The experimental data deviate from the predicted model at angles near 0° and 180°, where the outlet fluid concentrations are not 100% and 0% as predicted but ~90% and ~10% instead. This is likely due to diffusion within the horizontal channel in the mixer chip, which contributes a small amount of mixing between the two fluid streams. The effect of this diffusional mixing is most pronounced at angles near 0° and 180°, where the outlets should contain pure (unmixed) fluids according to Eq 5; instead their contents are ~90% pure. Additionally, the dependence of fluid concentration on angle of rotation in Fig 4 is not purely linear but appears to have some higher-order shape. This can be attributed to the cross-sectional geometry of the 3D-printed microfluidic channel, a consequence of the limited resolution of the 3D printer: the 1 mm channel was printed using a stereolithography 3D printer with a tolerance of ±200 μm. This could result in a microfluidic channel that is not perfectly circular and requires a more complex model than the one shown in Eq 5. However, the error bars in Fig 4 confirm that the outlet concentrations are a reproducible and predictable function of the angle of rotation of the chip, and a higher-order function could easily be derived that predicts the fluid concentrations in a chip at any rotation angle θ to within a few percentage points. To explore the role of channel cross-section shape in orientation-controlled microfluidics, we designed and 3D printed mixer chips with square (1 × 1 mm) and rectangular (1 × 1.25 mm) cross-section channels. We then repeated the experiments using the square and rectangular chips for rotation angles θ = 0°, 90°, and 180°. The results in Fig 5A confirm that the orientation of a chip can still be used to control fluid mixing in chips with square and rectangular cross sections, though increased deviation from the predicted fluid concentrations is observed in the rectangular cross-section chip. This suggests that orientation-based control of microfluidics works best in channels with aspect ratios near one; at higher aspect ratios the channel geometry hinders the desired rotation of fluid in the channel. We were limited to a 1 mm channel diameter since we wanted to explore different geometries and aspect ratios using a 3D printer; however, the same phenomena were observed in a conventional 180 μm glass microfluidic channel (data not shown). We also examined the behavior of two fluids of different densities flowing in rectangular channels with much higher aspect ratios (cross-sectional dimensions 1 mm × 5 mm; data not shown). The experimental results further support the assertion that orientation-based control of microfluidics is most practical in channels with cross-sectional aspect ratios near one (circular and square channels).
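To make the linear dependence discussed above concrete, the following sketch implements one plausible form of the Eq 5 relationship. The exact expressions are not reproduced in the text, so this simple linear interpolation between the pure-fluid limits at θ = 0° and θ = 180° is an assumption used only to illustrate the trend in Fig 4; the measured ~90%/~10% values near the extremes reflect the additional diffusional mixing noted above.

```python
# Assumed linear form of the Eq 5 mixing relationship for a circular-cross-section chip:
# at theta = 0 deg Outlet 1 carries pure fluid A, at theta = 180 deg it carries pure fluid B.

def outlet1_concentrations(theta_deg, A0=1.0, B0=1.0):
    """Return ([A], [B]) in Outlet 1 for a chip tilted by theta_deg (clamped to 0-180)."""
    f = min(max(theta_deg, 0.0), 180.0) / 180.0   # fraction of the channel occupied by fluid B
    return A0 * (1.0 - f), B0 * f

for theta in (0, 45, 90, 135, 180):
    a, b = outlet1_concentrations(theta)
    print(f"theta = {theta:3d} deg -> [A] = {a:.2f}*[A0], [B] = {b:.2f}*[B0]")
```

Under this assumption Outlet 2 simply mirrors Outlet 1.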
In addition to channel geometry, fluid density has an effect on our phenomenon. The speed at which rotation occurs is influenced by the density difference; however, in practice we found that all rotations we observed were nearly instantaneous. The channel length was kept excessively long for characterization purposes; however, for a channel diameter of 1 mm, a minimum length of 10 mm is required for the rotation to occur. The larger the difference between fluid densities, the quicker the rotation occurs. Rotation speed could be increased by further increasing the difference between fluid densities; however, the resulting increase in concentration gradients will result in an increase in diffusive flux, potentially allowing for more mixing between the two layers. To determine how much of a difference in density between the two fluids is necessary to employ orientation-based control, sucrose solutions with different concentrations (and thus different densities) were prepared as shown in Fig 5B. Experiments were performed using solutions with densities of 1.00 g/mL (control), 1.02 g/mL, 1.07 g/mL, and 1.12 g/mL. As before, the density of the second fluid was kept constant at 1.00 g/mL (water). In the control case in Fig 5B, there is no difference in densities between the two fluids in the mixer chip, so no reorientation of the fluids occurs within the chip and the fluid concentrations at the outlets are constant regardless of the angle of orientation of the chip; however, when the density of one input fluid is increased by only 2% to 1.02 g/mL, the fluids again reorient with respect to gravity and orientation-based control is demonstrated.

Conclusions
We demonstrated that the orientation of a microfluidic chip can be used to precisely control the flow of fluids inside the chip. By using the orientation of a chip to control fluid flow instead of on-chip valves or off-chip pumps and regulators, our technique can eliminate a substantial portion of the cost, size, and power consumption of a microfluidic assay or instrument. Thus, this technique should facilitate the spread of microfluidic technologies to new applications in resource-limited and point-of-care settings. Orientation-based control of microfluidics does depend upon the different fluids in the chip having different densities. However, the amount of density difference necessary to use orientation-based control is very small (only ~2%), and there are many different substances that can be added to a fluid to adjust its density. For applications that do not require precise control of the osmotic strength of fluids, small amounts of solutes like sucrose (as used here) and sodium chloride can be easily and inexpensively added to a fluid to enable orientation-based control. For applications in which the osmotic strength of the fluid does need to be controlled, a number of compounds have been developed that increase a fluid's density without affecting its osmotic strength.
These include Ficoll, a high-molecular-weight polysaccharide used in density gradient centrifugation [13]; Percoll, a colloidal silica solution which is biologically inert and also used in density gradient centrifugation [14]; Visipaque (iodixanol), an isotonic, nonionic, nontoxic compound used as an intravenous contrast agent in radiography [15]; and metatungstate solutions, which are dense, inert, inorganic solutions with low reactivity. By adding small amounts of substances like these to fluids and using the principle of orientation-based control, microfluidic assays can run with little or no off-chip hardware. A density difference of ~2% is required for fluid reorientation to occur. Increasing the density of a solution by ~2% may change the viscosity of the solution; for example, increasing the density of a sucrose solution by 2% increases the effective viscosity of the solution by 17%. However, this viscosity difference is too small to affect the concentration of the output fluid. Karst et al. [12] studied fluids of different viscosities flowing alongside each other in laminar flow (as is the case in the horizontal channel of our chip). They found that the ratio of the volumes occupied by each fluid inside the tube is not equal to the ratio of the fluids' flow rates. Moreover, they found that the viscosity of the second fluid has to be at least 100% greater than the viscosity of the first fluid for the ratio of the volumes of the two fluids in the channel to be altered. Therefore, the effect of fluid viscosity is minimal. Finally, is orientation-based control powerful enough to control real-world microfluidic chips? Certainly chips with hundreds of computer-controlled valves offer a level of fluid control that may be unattainable by orientation-based control (though this level of control comes with a significant cost). However, many real-world microfluidic devices require simpler fluid control and could be controlled using chip orientation. For example, the proof-of-concept mixer chip shown here could be used in a drug toxicity screening assay by exposing cells downstream of the mixer to various concentrations of a drug [16]. This could be accomplished by flowing a drug solution in one inlet and a diluent with 2% greater density in the second inlet, and trapping cells in one of the outlet channels. By orienting the chip at a certain angle and then assessing the viability of the cells, the response of the cells to a particular concentration of the drug could be assessed. Assuming that a chip can be held at an angle with an accuracy of ±5 degrees, the resulting drug concentration should be accurate to about ±10%, which is adequate precision for many drug screening assays. This is one of many real-world microfluidic assays and diagnostics that could be performed with little or no off-chip control hardware using orientation-based control.
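The quoted ±5° orientation accuracy can be turned into a concentration tolerance using the same assumed linear relationship as above; the sketch below suggests roughly ±3 percentage points at mid-range angles, comfortably within the ~±10% accuracy stated in the text (which presumably also absorbs other error sources).

```python
# Sensitivity of the delivered concentration to chip-orientation error,
# under the assumed linear mixing model (fraction = theta / 180).

def fraction(theta_deg):
    return min(max(theta_deg, 0.0), 180.0) / 180.0

target_theta = 90.0     # assumed example target angle
angle_error = 5.0       # +/- degrees of orientation accuracy quoted in the text

nominal = fraction(target_theta)
spread = fraction(target_theta + angle_error) - fraction(target_theta - angle_error)
print(f"nominal fraction {nominal:.3f}, +/- {spread / 2:.3f}")   # ~0.500 +/- 0.028
```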
Pharmacological Management of Atrial Fibrillation: One, None, One Hundred Thousand

Abstract
Atrial fibrillation (AF) is associated with a significant burden of morbidity and an increased risk of mortality. Antiarrhythmic drug therapy remains a cornerstone to restore and maintain sinus rhythm for patients with paroxysmal and persistent AF based on current guidelines. However, conventional drugs have limited efficacy, present problematic risks of proarrhythmia, and cause significant noncardiac organ toxicity. Thus, inadequacies in current therapies for atrial fibrillation have made new drug development crucial. New antiarrhythmic drugs and new anticoagulant agents have changed the current management of AF. This paper summarizes the available evidence regarding the efficacy of medications used for acute management of AF, rhythm and ventricular rate control, and stroke prevention in patients with atrial fibrillation, and focuses on the current pharmacological agents.

Introduction
Atrial fibrillation (AF) is the most common cardiac rhythm disturbance seen in clinical practice, accounting for approximately one-third of hospitalizations [1]. AF may occur isolated or in association with structural heart disease, contributing substantially to cardiac morbidity and mortality. The estimated prevalence of AF is 0.4-1% in the general population, increasing with age [2,3], and it is associated with a higher long-term risk of stroke, heart failure, and all-cause mortality, especially in women [4,5]. Management of patients with AF requires knowledge of its pattern of presentation [6] (first diagnosed, paroxysmal, persistent, long-standing, and permanent AF; Figure 1), underlying conditions, and decisions about restoration and maintenance of sinus rhythm, control of the ventricular rate, and antithrombotic therapy. Antiarrhythmic drug therapy is the first-line treatment for patients with paroxysmal and persistent AF based on current guidelines [6,7]. Prevention of AF-related complications relies on antithrombotic therapy, control of ventricular rate, and adequate therapy of concomitant cardiac diseases. However, available drug therapy has major limitations, including incomplete effectiveness, cardiac and extracardiac toxicity, risk of life-threatening proarrhythmic complications (antiarrhythmic agents), and bleeding (anticoagulants) [8][9][10][11]. Thus, there is a continuing need for new drug, device, and ablative approaches to rhythm restoration, and simpler and safer stroke prevention regimens are needed for AF patients on life-long anticoagulation [12]. This paper summarizes the available evidence regarding the efficacy of medications used for ventricular rate control, stroke prevention, acute conversion, and maintenance of sinus rhythm in patients with atrial fibrillation.

Figure 1 (patterns of AF; labels from the figure): self-terminating, usually within 48 h (paroxysmal); has lasted for ≥1 year when it is decided to adopt a rhythm-control strategy (long-standing persistent); rhythm-control strategy is not pursued (permanent).

Acute Management
The acute management of patients with AF is driven by acute protection against thromboembolic events and acute improvement of cardiac function. The severity of AF-related symptoms should drive the decision for acute restoration of sinus rhythm or acute management of the ventricular rate. In stable patients with a rapid ventricular response, acute control of the ventricular rate can be achieved by oral administration of β-blockers or nondihydropyridine calcium channel antagonists.
In contrast, in severely compromised patients, i.v. verapamil or metoprolol may be used [6]. In patients who remain symptomatic despite adequate rate control, or in patients in whom rhythm control therapy is pursued, pharmacological cardioversion of AF may be initiated by a bolus administration of an antiarrhythmic drug (Table 1) [6]. In the acute setting, flecainide has an established effect on restoring sinus rhythm in patients with AF of short duration (<24 hours) [6,13]. Patients undergoing flecainide treatment should be checked for contraindications, including structural heart disease, second- or third-degree AV block, left bundle branch block, right bundle branch block (when associated with left hemiblock), asymptomatic nonsustained ventricular tachycardia, cardiogenic shock, reduced cardiac output (LVEF < 35%), prior MI, and significant renal or hepatic impairment [6,13]. Flecainide is also a safe and effective agent for termination of AF in patients with Wolff-Parkinson-White (WPW) syndrome [14]. Propafenone is indicated to convert recent-onset AF to sinus rhythm in patients without abnormal LV function and ischemia, but it has limited efficacy for converting atrial flutter [15,16]. In patients with underlying heart disease, amiodarone can be employed, as it blocks Na+, Ca2+, and K+ channels [17] and inhibits the consequences of α-adrenoceptor and β-adrenoceptor stimulation [18]. Nonetheless, it does not achieve cardioversion in the short and medium terms [6]. Ibutilide is a useful agent for the pharmacological cardioversion of recent-onset atrial fibrillation but is more effective in terminating atrial flutter. It prolongs the myocardial action potential duration and the effective refractory periods in both the atria and the ventricles; the mean time to conversion was ≤30 minutes [19]. Its cellular electrophysiologic mechanism increases the slow inward plateau sodium current and inhibits the outward repolarizing potassium current. Adverse events associated with ibutilide are predominantly proarrhythmic effects [20,21]. The drug has minimal haemodynamic effects and is associated with few non-cardiovascular adverse events [22][23][24][25][26]. Vernakalant is a relatively atrial-selective antiarrhythmic agent [27] recently recommended for approval by the European Medicines Agency for rapid cardioversion of recent-onset AF to sinus rhythm in adults (≤7 days for non-surgical patients; ≤3 days for surgical patients) [28,29]. The atrial selectivity of vernakalant is achieved by targeting atrial-specific channels: the Kv1.5 channel, which carries the K+ current IKur, and the Kir3.1/3.4 channel, which carries the muscarinic K+ current IKAch. Vernakalant also blocks Ito and the late INa, with minor blockade of IKr [30,31]. A direct comparison with amiodarone in the AVRO trial [32] showed that vernakalant was more effective than amiodarone for the rapid conversion of AF to sinus rhythm (51.7% versus 5.7% at 90 min after the start of treatment, P < .0001). Intravenous vernakalant is generally given at an initial dose of 3 mg/kg, followed by an additional 2 mg/kg if atrial fibrillation conversion fails after 15 min. The elimination half-life is about 2 h. Vernakalant is contraindicated in patients with systolic blood pressure <100 mm Hg, severe aortic stenosis, heart failure (NYHA class III and IV), acute coronary syndrome (ACS) within the previous 30 days, or QT interval prolongation [30][31][32][33]. Furthermore, before its use, the patients should be adequately hydrated.
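The weight-based regimen just described (an initial 3 mg/kg bolus, followed by 2 mg/kg if conversion fails after 15 min) reduces to simple arithmetic; in the sketch below the 80 kg body weight is an assumed example, and administration details beyond the doses quoted above are not addressed.

```python
# Vernakalant bolus arithmetic from the regimen quoted above:
# 3 mg/kg initially, then an additional 2 mg/kg if AF persists 15 min later.

def vernakalant_boluses(weight_kg):
    first_mg = 3.0 * weight_kg
    second_mg = 2.0 * weight_kg   # given only if conversion has not occurred after 15 min
    return first_mg, second_mg

first, second = vernakalant_boluses(80.0)   # assumed example body weight
print(f"first bolus {first:.0f} mg, optional second bolus {second:.0f} mg")
```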
In addition, ECG and hemodynamic monitoring should be used, and the infusion can be followed by direct current cardioversion (DCC) if necessary [30][31][32][33]. The drug is not contraindicated in patients with stable coronary artery disease, hypertensive heart disease, or mild heart failure. The clinical positioning of this drug has not yet been determined, but it is likely to be used for acute termination of recent-onset AF in patients with lone AF or AF associated with hypertension, coronary artery disease, or mild-to-moderate (NYHA class I-II) heart failure [34]. The ACC/AHA/ESC guidelines identify dofetilide, ibutilide, and amiodarone as agents with efficacy for pharmacologic cardioversion of atrial fibrillation of >7 days' duration, and disopyramide, flecainide, procainamide, propafenone, and quinidine as less effective or incompletely studied [13]. The so-called "pill-in-the-pocket" approach may be used in selected, highly symptomatic patients with infrequent (once/month to once/year) recurrences of atrial fibrillation. According to a medium-size trial, oral propafenone (450-600 mg) or flecainide (200-300 mg) can be administered by patients safely (1/569 episodes resulting in atrial flutter with rapid conduction) and effectively (94%, 534/569 episodes) out of hospital. In order to implement the pill-in-the-pocket technique, patients should be screened for indications and contraindications, and the efficacy and safety of oral treatment should be tested in hospital. Finally, patients should be instructed to take flecainide or propafenone when symptoms of AF occur [35].

Long-Term Management
The restoration and maintenance of sinus rhythm has been shown to be associated with reduced atrial remodeling, improved left ventricular function, reduced symptoms, greater exercise tolerance, increased ability to perform activities of daily living, and improved quality of life [36]. However, rates of attainment and maintenance of sinus rhythm have been suboptimal in comparative studies such as Atrial Fibrillation Follow-up Investigation of Rhythm Management (AFFIRM) [37], the Polish How to Treat Chronic Atrial Fibrillation study (HOT CAFÉ) [38], Pharmacological Intervention in Atrial Fibrillation (PIAF) [39], Rate Control versus Electrical Cardioversion (RACE) [40], Strategies of Treatment in Atrial Fibrillation (STAF) [41], and Atrial Fibrillation and Congestive Heart Failure (AF-CHF) [42]. Furthermore, these studies failed to demonstrate a survival advantage with either approach by intention-to-treat analysis, both in patients with and without heart failure (HF) [36,43]. This is probably because the antiarrhythmic therapies studied had limited efficacy, poor tolerability, and the potential to trigger new arrhythmias. Moreover, several of the antiarrhythmic drugs used for rhythm control in these studies were associated with a significant increase in noncardiovascular deaths [36]. However, the results do not support the hypothesis that rate control is preferable as first-line therapy for AF with respect to survival, and they do not disprove the hypothesis that maintenance of sinus rhythm is preferable to the continuation of AF, particularly if rate control fails to restore adequate quality of life (QOL) or if selective approaches are employed. Many post-hoc analyses and substudies have assessed QOL, functional status, and exercise tolerance, with the majority demonstrating important benefits associated with achievement of rhythm control.
Moreover, some subanalyses and additional trials have suggested that sinus rhythm can be associated with longer survival, including in patients with HF [43]. Current guidelines indicate that paroxysmal AF is more often managed with a rhythm control strategy, especially if it is symptomatic and there is little or no associated underlying heart disease. Permanent AF is managed by rate control, unless it is deemed possible to restore sinus rhythm, in which case the AF category is redesignated as "long-standing persistent". Rate control is needed for most patients with AF unless the heart rate during AF is naturally slow. Rhythm control may be added to rate control if the patient is symptomatic despite adequate rate control, or if a rhythm control strategy is selected because of factors such as the degree of symptoms, younger age, or higher activity levels [6].

Maintenance of Normal Sinus Rhythm
A number of agents are effective for the maintenance of normal sinus rhythm in patients with atrial fibrillation. According to current guidelines, amiodarone, dronedarone, flecainide, propafenone, sotalol, and disopyramide are recommended for rhythm control depending on the underlying heart disease (Figure 2) [6]. The new ESC 2010 AF guidelines mention dronedarone for the first time as a recommended treatment in patients with AF [6]. Dronedarone is a multichannel blocker that inhibits sodium, potassium, and calcium channels and has noncompetitive antiadrenergic activity. Its short half-life (approximately 24 h) reduces accumulation in tissues, and its low lipophilicity, together with the elimination of the iodine moieties, reduces its toxicity. Dronedarone prolongs the action potential duration and reduces heart rate, with a low proarrhythmic effect [1,44]. Maximum dronedarone plasma concentrations are reached within 1-4 h [45]. Steady-state concentrations are achieved within 7 days of 400 mg twice daily [46]. Dronedarone is extensively biotransformed by cytochrome P450 (CYP3A4) enzymes, with little excretion of unchanged drug in bile and urine. The elimination half-life (about 24 h) is much shorter than that of amiodarone, whose elimination is very slow (up to many weeks) because of delayed removal from adipose tissue stores. Potent CYP3A4 inhibitors (e.g., ketoconazole or erythromycin) can raise dronedarone plasma concentrations. Dronedarone increases serum digoxin levels by inhibiting P-glycoprotein-mediated intestinal and renal excretion, and it can raise serum creatinine concentrations by inhibiting renal organic-cation transport [47]. In patients with adrenergic AF and no or minimal structural heart disease, dronedarone is recommended if β-blocking agents (including sotalol) are not effective [48]. In patients with left ventricular hypertrophy, coronary artery disease, and stable New York Heart Association (NYHA) class I/II heart failure, dronedarone is the antiarrhythmic drug of choice. However, dronedarone should not be used in AF patients with NYHA class III/IV heart failure or in unstable patients with NYHA class II; in these patients, β-blocking agents are recommended as first-line therapy [6,48]. According to the 2011 ACCF/AHA/HRS focused update on the management of patients with atrial fibrillation [7], dronedarone is reasonable to decrease the need for hospitalization for cardiovascular events in patients with paroxysmal AF or after conversion of persistent AF. Dronedarone can be initiated during outpatient therapy (Class IIa, Level of Evidence B) [49].
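The pharmacokinetic figures quoted above (an elimination half-life of roughly 24 h and steady state within about 7 days of twice-daily dosing) are consistent with simple first-order accumulation; the sketch below illustrates this with a one-compartment assumption, which is a simplification for illustration rather than a model taken from the paper.

```python
# Fraction of steady-state plasma level reached after n days of regular dosing,
# assuming first-order elimination with a 24 h half-life (one-compartment sketch).

half_life_h = 24.0

def fraction_of_steady_state(days, half_life_hours=half_life_h):
    return 1.0 - 0.5 ** (days * 24.0 / half_life_hours)

for d in (1, 2, 3, 5, 7):
    print(f"day {d}: {fraction_of_steady_state(d) * 100:.0f}% of steady state")
# With a 24 h half-life, ~97% of steady state is reached by day 5 and >99% by day 7,
# matching the 'steady state within 7 days' figure quoted above.
```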
Dronedarone should not be administered to patients with class IV heart failure or to patients who have had an episode of decompensated heart failure in the past 4 weeks, especially if they have depressed left ventricular function (left ventricular ejection fraction ≤35%; Class III-Harm, Level of Evidence B) [50]. The EURIDIS (European Trial in Atrial Fibrillation or Flutter Patients Receiving Dronedarone for the Maintenance of Sinus Rhythm) and ADONIS (American-Australian-African Trial with Dronedarone in Atrial Fibrillation or Flutter Patients for the Maintenance of Sinus Rhythm) trials [51,52] found that dronedarone prolongs the time to recurrence of AF. Post-hoc analyses of pooled EURIDIS and ADONIS data showed that dronedarone greatly decreased the combined endpoint of admission or death. The efficacy of dronedarone for rate control in patients with permanent atrial fibrillation was tested in the ERATO trial [53]. Possessing both rate- and rhythm-control properties, dronedarone proved safe and effective in preventing AF recurrence in patients with persistent AF in the DAFNE (Dronedarone Atrial Fibrillation Study after Electrical Cardioversion) trial, the first prospective randomized trial to evaluate its efficacy and safety [54]. In DIONYSOS (Efficacy and Safety of Dronedarone versus Amiodarone for the Maintenance of Sinus Rhythm in Patients with Persistent Atrial Fibrillation) [55], a short-term, randomized, double-blind, parallel-group study in patients with persistent atrial fibrillation, dronedarone was less efficacious but also less toxic than amiodarone. The composite of AF recurrence or premature study-drug discontinuation during 12 months of follow-up occurred in 75% and 59% of patients treated with dronedarone and amiodarone, respectively (hazard ratio (HR) 1.59; 95% CI 1.28-1.98; P < .0001). AF recurrence was more common in the dronedarone arm than with amiodarone (36.5% versus 24.3%), whereas premature drug discontinuation tended to be less frequent with dronedarone (10.4% versus 13.3%). The safety profile of dronedarone is advantageous in patients without structural heart disease and in stable patients with heart disease. Specifically, dronedarone appears to have a low potential for proarrhythmia, and fewer thyroid, neurologic, dermatologic, and ocular events occurred in the dronedarone group. These data suggest higher tolerability but lower efficacy for dronedarone than for amiodarone [52,55]. The ANDROMEDA (Antiarrhythmic Trial with Dronedarone in Moderate-to-Severe CHF Evaluating Morbidity Decrease) trial, in patients in sinus rhythm with systolic left ventricular dysfunction, was prematurely discontinued because of increased mortality with dronedarone. The deaths in the dronedarone group were due predominantly to heart failure, and there was no evidence of proarrhythmia or an increased incidence of sudden death. Six months after discontinuation, mortality rates were similar, but dronedarone is nevertheless contraindicated in class III and IV heart failure [56].
The ATHENA study (a placebo-controlled, double-blind, parallel-arm trial to assess the efficacy of dronedarone 400 mg b.i.d. for the prevention of cardiovascular hospitalization or death from any cause in patients with atrial fibrillation/atrial flutter) randomized 4628 patients with paroxysmal or persistent AF or flutter and cardiovascular risk factors to treatment with dronedarone or placebo, and showed a substantial reduction of the primary endpoint (all-cause mortality and cardiovascular admissions): 31.9% versus 39.4% on placebo, HR 0.76 (0.69-0.84), driven by a reduction in cardiovascular admissions (29.3% versus 36.9% on placebo; HR 0.74, 0.67-0.82), with a nonsignificant difference in all-cause mortality. Dronedarone improved the composite endpoint of cardiovascular hospitalizations and all-cause mortality in a carefully selected, high-risk, nonpermanent AF population, in addition to its recognized reduction in AF [57]. The rate of cardiovascular mortality was lower in the dronedarone group (2.7% versus 3.9%; HR 0.71; 95% CI 0.51-0.98). The median time to first recurrence of atrial fibrillation or atrial flutter was increased by dronedarone, and the likelihood of permanent atrial fibrillation or atrial flutter was greatly reduced [58]. The ATHENA trial excluded patients with decompensated heart failure within the previous 4 weeks or with NYHA class IV heart failure. There was no evidence of an adverse effect of dronedarone in patient subgroups with a history of congestive heart failure or an LV ejection fraction ≤35% [57]. The major adverse cardiac effects of dronedarone are bradycardia and QT prolongation; torsades de pointes has been reported [57]. Like amiodarone, dronedarone inhibits renal tubular secretion of creatinine, which can increase plasma creatinine levels; however, there is no reduction in glomerular filtration rate. Dronedarone increases digoxin levels 1.7- to 2.5-fold. Dronedarone is predominantly metabolized by the liver (CYP3A4), with a half-life of approximately 19 hours. It should not be administered with strong inhibitors of CYP3A4 (e.g., ketoconazole and macrolide antibiotics), because these may potentiate its effects. It can be administered with verapamil or diltiazem, which are moderate CYP3A4 inhibitors, but low doses of these agents should be used initially and titrated according to response and tolerance [59]. Dronedarone does not alter the international normalized ratio when used with warfarin. The recommended oral dose of dronedarone is 400 mg twice a day with meals; an intravenous form is not available. In maintaining sinus rhythm, amiodarone is more effective than other agents and has limited proarrhythmic potential, but because of its very long half-life and its toxicity profile, with severe extracardiac side effects [60], it should generally be used when other therapies have failed or are contraindicated. In patients with severe heart failure (NYHA class III and IV) or recently unstable NYHA class II (decompensation within the prior month), amiodarone should be the drug of choice [6]. Nonetheless, although amiodarone is widely considered for maintenance of sinus rhythm in atrial fibrillation management, it lacks FDA approval for this indication.

Inhibitors of the Renin-Angiotensin-Aldosterone System
Atrial angiotensin II concentrations increase in atrial fibrillation [61], and stimulation of its receptors activates nicotinamide adenine dinucleotide phosphate (NADPH) oxidase to produce oxidative stress or inflammation [62].
Several studies [63][64][65] suggest a benefit of angiotensin-converting enzyme (ACE) inhibitors or angiotensin II type 1 (AT1) receptor blockers in the prevention of atrial fibrillation, especially in patients with left ventricular hypertrophy or dysfunction. In particular, patients treated with amiodarone plus irbesartan showed a lower rate of recurrence of atrial fibrillation than patients treated with amiodarone alone [65,66]. However, the large placebo-controlled GISSI-AF trial showed that valsartan did not reduce recurrence rates of atrial fibrillation, raising questions about the value of AT1 blockers in secondary prevention [67]. Furthermore, preliminary results of the ACTIVE I trial, including 9016 atrial fibrillation patients during a follow-up of 4.1 years, showed that irbesartan did not prevent cardiovascular events and had no effect on atrial fibrillation burden [68]. Further prospective studies are needed to establish the potential therapeutic value of ACE inhibitors and AT1 blockers in the prevention of atrial fibrillation and to define the populations of patients that benefit. Aldosterone exerts many cardiac effects. In small animals, it causes atrial fibrosis, and spironolactone prevents this fibrosis [69]. Selective aldosterone receptor blockade also suppresses atrial fibrillation in animal models of heart failure [70]. Plasma aldosterone concentrations increase in patients with atrial fibrillation [71], and atrial expression of the aldosterone receptor is higher in these patients than in those without the disorder [72]. Furthermore, patients with primary hyperaldosteronism have a 12-fold greater risk of atrial fibrillation than controls matched for blood pressure [73]. Hence, blockade of aldosterone receptors could be a therapeutic option for patients with atrial fibrillation, but data from trials are not available.

Anti-Inflammatory Agents
Glucocorticoids have powerful anti-inflammatory properties and have shown efficacy against atrial fibrillation in animal [74] and clinical [75] studies, although their potential toxicity restricts their value in this disorder. Both statins and omega-3 fatty acids have anti-inflammatory and antioxidant actions. Statins are effective against several substrates that maintain atrial fibrillation [76,77] and are of benefit in the prevention of atrial fibrillation, especially postoperative AF [78]. Epidemiological data about the effects of omega-3 fatty acids on the occurrence of AF are conflicting [79]. Animal studies [80] show model-dependent atrial-fibrillation-preventing effects, suggesting that omega-3 fatty acids may prevent atrial fibrillation, especially in patients at risk of fibrotic structural remodeling. Peroxisome proliferator-activated receptor γ activators, such as pioglitazone, might suppress adverse cardiac remodeling and susceptibility to atrial fibrillation [81], but they can also cause salt retention and might predispose to the development of congestive heart failure.

Pharmacological Rate Control
Commonly, beta-blockers, nondihydropyridine calcium channel blockers, and digitalis are appropriate for most patients with persistent or permanent atrial fibrillation for whom control of the ventricular rate is desired. For most patients, a target heart rate of 60 to 80 beats per minute at rest and 90 to 115 beats per minute during moderate exercise is appropriate.
For the AFFIRM study, adequate control was defined as an average heart rate of up to 80 bpm at rest and either an average rate of up to 100 bpm over at least 18 hours of ambulatory Holter monitoring, with no rate greater than 100% of the maximum age-adjusted predicted exercise heart rate, or a maximum heart rate of 110 bpm during a 6-minute walk test [37]. The selection of appropriate rate-control therapy for each patient should include consideration of the drug's potential impact on comorbid conditions such as hypertension, ischemic heart disease, and hypertrophic cardiomyopathy [36]. Generally, beta-blockers and nondihydropyridine calcium channel blockers are well tolerated; however, they are not always effective at controlling heart rate [82]. Beta-blockers may be especially useful in the presence of high adrenergic tone or symptomatic myocardial ischemia occurring in association with AF, but they should be used with caution in patients with asthma [6,59]. Amiodarone, usually initiated for rhythm control, may continue to be used inadvertently for rate control when patients have lapsed into permanent AF [6,13]. Long-term use of amiodarone may result in end-organ toxicity (pulmonary, hepatic, thyroid, neurologic, and skin) [83]. Digitalis glycosides are effective for control of heart rate at rest but not during exercise. Patients should be monitored for signs of digoxin toxicity, especially those with reduced renal function, advanced age, acute or chronic hypoxia, or thyroid disease [6,84]. Nondihydropyridine calcium channel antagonists should be avoided in patients with systolic heart failure because of their negative inotropic effect [6]. According to the latest ESC guidelines, dronedarone is a first-line drug for rhythm control in patients with AF, but it is not currently approved for pharmacological rate control in permanent AF [6].

Antithrombotic Management
Unless contraindicated, chronic oral anticoagulation therapy (OAC) is recommended in patients with a CHADS2 (cardiac failure, hypertension, age, diabetes, and stroke (doubled)) score of ≥2 [6,85], to achieve an international normalized ratio (INR) between 2.0 and 3.0. In patients with a CHADS2 score of 0-1, or where a more detailed stroke risk assessment is indicated, the latest guidelines recommend the use of the CHA2DS2-VASc (congestive heart failure, hypertension, age ≥75 (doubled), diabetes, stroke (doubled), vascular disease, age 65-74, and sex category (female)) score [86]. Indeed, in patients with a CHA2DS2-VASc score ≥2, OAC is recommended, whereas with CHA2DS2-VASc = 1 either OAC or aspirin 75-325 mg daily can be chosen, although OAC should be preferred. In the case of CHA2DS2-VASc = 0, no antithrombotic therapy is preferred, although aspirin 75-325 mg daily can be administered depending on the physician's choice. Moreover, the new AF guidelines emphasize the importance of bleeding risk assessment before starting anticoagulation. For this, the HAS-BLED bleeding risk score [87] is recommended (hypertension, abnormal renal and liver function, stroke, bleeding, labile INRs, elderly (>65 years), drugs, or alcohol concomitantly). A score of ≥3 is considered indicative of "high-risk" patients who require caution and regular review after starting antithrombotic therapy [6]. Multiple studies have demonstrated that oral anticoagulation with warfarin is effective for prevention of thromboembolism in AF patients [88][89][90][91][92][93], but it is underused because of the risk of bleeding [94].
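The CHA2DS2-VASc components listed above translate directly into a small scoring routine; the sketch below uses the standard point values implied by the acronym (age ≥75 and prior stroke doubled) and the ≥2 / =1 / =0 treatment bands summarized in the text.

```python
# CHA2DS2-VASc stroke-risk score, built from the components listed above.

def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_or_tia,
                 vascular_disease, female):
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if diabetes else 0
    score += 2 if stroke_or_tia else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    return score

# Example: 72-year-old woman with hypertension and no other risk factors.
s = cha2ds2_vasc(chf=False, hypertension=True, age=72, diabetes=False,
                 stroke_or_tia=False, vascular_disease=False, female=True)
print(f"CHA2DS2-VASc = {s}")   # 3 -> oral anticoagulation recommended per the bands above
```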
Dabigatran etexilate is a new, effective, reversible, rapid-acting, oral direct inhibitor of thrombin [95]. Dabigatran has been shown to be at least as safe and effective as warfarin therapy in RE-LY (Randomized Evaluation of Long-Term Anticoagulant Therapy), a large-scale, international, multicenter trial [96]. Where oral anticoagulation is an appropriate therapy, dabigatran may be considered as an alternative to adjusted-dose vitamin K antagonist (VKA) therapy. If a patient is at low risk of bleeding (e.g., HAS-BLED score of 0-2), dabigatran 150 mg b.i.d. may be considered, in view of its improved efficacy in the prevention of stroke and systemic embolism, with lower rates of intracranial haemorrhage and similar rates of major bleeding events compared with warfarin. When a patient has a measurable risk of bleeding (e.g., HAS-BLED score of ≥3), dabigatran etexilate 110 mg b.i.d. may be considered, in view of a similar efficacy in the prevention of stroke and systemic embolism but lower rates of intracranial hemorrhage and of major bleeding compared with VKA. In patients with a CHA2DS2-VASc score ≥2, dabigatran 110 mg b.i.d. may be considered, in view of efficacy similar to VKA in the prevention of stroke and systemic embolism but lower rates of intracranial haemorrhage and major bleeding compared with VKA and aspirin [6]. In patients with no stroke risk factors (e.g., CHA2DS2-VASc = 0), either aspirin 75-325 mg daily or no antithrombotic therapy is recommended. Where possible, no antithrombotic therapy should be considered for such patients, given the limited data on the benefits of aspirin in this patient group (i.e., lone AF) and the potential for adverse effects, especially bleeding [6]. The RE-LY trial [75] was reviewed by the 2011 Focused Update Writing Group [7], but recommendations about its use are not included in that focused update, because dabigatran had not been approved for clinical use by the FDA at the time of organizational approval.

Conclusions
Although effective therapies have been identified in specific cases, a treatment modality offering efficacy in most cases of AF remains to be established. It is essential to gain further insight into the pathophysiological mechanisms of AF in order to develop drugs with improved efficacy and safety profiles for the treatment of this widespread cardiac arrhythmia.
Low-Cost SCADA System Using Arduino and Reliance SCADA for a Stand-Alone Photovoltaic System

SCADA (supervisory control and data acquisition) systems are currently employed in many applications, such as home automation, greenhouse automation, and hybrid power systems. Commercial SCADA systems are costly to set up and maintain; therefore, they are not used for small renewable energy systems. This paper demonstrates applying Reliance SCADA and an Arduino Uno to a small photovoltaic (PV) power system to monitor the PV current, voltage, and battery, as well as the efficiency. The designed system uses low-cost sensors, an Arduino Uno microcontroller, and free Reliance SCADA software. The Arduino Uno microcontroller collects data from the sensors and communicates with a computer through a USB cable. The Uno has been programmed to transmit data to Reliance SCADA on the PC. In addition, a Modbus library has been uploaded to the Arduino to allow communication between the Arduino and our SCADA system using the MODBUS RTU protocol. The results of the experiments demonstrate that the SCADA system works in real time and can be effectively used in monitoring a solar energy system.

Introduction
For several hundred years, fossil fuels have been consumed as the main source of energy on Earth. As a result, they are now experiencing rapid depletion. Researchers and scientists who understand the importance of renewable energy have dedicated their efforts to the research, expansion, and deployment of new energy sources to replace fossil fuels. Photovoltaics (PV) are an important renewable energy source. Also called solar cells, PV devices are electronic devices that can convert sunlight directly into electricity. The modern forms of PV were developed at Bell Telephone Laboratories in 1954 [1]. Despite their promising performance, PV systems have some limitations, such as depending on factors like longitude, latitude, and weather, and being limited to daytime hours to generate power [2]. A SCADA system is software installed at one or more sites to monitor and control processes remotely, a function also referred to as telemetry [3,4]. SCADA can monitor real-time electrical measurements of solar modules and batteries and can collect data from wind turbines, such as the condition of the gearbox, blades, and electric system [5,6]. Moreover, sun-tracker systems have also used SCADA to observe the solar insolation and the movement of the sun [6]. Commercial monitoring systems for installations such as photovoltaic systems are widely available these days, but they are quite expensive. For example, SMA, a German company founded in 1981, offers many products, some of them related to monitoring and control, such as Sunny View, which displays all system data clearly; however, the major problem is that the device is costly, at about CA$793 [7,8]. A previous study presented a data acquisition and visualization system with storage in the cloud, applied to a photovoltaic system; the design was based on an embedded computer connected to the PV inverters using the RS485 standard, with a microcontroller reading the climate sensors and a web system displaying the data [9].
Another study presented a low-cost monitoring system [10] that determines losses in energy production; it is based on multiple low-cost wireless sensors, using voltage, current, irradiation, and temperature sensors installed on the PV modules. In this paper, the designed SCADA system is much cheaper than commercial SCADA systems while delivering the same performance. To test this work, the SCADA system is employed for monitoring, in real time, the parameters of a solar energy (photovoltaic) system consisting of a solar module, MPPT, and batteries. The parameters are the current and voltage of the photovoltaic (PV) system and the current and voltage of the battery. Data acquisition is performed by an Arduino controller and sensors. All data are sent to a PC and shown on a user interface designed in Reliance SCADA; the data are also saved on the computer as an Excel file. This allows users and operators to monitor the parameters of the PV system in real time. The components of the SCADA system in this paper consist of two parts: hardware and software.

Hardware Design
The proposed Reliance SCADA system is designed to monitor the parameters of a small PV system. It is installed at the Department of Electrical Engineering, Memorial University, St. John's, Canada. Figure 1 shows 12 solar panels, each rated up to 130 watts and 7.6 amps. Two solar modules are connected in parallel, so the system shown in Figure 1 consists of 6 sets of 260 watts each. The Reliance SCADA system was designed to be of low cost and can be expanded or modified without the need for major hardware changes in the future. The basic elements of the design are an Arduino Uno controller and sensors, as shown in Figure 2.

Arduino Uno Microcontroller. The Arduino Uno is open-source hardware that is relatively easy to use; Figure 3 shows the Arduino Uno, and Table 1 shows the specifications of the board. The license gives permission to anyone to improve, build, or expand Arduino. The original Arduino and its development environment were created in 2005 in Italy at the Smart Project Company. It has 14 digital input/output pins, 6 of which can be used as analog inputs/outputs [11].

Current Sensor. Current sensors for DC currents must be able to measure a range of currents for the PV and batteries between 0 A and 20 A. In this work, an ACS712 sensor is used for sensing the current. It is designed to be easily used with any microcontroller, such as the Arduino, and is based on the Allegro ACS712ELC chip. The full-scale value of the ACS712 used in this design is 20 amps, which is appropriate for sensing the current. Two current sensors are installed. Figure 5 demonstrates how the current sensor is connected in an electrical circuit with the Arduino Uno.

Voltage Sensor. The voltage sensor is a small one: in this work, it is a 25 V sensor with two resistors of 30 kΩ and 7.3 kΩ. The maximum voltage of either the PV or the battery is 25 V, so this sensor is appropriate. The output of the voltage sensor is between 0 V and 5 V, a scale suitable for the Arduino analog inputs. In this experiment, we need two voltage sensors: one is installed before the MPPT to measure the PV voltage and the other is installed after the MPPT to measure the battery voltage.

Hardware Setup
Figure 6 shows the hardware setup designed for the SCADA system.

Arduino IDE. The IDE is open-source software that features easy-to-write code that can be uploaded to any board. In this work, we needed to upload a new library to the IDE to configure communication between the Arduino Uno and the SCADA software over the MODBUS RTU protocol. Figure 7 shows how the system works and also shows the code that has been burned onto the Arduino Uno.
(B) Code. The code has two main functions: setup(), which is called once when the sketch starts, and loop(), which is called over and over and is the heart of the sketch. The most important elements of the code are the library calls mentioned initially: the regBank.setId(), regBank.add(), and regBank.set() commands. The purpose of the library is to connect the Arduino Uno to the Reliance SCADA software using the MODBUS RTU protocol. regBank.setId() is used to define the MODBUS node as a slave, and the regBank.add() command is used to define the addresses of the registers used to send data to Reliance SCADA on the computer; in this work, the addresses were from 30001 to 30005, as listed in Table 2, and each pass of loop() ends with a call to slave.run().

4.2. Reliance SCADA. Reliance software is employed in numerous technologies for monitoring and controlling systems. It can also be used for connecting to a smartphone or the web. Reliance is used in many colleges and universities around the world for education or scientific research purposes [12]. Figure 8 shows a user interface designed with the Reliance SCADA software to monitor the parameters of the photovoltaic system. The user interface has four real-time trends and four display icons to show values as digital numbers. In addition, it has two buttons and a container. These features are discussed in Results and Discussion.

Table 2: Allocation of MODBUS addresses for MODBUS RTU.
Number | Variable name | MODBUS RTU address
(1) | Voltage of photovoltaic system | 0
(2) | Current of photovoltaic system | 1
(3) | Voltage of battery | 2
(4) | Current of battery | 3
(5) | Efficiency of MPPT | 4

Communication System
The MODBUS library is added to the Arduino Uno to allow communication with Reliance SCADA via a USB cable using the MODBUS RTU protocol. Table 2 shows the allocation of MODBUS addresses for MODBUS RTU on the Reliance SCADA software, with the corresponding MODBUS addresses for the Arduino Uno defined in the Arduino code.

Cost of the SCADA System
Most factories that use several systems are looking for a low-cost SCADA system to monitor and control their systems remotely. In this paper, the components used are quite cheap. Table 3 shows the price (in CA dollars) of all the components, according to the amazon.ca website. According to Table 3, the whole price of the SCADA system was CA$82. This is a low price for a SCADA system designed to monitor the parameters of our system.

Results and Discussion
In this work, the proposed SCADA system monitors a solar energy system, and several experiments were carried out. The experiments cover the measurement error of the sensor systems installed to measure PV current and voltage, battery current and voltage, and MPPT efficiency, as well as the SCADA features. The sensors that are used contain errors, so these errors were calculated against calibrated instruments, as listed in Table 4. As can be seen in Table 4, the measurement error of the current sensors was the highest: the error percentages of the PV current sensor and the battery current sensor are about 3.42% and 3.10%, respectively. The error percentages of both voltage sensors were quite low, so their readings were closer to those of the calibrated instrument. The monitoring tasks are displayed on the PC. They include the PV parameters, as graphs and digital numbers, and the MPPT efficiency as a digital number. Figure 9 shows the user interface of SCADA after the system was operational.
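To make the register map in Table 2 concrete, the following host-side sketch shows one way the raw integers received over MODBUS RTU could be converted back into physical units on the PC. The scale factors are assumptions for illustration only: the 20 A ACS712 is commonly quoted at about 100 mV/A around a 2.5 V zero-current offset, and the 25 V voltage-sensor module divides its input by roughly 5; the paper itself does not state the conversion constants or how values are encoded in the registers.

```python
# Illustrative decoding of the Table 2 registers into physical units on the PC side.
# Assumes each register carries the raw 10-bit Arduino ADC reading (0-1023 over 0-5 V);
# the actual encoding used by the firmware is not specified in the paper.

ADC_FULL_SCALE = 1023.0
ADC_REF_VOLTS = 5.0
DIVIDER_RATIO = 5.0          # assumed for the 25 V voltage-sensor module (divider ratio ~5)
ACS712_SENS_V_PER_A = 0.100  # typical sensitivity of the 20 A ACS712 variant
ACS712_OFFSET_V = 2.5        # assumed zero-current output at a 5 V supply

def adc_to_volts(raw):
    return raw / ADC_FULL_SCALE * ADC_REF_VOLTS

def decode_voltage(raw):                       # registers 0 and 2: PV and battery voltage
    return adc_to_volts(raw) * DIVIDER_RATIO

def decode_current(raw):                       # registers 1 and 3: PV and battery current
    return (adc_to_volts(raw) - ACS712_OFFSET_V) / ACS712_SENS_V_PER_A

# Example with made-up register readings:
pv_v, pv_i = decode_voltage(750), decode_current(600)
batt_v, batt_i = decode_voltage(560), decode_current(580)
print(f"PV {pv_v:.1f} V / {pv_i:.2f} A, battery {batt_v:.1f} V / {batt_i:.2f} A")
```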
The SCADA system is designed to update every minute. As shown in Figure 9, there are four plots: two of them observe the PV voltage and current, and the other two monitor the battery voltage and current. The figure also shows that the SCADA system updates every minute. The user interface of SCADA shows five icons displaying the values of the parameters as digital numbers, and these also update automatically every minute. Our SCADA system has the feature of enabling all data to be easily saved on a computer as an Excel file. To save the data, the user just has to hit the Export-Data icon and then hit the Save-Data icon. These icons are programmed by a script to save the data on the PC as an Excel file. Figure 10 shows a screenshot of data saved in Excel. The user interface also has a container that shows connection details: the Arduino connects with SCADA, and a warning is given if there is any error in the connection. The efficiency of the MPPT was also monitored. It represents the output power of the MPPT over the input power to the MPPT. Figure 11 presents the MPPT efficiency over various periods of time, with the efficiency ranging between 0.8 and 1.

Conclusion
In this paper, a low-cost SCADA system was designed and built with Reliance SCADA software and an Arduino Uno. The SCADA system was applied to a stand-alone photovoltaic system to monitor the current and voltage of the PV modules and batteries. The results of the experiments demonstrate that the SCADA system works in real time and can be effectively used in monitoring a solar energy system. The developed system costs less than $100 and can be modified easily for a different PV system.

Figure captions: Figure 1: Solar panels on the roof of the engineering building. Figure 5: Connection drawing of the current sensor. Figure 6: Hardware setup of the SCADA system. Figure 9: User interface of SCADA while running. Table captions: Table 1: Specifications of the Arduino board. Table 2: Allocation of MODBUS addresses for MODBUS RTU. Table 3: Prices of the components of the SCADA system. Table 4: Measurement errors of the sensor system.
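Since the efficiency reported above is simply the ratio of the MPPT output power to its input power, the sensor errors listed in Table 4 can be propagated through that ratio to estimate the uncertainty of the efficiency values. The operating point below is an assumed example, and the 1% figure used for the voltage sensors is an assumption standing in for the "quite low" voltage errors that are not quoted numerically.

```python
import math

# MPPT efficiency = (battery-side power) / (PV-side power), per the definition above.
def mppt_efficiency(v_pv, i_pv, v_batt, i_batt):
    return (v_batt * i_batt) / (v_pv * i_pv)

# Assumed example operating point (not measurements from the paper).
eta = mppt_efficiency(v_pv=18.0, i_pv=4.0, v_batt=13.5, i_batt=4.8)

# Relative measurement errors: current sensors from Table 4, voltage sensors assumed ~1%.
rel_errors = [0.0342, 0.0310, 0.01, 0.01]
rel_err_eta = math.sqrt(sum(e ** 2 for e in rel_errors))   # first-order propagation for a ratio of products

print(f"efficiency ~ {eta:.2f} +/- {rel_err_eta * 100:.1f}% (relative)")   # ~0.90 +/- 4.8%
```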
GABA Withdrawal Modifies Network Activity in Cultured Hippocampal Neurons

Dissociated hippocampal neurons, grown in culture for 2 to 3 weeks, tended to fire bursts of synaptic currents at fairly regular intervals, representing network activity. A brief exposure of cultured neurons to GABA caused a total suppression of the spontaneous network activity. Following a washout of GABA, the activity was no longer clustered in bursts and instead the cells fired at a high rate in a tonic manner. The effect of removing GABA could be seen as long as 1 to 2 days after GABA withdrawal and is expressed as an increase in the number of active cells in a network, as well as in their firing rates. Such striking effects of GABA removal may underlie part of the GABA withdrawal syndrome seen elsewhere.

INTRODUCTION
Exposure of neurons to GABA, the main inhibitory neurotransmitter in the brain, activates in these neurons a chloride conductance that causes a profound inhibition in the affected neurons for as long as GABA is present. Removal of chronically perfused GABA from intact brain tissue causes a characteristic increase in the excitability of the exposed neurons, to the extent that they may undergo severe epileptic seizures. This phenomenon, called the "GABA withdrawal syndrome" (GWS), has been studied extensively by colleagues (Garcia Ugalde et al., 1992; Silva-Barrat et al., 1992; Brailowsky & Garcia, 1999). GWS is assumed to be caused by a downregulation of the GABA receptor, such that GABA no longer inhibits cells, thus resulting in their inability to prevent seizures. GABA withdrawal syndrome has been assumed to activate mechanisms that are related to long-term memory, in that GWS can be expressed long after the removal of GABA from the tissue. GABA withdrawal syndrome has been associated with a number of morphological and biochemical changes in affected tissue (Arenda et al., 1994). The molecular mechanisms underlying GWS are largely unknown, partly because the syndrome has been produced only in vivo, where only a small volume of nervous tissue can be chronically exposed to GABA, and the analysis of its cellular and molecular mechanisms is rather limited. Tissue culture provides a simple in-vitro test system that is easily accessible to the biophysical and biochemical analysis of the structure and function of central neurons, and in which the GABAergic synapse has been studied extensively (Segal & Barker, 1984a; 1984b). We explored the possibility of expressing GWS in cultured neurons and wish to report that even short exposures to GABA produce long-lasting changes in the activity patterns of small networks of central neurons maintained in culture.

Culture of hippocampal neurons
Hippocampal cultures were prepared as described in Papa et al. (1995). Briefly, 19-day-old embryos were taken from anesthetized Wistar rats. The brains were removed and placed in ice-cold medium. The hippocampus was dissected out and mechanically dissociated by gentle trituration, using a Pasteur pipette.

Fig. 1 caption: The multi-electrode recording system. Left, a low-power view of the arrangement of the multi-electrode surface. The distance between the electrode tips is 200 μm, and the size of each electrode tip is 20 μm. Right, a higher-power view of a cell growing on an electrode tip, partially hidden by the electrode. The density of cell plating is such that an electrode is not likely to detect activity of more than one cell, although occasionally the activity of two cells is recorded simultaneously from an electrode; in such cases, the separation between them is quite clear (see below).
Dissociated cells (800,000 cells/mL, as determined in a counting chamber) were plated onto 12-mm glass coverslips or onto multi-electrode arrays (MEA; 60 electrodes, spaced 200 μm apart, each 20 μm in diameter; Egert et al., 1998; see Fig. 1). Most of the medium-size cells (15 to 25 μm) were believed to be pyramidal cells. Within the first week in culture, the cells may migrate, but they are usually stable thereafter.

Electrophysiological recording
The effect of GABA application and withdrawal on neuronal activity was examined using intracellular, whole-cell patch recording and extracellular recording from a multi-electrode array.

Whole cell recording
The cultures were transferred to the recording chamber on an inverted Nikon microscope. The culture was perfused with a medium containing 130 mM NaCl, 5 mM KCl, 2 mM CaCl2, 1 mM MgCl2, and 30 mM glucose. The pH was buffered to 7.4 with 25 mM Hepes, and the osmolarity was adjusted with sucrose to 320 mOsm. Patch pipettes contained 140 mM K-gluconate, 2 mM Mg-ATP, 0.2 mM EGTA, 10 mM Na-phosphocreatine, 5 mM NaCl, and 0.3 mM GTP. The pH was buffered to 7.2 with 10 mM Hepes, and the osmolarity was adjusted to 290 mOsm. QX-314 (5 mM) was added to block action potential discharges. Conventional whole-cell patch recording was conducted with a 1.5 mm OD glass pipette having a 2 μm tip and a resistance of a few MΩ.

Extracellular signals from the multi-electrode arrays were acquired through a 1060 MEA amplifier (Multi Channel Systems, Reutlingen, Germany). The culture chamber was equipped with a temperature controller to maintain a stable temperature of 37°C. The culture density was low, so as to obtain spike activity at each electrode from a single neuron. Neuronal activity was sampled over periods of 20 min using the Alpha-MAP acquisition program (Alpha Omega, Nazareth, Israel) and was analyzed off-line to determine the spike properties, the burst activity, and the dynamics of the network.

Spike and burst analysis
Analysis of the extracellular signals was made off-line; the mean value (μ) and standard deviation (σ) of the background noise were estimated for each channel. Signals whose amplitude was greater than μ + 5σ were identified as spikes. Each spike was defined by its amplitude and time of appearance for further analysis. The stability of firing frequency was verified by examining the firing over the sample duration; only cells with stable firing rates were analyzed. Action potentials of the same shape and size could be recorded over periods of up to a week. Low-pass digital filtering was used to define bursts of spikes. The analysis included the evaluation of interspike intervals, interburst intervals, and averaged amplitudes. In preliminary experiments, we examined cross-correlations between active channels.
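The spike and burst criteria described above translate into a few lines of code; the sketch below applies the μ + 5σ amplitude threshold per channel and then groups spikes into bursts using an interspike-interval criterion. The 100 ms gap used to close a burst is an assumed illustrative value, since the text specifies low-pass filtering rather than a particular interval threshold.

```python
import numpy as np

def detect_spikes(signal, sample_rate_hz):
    """Spike times (s): samples where the deviation from the channel mean exceeds 5 SD of the noise."""
    mu, sigma = signal.mean(), signal.std()
    above = np.abs(signal - mu) > 5.0 * sigma                  # the mu + 5*sigma criterion from the text
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1)   # rising edges: one event per crossing
    return onsets / sample_rate_hz

def group_bursts(spike_times, max_gap_s=0.1):
    """Group spike times into bursts; a gap longer than max_gap_s (assumed 100 ms) closes a burst."""
    bursts, current = [], []
    for t in spike_times:
        if current and t - current[-1] > max_gap_s:
            bursts.append(current)
            current = []
        current.append(t)
    if current:
        bursts.append(current)
    return bursts

# Example with synthetic data: Gaussian noise plus a few injected spikes.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 20000)
trace[[2000, 2050, 2100, 15000]] += 10.0
spikes = detect_spikes(trace, sample_rate_hz=10000)
print(len(spikes), "spikes in", len(group_bursts(spikes)), "bursts")
```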
RESULTS

Single cell recordings

At the age of 2 to 3 wk in culture, patch-clamped neurons expressed spontaneous rhythmic bursting activity (Fig. 2) at the rate of one burst every 2 to 10 sec. Each burst consisted of many EPSCs and IPSCs discharged simultaneously for about 500 msec. The IPSCs could be easily distinguished from the EPSCs by their reversal potential at about −50 mV. Perfusion of 50 μM GABA into the culture produced an immediate and long-lasting cessation of spontaneous synaptic currents and an inward current that was associated with a marked increase in membrane conductance. Within a minute of the onset of GABA perfusion, the inward current sagged but then remained constant thereafter for the entire duration of GABA presence in the dish. Likewise, the conductance change caused by GABA was not reduced, indicating that at least for 10 min of GABA presence, no additional desensitization of the GABA receptor occurred beyond a possible fast one. In all 7 cells tested, each in a different culture dish, washout of GABA from the recording chamber after either 1 or 10 min of exposure to the drug resulted in a rapid return of the membrane current and conductance to pre-drug levels. Recovery was followed by a dramatic increase in EPSC and IPSC discharges in a tonic manner (Fig. 2). The high-frequency, continuous non-bursting behavior lasted about 10 to 20 min and eventually recovered to normal bursting activity over the subsequent 30 min. To test whether the enhanced activity was a result of the continuous lack of rhythmic activity caused by GABA, we exposed three cells to 1 μM tetrodotoxin (TTX), which also caused cessation of action potentials and spontaneous bursting. Following the removal of TTX, bursting activity was restored, and no long-lasting effects of TTX were recorded (data not shown), in sharp contrast to the effects of GABA withdrawal.

Fig. 2 (panels labeled Before, GABA, Wash): Spontaneous activity recorded from a patch-clamped hippocampal neuron growing in culture for 3 weeks. The cell was clamped at −60 mV, and negative voltage commands were applied through the membrane to estimate the input conductance of the cell. Left, a 1-min record; right, an expanded record showing a typical burst of synaptic currents and a preceding current response to a voltage command. The current scale is the same for all traces; the time scale applies only to the right, expanded records. During exposure to 50 μM GABA (middle trace), there is a complete suppression of synaptic currents and a marked increase in membrane conductance. Bottom, following washout of GABA, EPSCs lost their bursting properties, and a continuous barrage of synaptic currents is seen.

Multi-channel recording

To further characterize this unique short-term effect of GABA withdrawal and extend it to hours and days after treatment, we resorted to a similar treatment of hippocampal cultures plated on MEA (Fig. 3). Recording the extracellular activity from the culture enabled us to monitor the behavior of many neurons in a non-invasive manner and to follow changes in their intrinsic firing properties and their interconnections over periods of up to 2 wk in culture (Maeda et al., 1995). At the age of 1 to 2 wk, activity was frequently detected in 5 to 6 of the 60 electrodes. Hippocampal neurons in a 9-day-old culture expressed moderate levels of activity, averaging 0.6 Hz. Most cells discharged high-frequency bursts, interspersed with long periods of quiescence, with a preference for a 2 to 8 sec interburst interval (Fig. 3), in much the same way as that seen with the intracellular recording shown in Fig. 2 above. Exposure to GABA (20 μM) for 3 to 5 min totally suppressed action potential discharges in all of the recorded cells (n=25). Fast washout of GABA resulted, in most of the cells, in a high frequency of spike firing and a complete disappearance of the slow burst-firing pattern. An example of the activity of 4 different cells before the GABA application, within 10 min, and 24 h after the GABA washout is presented in Fig. 4. As seen in this example, all cells showed a 2- to 4-fold increase in spike frequency within the first 10 min after GABA washout. The same trend was seen in the next 10 min (data not shown). All cells fired in a bursting mode a day later. The bursts were now fired at higher frequencies, however.
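The population measures reported in Table 1 below (per-cell firing rates before and after washout, the fold change per cell, the number of active electrodes, and the total spikes per second per culture) amount to simple bookkeeping over the per-channel spike trains. A minimal sketch, assuming hypothetical spike-time arrays and a 0.1 Hz activity criterion chosen only for illustration:

```python
import numpy as np

def rates_hz(spike_times_by_channel, duration_s):
    """Mean firing rate (Hz) per channel over a recording of given duration."""
    return {ch: len(t) / duration_s for ch, t in spike_times_by_channel.items()}

def network_summary(before, after, duration_s=1200.0, active_thresh_hz=0.1):
    """Compare two 20-min samples of the same culture, channel by channel."""
    r0, r1 = rates_hz(before, duration_s), rates_hz(after, duration_s)
    active0 = [ch for ch, r in r0.items() if r >= active_thresh_hz]
    active1 = [ch for ch, r in r1.items() if r >= active_thresh_hz]
    fold = {ch: r1[ch] / r0[ch] for ch in r0 if ch in r1 and r0[ch] > 0}
    return {
        "active_before": len(active0),
        "active_after": len(active1),
        "total_spikes_per_s_before": sum(r0.values()),
        "total_spikes_per_s_after": sum(r1.values()),
        "median_fold_change": float(np.median(list(fold.values()))) if fold else None,
    }

# Hypothetical spike-time arrays (seconds) for three electrodes.
before = {12: np.arange(0, 1200, 2.0), 32: np.arange(0, 1200, 5.0), 58: np.arange(0, 1200, 80.0)}
after = {12: np.arange(0, 1200, 0.8), 32: np.arange(0, 1200, 1.5), 58: np.arange(0, 1200, 10.0)}
print(network_summary(before, after))
```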
High rates of activity and a change in firing frequency are also illustrated in Fig. 5. Table 1 shows that in all experiments, 24 h after a brief exposure to GABA, an increase in spike frequency still occurred, and a remarkable increase in the number of active cells was observed. Twenty-four hours after GABA withdrawal, more than a 2-fold increase in the number of active cells was observed (from 12 to 27 cells), whereas in a control culture of similar age that was exposed only to a change of medium, the number of cells did not change much (from 31 to 36 cells) during the 24-h incubation period. The pronounced increase in the number of active cells, in addition to the increase in spike frequency, resulted in an overall increase in the activity of the whole culture, as expressed by the total number of spikes/sec per culture. Although the cells that were sampled are only a small fraction of the total cell population in the culture, their activity represents the network activity. An average of 3.25 spikes/sec per culture was sampled before GABA and 8.55 spikes/sec 24 h later. Furthermore, the cells not expressing burst activity, as well as those that did, developed a clear burst activity during the hour after GABA withdrawal, as shown in Fig. 5. This burst activity was still observed 24 h and 96 h later. A clear example of this tendency can be seen in cells 32 and 52, shown in Fig. 6. Cell 58 from the same culture fired at less than one spike per min before exposure to GABA (not shown because of the small number of spikes) but then switched to burst activity over the next 24 h.

DISCUSSION

The experiments presented here demonstrate that exposure to GABA can alter the properties of a small hippocampal network that is maintained in culture. This small network, probably consisting of several hundreds of interconnected neurons, produced coordinated activity, expressed as bursts of synaptic currents appearing at fairly regular intervals. This network activity was suppressed by GABA. When the drug was washed out, however, the regular bursting activity was replaced by a high-frequency tonic activity. When recorded extracellularly with the multi-electrode system, the enhanced synaptic activity was expressed as an increase in the total firing rate of the recorded cells and in the recruitment of "dormant" cells to the network. These effects lasted for at least 1 to 2 d after the washout of GABA. Whereas the phenomenon reported here is not identical to the GWS studied in vivo, where the neural tissue is chronically exposed to GABA, the results of GABA withdrawal reported here are qualitatively similar in that a dramatic increase in the tone of the network activity was seen for at least several days. Acute exposure to GABA is not expected to result in a major effect on the properties of the GABA receptor or on its associated chloride channel; indeed, continuous recording of membrane currents during a 1 to 10 min exposure to GABA did not reveal any change in conductance or any reduction beyond the initial sag in the inward current produced by GABA. The initial sag is likely to be caused by a redistribution of chloride ions across the membrane (Segal & Barker, 1984). Thus, it is not likely that a change in GABA receptor properties underlies the change in network activity observed here.
Nor is it likely that the cessation of activity per se, caused by exposure to GABA, underlies the increase in subsequent activity, as this effect was not mimicked by exposure of the culture to tetrodotoxin, a drug producing a similar suppression of activity but without the withdrawal effect. Thus, our in vitro GWS is not likely to represent a sheer overshoot of cells following their suppression but rather to result from a transient reduction in the efficacy of local interneurons in regulating the burst firing of pyramidal neurons. Such a reduction in efficacy may have long-lasting consequences, as seen here and elsewhere (Brailowsky et al., 1995), and may produce a long-lasting facilitation of network activity. The site of action that is most sensitive to GABA exposure is not yet known. Young interneurons in the developing brain have been shown to excite rather than to inhibit follower neurons because in young cells, the chloride reversal potential is more depolarized than the firing threshold (Cherubini et al., 1991). Such is not the case with the cells studied here. Although the use of a patch pipette did not allow the exact determination of the chloride reversal potential in the undisturbed neuron, the results of other studies using cell-attached patch recording (Murphy et al., 1998) indicate that for cells at an age similar to that used in our experiments, from 1 to 3 weeks of culture, GABA is inhibitory. The ability of GABA to evoke an inward current in our cells, as recorded with the patch pipette, indicates that the chloride reversal potential lies about 10 mV above our standard holding potential (−60 mV). Because of their strategic locations and functions, GABAergic interneurons are the targets for extensive modulation in central circuits. Such interneurons are innervated by an array of extrinsic modulatory systems, including serotonin-, acetylcholine-, and noradrenaline-containing terminals (Freund & Buzsaki, 1996), and by local excitatory and inhibitory collaterals. In the hippocampus, where local interneuronal circuits have been studied extensively, subsets of interneurons contain selective calcium-binding proteins and neuropeptides. Interneurons regulate network activity in the hippocampus both via the activation of short-lasting GABA-A receptors and via longer-lasting GABA-B receptors. Thus, the ability to regulate network activity by interacting with a minimal number of GABAergic neurons is much more efficient than an interaction with the more abundant pyramidal neurons (Yanovsky et al., 1997). Our present results indicate that even a short exposure of a network to GABA can have long-lasting consequences for the network activity. Thus, intensive GABAergic activity, caused by drug-mediated release of GABA from terminals, may, upon its termination, cause the system to undergo a period of extensive excitation. This phenomenon may have long-term consequences for the ability of the network to react to excitatory afferents and may produce a withdrawal syndrome, expressed at different levels of neuronal activity.
Future perspectives for a weak mixing angle measurement in coherent elastic neutrino nucleus scattering experiments

After the first measurement of coherent elastic neutrino-nucleus scattering (CENNS) by the COHERENT Collaboration, it is expected that new experiments will confirm the observation. Such measurements will allow us to put stronger constraints on, or discover, new physics, as well as to probe the Standard Model by measuring its parameters. This is the case of the weak mixing angle at low energies, which could be measured with increased precision in future results of CENNS experiments using, for example, reactor antineutrinos. In this work we analyze the physics potential of different proposals for improving our current knowledge of this observable and show that they are very promising.

I. INTRODUCTION

Recent progress in this field is the detection, for the first time, of coherent elastic neutrino-nucleus scattering (CENNS). This reaction was proposed [5] just after the discovery of the weak neutral currents [6] and was recently detected by the COHERENT collaboration [7]. Besides the natural interest in confirming this recent detection, there are different issues that are of current interest in nuclear and neutrino physics. Many new physics scenarios can be probed, as has been proposed in the case of Non-Standard Interactions (NSI) [8][9][10][11], a Z′ gauge boson [12][13][14][15], electromagnetic neutrino properties [16,17], and even the case of a sterile neutrino [18][19][20][21]. Methods alternative to inverse beta decay (IBD) for reactor neutrino detection can also shed light on the so-called reactor neutrino anomaly [22], as we have pointed out in [18]. Reactor neutrinos have a great tradition of discoveries since the first neutrino detection [23], and in the last decades they have played an important role in establishing the three-neutrino oscillation paradigm [1]; IBD has been the golden channel in reactor neutrino detection. However, there are other interesting neutrino reactions that can also be used to probe neutrino fluxes from reactors, as is the case of elastic neutrino-electron scattering (ENES), detected for the first time in the seventies [24] and measured with increased precision by the TEXONO [25] and MUNU [26] Collaborations, and more recently of CENNS, measured at the spallation neutron source by the COHERENT Collaboration [7]. It is expected that in the near future improved measurements of the ENES reaction can be provided by the GEMMA experiment [27]. The expectation for a new measurement of the weak mixing angle in CENNS has already been studied in the past, for example for the case of the TEXONO [17] and CONUS [28] proposals. Here we focus on the case of the CONNIE [29][30][31], MINER [32], and RED100 [33] research programs and reanalyse the TEXONO and CONUS case studies in order to compare them on an equal footing and to contrast the importance of different characteristics of each experiment. In particular, we note here how the sensitivities can depend on the detection targets of the experiments owing to their different proton-to-neutron ratios. The dependence of the CENNS cross section on the weak charge Q_W allows the study of the weak mixing angle at extremely low momentum transfer, a region where an improvement in the accuracy of this parameter is very much needed [34,35], particularly in measurements with neutrino interactions [36].
We will show that, although the sensitivity to the weak charge is relatively small in CENNS, it will be possible to obtain competitive measurements of sin²θ_W in the low-energy regime if the systematic uncertainties are under control. We will also discuss that, besides the importance of high statistics, the proportion of protons to neutrons in a given target plays an important role.

II. CENNS EXPERIMENTS WITH REACTOR ANTINEUTRINOS

Several future proposals plan to measure CENNS with increased statistics, opening the possibility of testing the Standard Model in the ultra-low-energy regime. To study the sensitivity of these proposals to the weak mixing angle, we start by considering the CENNS cross section, given by [37]

dσ/dT = (G_F² M / π) [1 − MT/(2E_ν²)] [Z g_V^p F_Z(q²) + N g_V^n F_N(q²)]².

Here, G_F is the Fermi constant, M is the mass of the nucleus, E_ν is the neutrino energy, and T is the nucleus recoil energy; Z and N are the numbers of protons and neutrons, and F_{Z,N}(q²) are the nuclear form factors, which are especially important at higher momentum transfer, as can be the case for neutrinos coming from spallation neutron sources, while for reactor antineutrinos they have a minimal impact and will be taken equal to one in this work. The neutral-current vector couplings (including radiative corrections) are given by [37]

g_V^p = ρ_νN^NC (1/2 − 2 κ̂_νN ŝ_Z²) + 2 λ^uL + 2 λ^uR + λ^dL + λ^dR,
g_V^n = −(1/2) ρ_νN^NC + λ^uL + λ^uR + 2 λ^dL + 2 λ^dR,

where ρ_νN^NC = 1.0082, ŝ_Z² = sin²θ_W = 0.23129, κ̂_νN = 0.9972, λ^uL = −0.0031, λ^dL = −0.0025, and λ^dR = 2λ^uR = 7.5 × 10⁻⁵ [38]. From the previous expressions for the vector couplings, it is straightforward to note that the dependence on the weak mixing angle appears only in the proton coupling and, therefore, nuclei with a larger proton-to-neutron ratio could be more sensitive to this measurement. On the negative side, we can also notice that this contribution is small in comparison with the neutron one. Despite this, a high-statistics CENNS experiment will be sensitive to this coupling and, therefore, the weak mixing angle can be measured with a precision similar to that of current measurements in this low-energy regime.

Currently, most of the proposals are working with a relatively small amount of material and considering upgrades in the near future. In what follows, we will consider the optimistic case of the upgraded, high-statistics detectors, which are the ones that have the possibility of making an accurate measurement. For estimating the number of events expected in the Standard Model (N_SM) in the detector, we use the expression

N_SM = t φ_0 (M_detector / M) ∫ dE_ν λ(E_ν) ∫_{T_thres}^{T_max(E_ν)} (dσ/dT) dT,

where M_detector is the mass of the detector under study, φ_0 is the total neutrino flux, t is the data-taking time period, λ(E_ν) is the neutrino spectrum, E_ν is the neutrino energy, and T is the nucleus recoil energy. The maximum recoil energy is related to the neutrino energy and the nucleus mass through the relation T_max(E_ν) = 2E_ν²/(M + 2E_ν). In our analysis, in order to forecast the sensitivity of the CENNS experiments, we will use two different approaches: we will perform a χ² analysis of each proposal, considering that the future experiment will measure the number of events predicted by the Standard Model. To compute these values we will use the predicted value of the weak mixing angle at zero momentum transfer (sin²θ_W = 0.2386). With this value as the test experimental value, we will perform a fit considering different values of the systematic uncertainties, plus the extreme benchmark case of a statistical error only.
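As a rough numerical cross-check of the two expressions above, the differential cross section can be integrated over the recoil energy and over a reactor-like antineutrino spectrum. The sketch below is a toy evaluation only: the exponential stand-in for λ(E_ν), the flux, the detector mass, the threshold, and the rounded coupling values are placeholder assumptions, not the inputs used in this work.

```python
import numpy as np

GF = 1.1663787e-5          # Fermi constant in GeV^-2
HBARC2 = 0.389379e-27      # conversion factor GeV^-2 -> cm^2
G_VP, G_VN = 0.03, -0.512  # approximate SM proton/neutron vector couplings

def dsigma_dT(T, Enu, M, Z, N):
    """CENNS differential cross section in cm^2/GeV, with form factors set to one."""
    Tmax = 2.0 * Enu**2 / (M + 2.0 * Enu)
    if T > Tmax:
        return 0.0
    Qv = Z * G_VP + N * G_VN
    return GF**2 * M / np.pi * (1.0 - M * T / (2.0 * Enu**2)) * Qv**2 * HBARC2

def expected_events(mass_kg, M_GeV, Z, N, flux, time_s, T_thr, spectrum, Emax=0.008):
    """Toy version of the event-rate integral: nuclei x flux x spectrum x cross section."""
    n_nuclei = mass_kg * 1.0e3 * 6.022e23 / (M_GeV / 0.9315)  # detector mass / nuclear mass
    Enu = np.linspace(5e-4, Emax, 400)                         # antineutrino energies in GeV
    weights = spectrum(Enu) * np.gradient(Enu)                 # lambda(E) dE
    total = 0.0
    for E, w in zip(Enu, weights):
        Tmax = 2.0 * E**2 / (M_GeV + 2.0 * E)
        if Tmax <= T_thr:
            continue                                           # below threshold, no signal
        T = np.linspace(T_thr, Tmax, 200)
        total += w * np.trapz([dsigma_dT(t, E, M_GeV, Z, N) for t in T], T)
    return n_nuclei * flux * time_s * total

# Placeholder inputs: 100 kg germanium-like target, 1e13 nu/cm^2/s, one year, 100 eV threshold.
toy_lambda = lambda E: np.exp(-E / 0.0015) / 0.0015            # crude stand-in for the spectrum
n_sm = expected_events(100.0, 67.6, 32, 40, 1e13, 3.15e7, 1e-7, toy_lambda)
print(f"toy expected events per year: {n_sm:.3e}")
```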
A second approach, also used in the present article, will be the computation of the χ² function considering the predicted statistical error and the systematics coming from the reactor neutrino spectrum [39]; this method has been used previously for the case of ENES experiments [36]. For the reactor neutrino spectrum we will use the expansion discussed in Ref. [22], while for energies below 2 MeV the computations reported in Ref. [40] were considered. In each case we assumed one year of data taking as a benchmark. As already mentioned above, in our first approach we will consider an analysis based on the function

χ² = [N_SM − N_th(sin²θ_W)]² / (σ_stat² + σ_syst²),   (4)

where the theoretical prediction for the number of events, N_th, depends on the value of the weak mixing angle, σ_stat = √N_SM is the statistical error, and we will consider different values of the future systematic error, σ_syst = p N_th/100, where p is the percentage of systematic uncertainty. For our second approach, we will consider the current level of uncertainty in the reactor antineutrino spectrum as an input.

We have computed the expected number of events taking into account the experimental details of each proposal, summarized in Table I. For the RED100 proposal [33] we consider a 100 kg target of Xe, a material that is currently of great interest for coherent scattering [41] and that has reached a low energy threshold in different tests [42]. A 500 eV threshold is expected in the case of the RED100 experiment. New analyses in this direction are encouraging, and it is expected that the detector will perform even better [43]; however, for our analysis we will restrict ourselves to this more conservative estimate. The RED100 experiment will be located at the Kalinin power plant. In the case of CONNIE, we consider the most optimistic case of a 1 kg Si detector, with a 28 eV threshold, located at 30 m from the Angra-2 reactor. As for the MINER proposal, we perform our computations considering a detector made of 72Ge and 28Si. The proportion between these two materials is 2:1, and the threshold energy is expected to reach 10 eV. The antineutrino source in this case will be a non-commercial TRIGA-type pool reactor that delivers mainly 235U antineutrinos [44]. We will consider an event rate of 5 kg⁻¹ day⁻¹ [32] and, as in the case of all other proposals, one year of data taking. For the case of TEXONO, we have considered their proposed high-purity germanium detectors as a target, with a threshold energy T_thres ∼ 100 eV [45,46], exposed to an antineutrino flux coming from the Kuo-Sheng nuclear power plant. Finally, in the case of the CONUS proposal we follow [28], where a detector of up to 100 kg of germanium is considered, with a recoil energy threshold as low as 100 eV.

III. WEAK MIXING ANGLE SENSITIVITY

With the information given above, we have computed the expected sensitivity to the weak mixing angle, sin²θ_W. We have assumed that the future experimental setups will measure exactly the Standard Model prediction and computed the corresponding fit as mentioned in Eq. (4) for three different cases: (i) when the experiment is capable of an optimal efficiency (100%), (ii) when it reaches an efficiency of 50%, and (iii) when we include the current systematic uncertainty corresponding to the theoretical antineutrino flux, with a statistical error corresponding to a 100% efficiency. We can see the results of this analysis in Fig. (1), where we show the cases of the CONNIE [29][30][31], MINER [32], and RED100 [33] proposals.
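The one-parameter fit of Eq. (4) can be turned into a sensitivity estimate by scanning sin²θ_W, rescaling the predicted event number through the weak charge, and reading off the Δχ² = 1 interval for a given systematic uncertainty p. The sketch below assumes a fiducial event count and a germanium-like target as placeholders; because the form factors are set to one, the squared weak charge factors out of the rate integral exactly, which is the shortcut used here.

```python
import numpy as np

# Radiative-correction parameters quoted in the text.
RHO, KAPPA = 1.0082, 0.9972
L_UL, L_DL, L_DR = -0.0031, -0.0025, 7.5e-5
L_UR = L_DR / 2.0

def g_vp(s2):
    """Proton vector coupling as a function of sin^2(theta_W)."""
    return RHO * (0.5 - 2.0 * KAPPA * s2) + 2.0 * L_UL + 2.0 * L_UR + L_DL + L_DR

G_VN = -0.5 * RHO + L_UL + L_UR + 2.0 * L_DL + 2.0 * L_DR  # neutron coupling (no s2 dependence)

def n_th(s2, n_sm, Z, N, s2_sm=0.2386):
    """Predicted event number: the squared weak charge rescales the SM expectation."""
    qv = Z * g_vp(s2) + N * G_VN
    qv_sm = Z * g_vp(s2_sm) + N * G_VN
    return n_sm * (qv / qv_sm) ** 2

def delta_s2(n_sm, Z, N, p_syst, s2_sm=0.2386):
    """Half-width of the Delta(chi^2) = 1 interval for sin^2(theta_W) from Eq. (4)."""
    s2 = np.linspace(0.15, 0.35, 4001)
    pred = n_th(s2, n_sm, Z, N, s2_sm)
    var = n_sm + (p_syst / 100.0 * pred) ** 2   # statistical + systematic, added in quadrature
    chi2 = (n_sm - pred) ** 2 / var
    allowed = s2[chi2 <= 1.0]
    return 0.5 * (allowed.max() - allowed.min())

# Placeholder scenario: about 1e4 SM events in a germanium-like target (Z = 32, N = 40).
for p in (0.0, 2.0, 5.0):
    print(f"p = {p:3.1f}%  ->  delta(sin^2 theta_W) ~ {delta_s2(1.0e4, 32, 40, p):.4f}")
```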
For the value of the weak mixing angle, we have considered the extrapolation to the low-energy regime,

sin²θ_W(q² = 0) = κ(0) sin²θ_W(M_Z),

with κ(0) = 1.03232 [47]. From Fig. (1) we can notice that the perspectives for a precise measurement of the weak mixing angle are promising and that they are dominated by the systematic error coming from the reactor spectrum. However, it is expected that this error will be reduced, thanks to the progress in the current knowledge of the reactor spectrum from its direct measurement at IBD experiments. We can also notice that in the case of the CONNIE collaboration it will be necessary to have a higher-mass detector in order to reduce the statistical error. This is due to the fact that the detector has a very low mass and the target material is also lighter. We show in Table II the corresponding 1σ error on sin²θ_W for the three different configurations under discussion; the results are shown in terms of δ(sin²θ_W) as well as in percent. We have also included for comparison the results for CONUS and TEXONO. We can see that the results can be competitive, especially if systematic errors can be reduced. In order to have a better idea of the dependence of the sensitivity on the systematics, we have plotted in Fig. (2) the expected error on the weak mixing angle as a function of the systematic error that each particular experiment can reach. In this case, we have also included the results for the TEXONO and CONUS proposals. From this figure, it is possible to see that CONNIE is slightly less affected by the systematics than the other experiments. Being an experiment where the proportion of protons to neutrons is higher, this result seems natural, while between TEXONO and CONUS the dependence is very similar, since they use the same target material; for this reason TEXONO and CONUS are shown together in the right panel of Fig. (2).

IV. DISCUSSION AND CONCLUSIONS

The weak mixing angle is one of the fundamental parameters of the Standard Model, and it has been measured with great accuracy at the Z-pole [38]. At very low momentum transfer there are also measurements of this important quantity, although the precision is lower. The main results in this energy window come from the measurement of the weak charge, such as the recent measurement by Qweak [51], and from atomic parity violation experiments [38], measurements that will be improved by the P2 [52], SoLID [53], and Moller [54] experiments. Both types of measurement are extracted from the weak charge of protons or electrons. The measurement of the weak mixing angle at low energies in neutrino scattering processes has plenty of room for improvement [36], and the CENNS experiments have the potential to obtain a competitive accuracy, provided that systematic errors can be reduced. In this work we have computed the expected sensitivity for different CENNS proposals and we have shown the viability of such a measurement with a reasonable accuracy. Moreover, if the systematic errors can be reduced, the measurement of the weak mixing angle from CENNS experiments can be even better than the one coming from the electron weak charge.

Fig. 3: Measurements of the weak mixing angle compared with the SM prediction [34,48] in the MS-bar renormalization scheme. The electron weak charge Q_W(e) comes from Moller scattering [49], and both the former [50] and recent [51] measurements of the proton weak charge Q_W(p) are also shown.
We show this potential in Fig. (3), where the result of Table II is presented in a graphical representation, comparing the future measurement of the weak mixing angle in CENNS with current measurements. We can see that the CENNS experiments can really give a good measurement of this observable through a different and new channel.
Recent advances in friction and lubrication of graphene and other 2D materials: Mechanisms and applications Two-dimensional materials having a layered structure comprise a monolayer or multilayers of atomic thickness and ultra-low shear strength. Their high specific surface area, in-plane strength, weak layer-layer interaction, and surface chemical stability result in remarkably low friction and wear-resisting properties. Thus, 2D materials have attracted considerable attention. In recent years, great advances have been made in the scientific research and industrial applications of anti-friction, anti-wear, and lubrication of 2D materials. In this article, the basic nanoscale friction mechanisms of 2D materials including interfacial friction and surface friction mechanisms are summarized. This paper also includes a review of reports on lubrication mechanisms based on the film-formation, self-healing, and ball bearing mechanisms and applications based on lubricant additives, nanoscale lubricating films, and space lubrication materials of 2D materials in detail. Finally, the challenges and potential applications of 2D materials in the field of lubrication were also presented. Introduction Friction is a common phenomenon that is encountered in people's daily lives and in industrial production. It has been estimated that 1/2 to 1/3 of the world's energy is consumed in various forms of friction [1−3]. Friction causes wear and even failures in mechanical equipment. Therefore, the effective reduction of friction and the control of wear are important considerations in the performance of mechanical equipment and even in the improvement of the national economy. The use of lubrication is an effective method of controlling friction and wear in mechanical equipment. Approximately 4,000 years ago, according to ancient Egyptian murals, water, animal products, and vegetable oils were first used as lubricants to reduce friction, thereby increasing the service life of tools [4]. Liquid lubricants have been widely used owing to their effective anti-friction effect. However, the performance of liquid lubricants is seriously affected in severe working environments, which may result in the failure of the equipment. Thus, solid lubricants have been considered in the industry as they are more stable than liquid lubricants under the harsh conditions of heavy loads, high speeds, etc. As the micro-and nanoscale phenomena in tribology played an increasingly important role in modern technologies such as micro-electro-mechanical systems, traditional lubricants began to exhibit their limitations in the case of microscale friction. With the unparalleled superiority of traditional lubricants, the small size and surface effects of nanoparticles provide an excellent lubrication performance. Therefore, extensive studies on the friction properties and wear resistance of nanomaterials have been conducted by researchers in the field of tribology [5−8]. Two-dimensional (2D) materials are representative of new nanomaterials and are expected to introduce new opportunities for application in the traditional technology and engineering fields [9−19]. Atoms in the same atomic layer are combined through a covalent bond, which results in the formation of a monolayer structure having a high modulus and high strength. In addition, the low shear resistance between the adjacent atomic layers combined by the Van der Waals force allows the atomic layers to slide easily [20]. 
Because of the higher specific surface area of 2D materials, they are easily adsorbed onto the contact surface, which prevents the direct contact of the friction pair. Thus, the lubrication performance of 2D materials is superior to that of other nanomaterials. Graphene is a new type of 2D material with a hexagonal honeycomb 2D grid structure and comprising singlelayer carbon atoms exhibiting sp 2 hybridization. The thickness of monolayer graphene is 0.335 nm [21]. The excellent tribological properties of 2D materials have also been widely explored. Cao et al. [22] focused on their mechanical properties including elastic modulus, strength, and fracture, and the frictional properties of graphene, MoS 2 and BN. In 2010, Lee et al. [23] studied the tribological properties of 2D materials such as graphene, MoS 2 , and BN. The experimental results they obtained showed that 2D materials have good anti-friction properties. Penkov et al. [24] reviewed the tribological properties of graphene and concluded that the key factors affecting tribological properties are the number of layers, stacking mode, and substrate material. This review systematically describes the nanoscale friction mechanisms of 2D materials including the interfacial friction and surface friction mechanisms. 'Superlubricity' and three other major mechanisms including electron-phonon coupling effect, puckering mechanism, energy dissipation mechanism are described in detail. This review also introduces lubrication mechanisms based on the film formation mechanism, self-healing mechanisms, and ball bearing mechanism and applications based on lubricant additives, nanoscale lubricating films, and space lubrication materials of 2D materials in detail. Finally, we present the challenges and prospects of 2D materials in this field. Nanoscale friction mechanism of 2D materials Despite great advances in science and technology and remarkable research progress on friction, the current understanding of friction and lubrication phenomena is macroscopic. Owing to the development of micromechanical devices, macroscopic tribology theory has become obsolete, and the characteristics of friction at the microscale have been extensively investigated [25,26]. Tribology has also followed this developmental trend and has gradually evolved into nanotribology. Prof. Winer, a famous American tribology scholar, pointed out that microscopic and atomic scale tribology is a new field. Since the 1990s, several international scientific research teams have conducted numerous experiments using instruments such as scanning electron microscope (SEM), scanning tunneling microscope (STM), and atomic force microscope (AFM) in combination with molecular dynamics [27−32], first principle calculation [33−39], and finite element analysis [23,40,41]. They concluded that the friction mechanisms can be divided into inter-layer and surface-sliding friction mechanisms, which are explained in detail in the following sections ( Fig. 1). Discovery of 'superlubricity' In 1928, Bragg [42] attributed the low-friction behavior of flake-like materials to the low shear resistance between their adjacent atomic layers. Tomlinson [43] used a well-known mechanical model to understand the mechanism of solid friction and revealed that solid sliding is caused by the dissipation of elastic energy owing to the relative sliding of two contacting solids. However, Tomlinson did not study the possible contact movement at a crystal surface at the atomic level. In 1990, Hirano et al. 
[44−47] surfaces and found special cases in which friction completely disappears at a completely clean solid surface. Subsequently, they confirmed the occurrence of the phenomenon called 'superlubricity' using ultrahigh vacuum STM. The presence of superlubricity depends on the commensurability of a contact surface. This conclusion was confirmed by Dienwiebel et al. [48,49], who used a self-made friction microscope to examine the energy dissipation between sliding tungsten tips on a graphite surface by measuring the atomic friction as a function of the angle of rotation. They verified that the ultra-low friction of graphite was owing to the incommensurability of rotating graphite layers. Similarly, Martin et al. [50] studied the superlubricity of MoS 2 and attributed it to the incommensurability of its adjacent layers during intercrystallite slip. To understand the concept of lattice commensurability, Zheng et al. [51] creatively developed the "egg box" model. It is equivalent to the perfect match of the lattice constants and the orientations of two crystal flakes when two egg boxes are completely stuck, as shown in Fig. 1. Thus, the two flakes either exhibit commensurability or incommensurability. Lattice commensurability dramatically affects the interlayer friction of graphene; ultra-low friction (superlubricity) occurs especially when surfaces come into contact incommensurately [52]. Moreover, these findings provided the basis for the further understanding of the friction mechanism between 2D material interlayers. Interlaminar friction in homogeneous and heterogeneous sliding contact To further explore the influence of the lattice commensurability of interlayers on the friction between layers, using STM, Feng et al. [53] found that graphene nanosheets slide easily on a graphene surface at temperatures as low as 5 K. This phenomenon includes translational and rotational motions in the initial and final states. In addition, they presented the sliding mechanism that occurs on graphene, which revealed the commensurate-incommensurate transition, as shown in Fig. 2. Furthermore, superlubricity may also occur in other 2D materials such as MoS 2 and BN because of their same layered structure. Li et al. [54] obtained a friction coefficient of 10 −4 in the regime of superlubricity in the case of the sliding between incommensurate MoS 2 monolayers by combining the in situ SEM technique with a Si nanowire force sensor. The obtained results provided a new approach and guidance for studying the friction mechanisms of 2D materials, which was of great significance. The registry index (RI) concept is used to quantify interlaminar copolymerization levels in layered materials using simple geometric considerations. An accurate relationship between a commensurate interlayer and nonabrasive friction in layered materials has been presented by Hod, and its simple and intuitive model can be used to capture the behavior of hexagonal graphite sheets sliding on a graphite surface [55]. Furthermore, the ultra-high-speed superlubricity of micrometer-sized graphene flakes has been observed [56]. However, the superlubricity that occurs in finitesized sheets is unstable. Experiments have shown that the low-friction sliding of an incommensurate graphite flake on graphite can be broken by rotation causedby torque, thus resulting in large and irreversible friction [57]. Superlubricity behavior easily develops into slip motion under high loads [58]. These findings have been supported by theoretical results. Wijn et al. 
[59] concluded that certain scanning lines, thermal fluctuations, and high load forces can destroy the stability of an ultra-lubricated track. The optimal conditions for superlubricity are a large layer, low temperature, and low load. In addition, Liu et al. [60] studied two methods of suppressing frictional scattering to maintain the state of superlubricity. One such method involves the use of graphene nanoribbons for eliminating the friction scattering by limiting the rotation of nanosheets, which are called frictional waveguides. Another such method includes the effective suppression of frictional scattering via the biaxial stretching of a graphite substrate. In addition to effectively maintaining the superlubricity at a nanoscale, Berman et al. [61] found that superlubricity can be realized at an engineering scale. They concluded that nanoscrolls slide on a diamondlike carbon surface at which incommensurate contact occurs, thereby reducing the friction coefficient. Wu et al. [62] developed a lubricating system in which a self-assembled graphene film (SGF) slides against an SGF under macroscale contact. Ultra-low friction was discovered owing to the low resistance to shear between the adjacent layers of SGFs. In addition, they found that an annealing process would significantly enhance the tribological properties of SGFs. Another example is the macroscopic superlubricity exhibited between the inner and outer shells of centimeter-sized double-walled carbonnanotubes [63]. In general, the key to achieving stable superlubricity is to realize a sustained incommensurability sliding contact. Liu et al. [64] achieved sustained superlubricity between layers in a high-load atmosphere. They prepared graphene-coated microspheres (GMS) using the metal-free catalyst chemical vapor deposition (CVD) method and obtained an ultra-low coefficient of friction of 0.0025 at a local rough-contact pressure of up to 1 GPa and an arbitrary relative surface rotation angle. This ultra-low friction is attributed to the incommensurate contact that can be realized between randomly oriented graphene nanocrystallites. The specific preparation of GMS is shown in Fig. 3. Vu et al. [65] investigated the friction forces in incommensurate micrometer-sized contacts between atomically smooth graphite surfaces. The ultra-low friction can be obtained in the case of a normal load (maximum pressure 1.67 MPa). Superlubricity easily occurs when 2D materials slide on other 2D materials. This heterogeneous interlayer sliding-friction system has been extensively investigated and is expected to have a robust superlubricity behavior regardless of the orientation of the substrates [66−68]. The theoretical basis for this is that heterostructure materials naturally exhibit a lattice misfit. RI theory-which is based on studies on the phenomenon of homogenous sliding structures-can accurately capture the frictional behavior of a hexagonal graphite flake on the surface of graphite and has been used to examine the sliding behavior of heterogeneous structures. Leven et al. [69] defined the RI of a graphene/h-BN interface and concluded that when sufficiently large graphene flakes slide on top of the h-BN layer, regardless of the relative orientation, a stable superlubricity state can be realized. In addition to the lattice commensurability, the normal force and stacking structure between layers also influence superlubricity [70]. Moreover, graphite contains different types of defects that affect the interlayer friction. 
The majority of theories and investigations mainly take into consideration perfect graphene sheets without defects [71,72]. Guo et al. [73] examined the effect of interlayer distance variations and on-chip defects on inter-layer graphene friction by using a wide range of molecular field statics calculations. They found that the friction between the graphene layers increases as the interlayer distance decreases. In addition, the introduction of defects significantly affects the interlaminar friction in graphene with incommensurate stacking. The ultra-low friction of graphene layers stacked in an incommensurate manner is insensitive to the chip defects of a certain orientation of vacancies. It has been found that the friction between graphene layers is affected by various factors, such as temperature, load, size, stacking, defects, interlayer spacing, and the number of layers [74]. Xu et al. [75] investigated the effect of layer thickness on the intrinsic frictional properties of few layer graphenes and revealed the strong dependence of stick-slip friction force on the number of layers. As the number of layers decreases, the sliding friction gradually decreases. When there are only two or three atomic layers, the average friction is almost zero. Sliding-friction mechanism of 2D material surfaces In addition to its abundant interlayer sliding friction, the sliding friction of 2D materials on a graphene surface has been extensively investigated. This frictional mechanism is discussed in the following sections. Electron-phonon coupling effect The mechanism of friction at a sliding interface has yet to be elucidated because of the inherent complexity. Two basic mechanisms of energy dissipation owing to friction have been obtained: electron and phonon contributions [76,77]. Filleter et al. [77] observed that the difference in friction between monolayer and bilayer graphene is attributed to the significant variation in electron-phonon coupling by using angle-resolved photoelectron spectroscopy. The possible explanation for the higher friction that occurs in monolayer graphene is that the lattice vibrations are efficiently damped, and thus, the majority of the energy can be dissipated only through electron excitation. Dong et al. [78] applied molecular dynamic simulation and the two-temperature method (TTM) to confirm that electron-phonon coupling in graphene slightly affects friction as compared with the substrate roughness. Although electron-phonon coupling may be essential for atomic friction, simulations based on a TTM model show that friction is slightly related to electron-phonon coupling. Puckering mechanism The mechanisms of substrate morphology, electronphonon coupling, and wrinkling are possible reasons for the thickness dependence of graphene friction, while the effect of electron-phonon coupling is weak. Thus, the state of the substrate and the puckering effect may seriously influence the dependence of the thickness of graphene friction. Lee et al. [23,79,80] found that the frictional force of atomically thin sheets obtained through mechanical exfoliation is closely related to the atomically thin-sheets-substrate binding state. When atomically thin sheets of graphene and MoS 2 are exfoliated onto a weakly adherent substrate (SiO 2 /Si), the surface friction decreases as the number of layers increases. Suspended atomically thin sheet films exhibit the same properties. 
Friction is not affected by the number of layers when graphene is exfoliated onto the surface of fresh mica with a high adhesive strength. Thus, the state of the substrate has a significant influence on the friction force of 2D materials. As the probes slide on the surface of a weakly adherent substrate, the friction force is large because of the obvious wrinkle deformation of graphene. While the probes slide on the surface of fresh mica, the strong binding between graphene and fresh mica inhibits the wrinkle effect, and thus, the thickness dependence is not observed. Cho et al. [81] found that mechanically stripped graphene produces low friction on an atomically smooth substrate. This class of graphene inhibits surface wrinkle deformation as observed using atomic stick-slip imaging. This is because the conformal morphology of graphene on the substrate enhanced the intimate contact, increased the contact area, and suppressed the puckering effect, thereby resulting in low friction. Thus, the influence of the morphology of the substrate is actually caused by the wrinkle effect. This property is also validated in subsequent studies base on puckering mechanism. Furthermore, Deng et al. [82] found that the friction increases with increasing number of graphene layers at low loads and attributed this result to the local deformation of the surface layer and van der Waals forces between AFM tip and the surface layer. Ye et al. [83] observed that the influence of the number of layers on the friction becomes notable when the simulated graphene sheet exceeds 32 nm or longer. Frictional behavior can be directly related to the height of anti-slip approaching wrinkles, the bonding energy between graphene layers decreases as the size of graphene decreases, resulting in an increase in the distance between graphene layers which less able to resist wrinkle formation. Energy dissipation mechanism The layer dependence of the friction force has been extensively investigated. Smolyanisky et al. [84] conducted a Brownian dynamics (BD) simulation to quantify the friction behavior at room temperature between the scanning ends of carbon nanotubes and FLGs or the surface of self-supporting multi-layer graphene. They found that the friction decreases at 5 m/s as the number of layers increases, and this observation is the same as the conclusion presented by Lee et al. Subsequently, they investigated the contact area of different layers of graphene in the sliding process and found that the real contact area changes slightly. Therefore, the contact area between the tip and the sample is not the main reason for the change in the friction at the graphene surface with the number of layers. They proposed an energy dissipation mechanism based on the elastic energy of the specimen deformation. The energy dissipation mechanism of the deformation has been further explored. Deng et al. [85] discovered unusual frictional behaviors at the graphene surface. When a probe gradually departs from the surface of a graphene sheet, the friction force increases as the load under the tip contraction decreases, thus resulting in an effective negative coefficient of friction at a low load. The size of this coefficient depends on the ratio of the adhesion of the tip-sample to the peel energy of graphite, and this unusual phenomenon is attributed to the delamination of the reversible part of the topmost atomic layer. 
A probe lift-off or applied load reduction results in a considerable surface deformation, thus increasing the frictional force (Fig. 4). Although the energy dissipation mechanism remains unverified, the influence of lateral slip is important. Sun et al. [86] used molecular dynamics simulations to study the lateral sliding behavior of graphene. Their results revealed that the energy corrugation associated with sliding is not only a result of the interfacial interaction but also the interaction between the atomic deformation layers of graphite. In addition, the unusual phenomenon that occurs during the tip retraction was correlated with the local atomic delamination of the top graphene layers (Fig. 5). This is consistent with the anomalous experimental phenomenon observed by Deng et al. at the surface of graphene [85]. Reguzzoni et al. [87] used molecular dynamics simulations to investigate the tribological properties of multilayer graphene under energy dissipation due to shear deformation. They proposed a new friction mechanism related to shear motion, in which the friction force decreases as the number of layers decreases. In summary, whether in theory or in an experiment, the nanoscale friction mechanisms of 2D materials have been extensively studied and have become one of the most popular subjects of study. A plane formed by a strong chemical bond between 2D materials has high in-plane strength, high surface chemical stability, and weak van der Waals force between its layers. Therefore, many strange friction mechanisms occur between the layers and surfaces. At present, although the research methods used vary widely, the results obtained are generally consistent; that is, 2D materials are expected to be promising nanoscale lubricating materials owing to their low friction coefficient and high load-bearing capacity. When 2D materials are used as additives in various types of macro-lubrication systems, protective films are often formed on the friction surfaces (improving surface roughness, repairing wear, bearing loads, preventing contact, providing low shear strength between layers, and causing interlaminar sliding) with nanoscale superlubricity properties. Therefore, they have important applications in the fields of lubricants, lubricating coatings, lubricating films, and space lubrication materials. Nanoscale lubrication mechanism of 2D materials The lubrication mechanism should be studied in order to fully reveal the tribological properties of the nanoparticles. 2D materials provide interlaminar sliding and a low shear force, thus giving them superlubricity properties. Moreover, owing to their nanostructures, they can easily enter the frictional contact surface to form a lubricating film. Furthermore, they can also decrease surface roughness and repair wear. However, the elucidation of this mechanism remains the subject of many debates in research on nanoparticle lubrication systems. Researchers have described the mechanism of lubricant enhancement using nanoparticle suspensions via surface analysis techniques, such as ball bearing mechanism [88−92], film formation mechanisms [93−98] and self-healing mechanisms [99−102]. These mechanisms can be categorized into two broad categories. One category comprises the direct nanoparticle effects, including the ball effect and film formation. The other category comprises the assisting effect of surface enhancement via the repair and polishing effect. These mechanisms are discussed in the following sections. 
Film formation mechanism

Nanoparticles with large surface areas show chemical activity and become quickly adsorbed onto a friction surface to form a physical adsorption film. Some nanoparticles are affected by external factors, migrate along the friction surface, and are deposited on it to form a deposited film. Nanoparticles can also react chemically at the friction surface to create a chemical reaction film, thereby enhancing the wear resistance of the friction-pair surface [103]. Hu and co-workers reported that nanoparticles can react with surface substances to produce a lubricating film that reduces the surface friction and wear. Su et al. [105] examined the lubricated wear-scar surface produced by 0.25% graphite nanoparticles added to a plant-based oil. Graphite nanoparticles can be stably adsorbed on the friction surface, forming a physical adsorption film. Therefore, they speculated that graphite nanoparticles improve the anti-friction and anti-wear properties of plant-based oils mainly because they can create physically deposited films on the surface. Based on this theory, Xiao et al. [106] stated that the film formation process can be divided into two stages. In the initial stage, the high specific surface area allows 2D materials to be easily adsorbed onto the surface of a substrate to form a physical membrane. The physical membrane can separate the two contact surfaces and prevent direct contact between the two sliding surfaces. In the second stage, the physical film ruptures with an increase in the frictional strength, thereby promoting a chemical reaction between the lubricant and the local contact surface. This chemical reaction forms a new tribological film that gradually replaces the physical film and exists on the local contact surface. As a result, the tribological properties are improved.

Self-healing mechanism

Owing to inherent limitations of manufacturing technology, contact surfaces remain significantly rough. Nanomaterials can fill the concave areas on a friction surface to smoothen it. Su et al. [105] developed a model for simulating the self-healing mechanisms of nanomaterials (Fig. 6). The friction surfaces are tightly bonded under a high pressure, thus resulting in friction and wear. When an amount of nanomaterial is added to the lubricating oil and penetrates the concave regions of the friction surface, the damaged surface is repaired. The instantaneous high temperature during sliding can even melt the nanoparticles and repair defects on the sliding surface. Similarly, Gulzar et al. [107] investigated the self-healing effect of nanomaterials: nanoparticles deposit on the interacting surfaces and compensate for the mass loss, thereby reducing wear and tear.

Ball bearing mechanism

Nanoparticles dispersed between the friction surfaces can act as miniature ("class") bearings and transform sliding friction into rolling friction, thereby reducing the friction coefficient and exhibiting an excellent anti-friction performance. Gulzar et al. [107] established a mechanism model to study the class-bearing effect of nanomaterials and found that nanoparticles transform sliding friction into a combination of sliding and rolling friction. This lubrication mechanism is attributed to a friction-pair system with a stable low-load condition between the shear surfaces, which maintains the shape and stiffness of the nanoparticles (Fig. 7).

Fig. 7: Schematic of the ball bearing mechanism of nanoparticles. Reproduced with permission from Ref. [107], © Springer Nature 2016.

Xiao et al. [106] summarized the lubrication mechanism of 2D materials into four processes (Fig. 8) and proposed other mechanisms.
For example, (1) 2D materials easily enter the friction pair because of their small size, and the relative movement of the contact surfaces results in a shear force acting on these materials. Consequently, multi-layered 2D materials are easily sheared, and they readily come into contact with a friction surface to form a sliding system, thus resulting in effective lubrication. For instance, a MoS2 sheet in lubricating grease comes into contact with the friction surface between two friction pairs. The sheet structure is easily adsorbed onto the friction surface because of the lamellar crystal structure of MoS2 and the combined van der Waals forces between the layers. Under the effect of the frictional shearing force and normal load, cleavage easily occurs between the lamellar MoS2 layers, and slip along the cleavage plane is observed, which plays an anti-friction role. (2) 2D materials are impervious to liquids and gases because of their ultra-high chemical stability. The adsorption of these materials on substrates helps to prevent chemical attacks on the lubricants or other active elements in a given system, thereby slowing the corrosion and oxidation of materials and further reducing the wear on the sliding surfaces.

Application of 2D materials in lubrication

2D materials have excellent friction behavior; they have a high specific surface area (covering a greater surface area), high in-plane strength (being bonded by covalent bonds), good surface chemical stability, and low interlaminar shear strength (allowing adjacent layers to slide easily against each other). Therefore, their surface lubrication effect is apparent, which makes them very suitable for applications in the lubrication field, such as lubricant additives, space lubrication materials, and nanoscale lubricating films.

Lubricant additives

Given their unique molecular structure and lubrication properties, 2D materials are widely used in the field of tribology. They are also excellent candidates for the lubrication of friction surfaces because of their high strength and the low interlaminar shear strength between their atomic layers. The anti-friction and anti-wear performance of 2D materials is superior to that of conventional lubricant additives. In addition, 2D materials can also be used to reduce the emission of harmful substances and make an important contribution to building a green environment [108−111]. 2D materials are widely used in oil-based lubricants. Zhao et al. [112] used a UTM-2 friction and wear testing machine for a graphene abrasion resistance test, and the obtained results demonstrated that graphene can significantly improve the friction and wear properties and even reduce the maximum friction coefficient by 78%. Furthermore, the maximum wear rate can be decreased by 95%. Senatore et al. [113] studied the tribological behavior of graphene oxide nanosheets in mineral oil. The experimental results indicated that the average friction coefficient decreases by approximately 20% and the wear rate decreases by approximately 30% at 25 to 80 °C and an average contact pressure of 1.17 GPa. The friction reduction mechanisms of graphene and graphene oxide are similar: they easily form a protective film that prevents direct contact between the contact surfaces.
In addition, the nanoscale thickness and extremely thin laminated structure offer a lower shear strength, thereby easily causing interlayer sliding and resulting in a lower friction. Although graphene can effectively play anti-friction and anti-wear roles, graphene is prone to agglomerating in the base lubricating oil, which seriously affects the lubricating effect. In order to avoid the agglomeration of graphene, (1) an appropriate amount of dispersant is used to improve the uniform dispersion in the lubricating oil, and (2) an appropriate chemical modification (such as fluorination or hydrogenation) of graphene is used to enhance the uniform dispersion of graphene in the lubricating oil. Zhang et al. [114,115] utilized the modification effect of an oleic acid surfactant to disperse graphene evenly and uniformly in poly-α-olefin (PAO9) lubricating oil and used a four-ball abrasion testing machine to test the anti-friction and anti-wear performance of modified graphene (Fig. 9). The obtained experimental results show that the friction coefficient decreases by 17% when the mass fraction of the oleic-acid-modified graphene is 0.02%, whereas the wear spot diameter is reduced by 14% when the mass fraction is 0.06%. In addition, they concluded that an appropriate quality of graphene is required in the lubrication in order to achieve a friction reducing effect. Lin et al. [116] conducted studies on the role of such modifiers and found that graphene is effectively modified by stearic acid and oleic acid, which is uniformly dispersed in the lubricating oil as an additive. The wear resistance and bearing capacity of the lubricating oil are significantly improved. This is because long paraffin chains on the surface of graphene produce a steric hindrance when modified graphene is dispersed in the base lubricating oil to prevent graphene sheets from being precipitated and agglomerated. Considering that the modification effect can obviously improve the dispersibility of graphene, Zheng et al. [117] applied ball milling to prepare graphene fluoride to improve the bearing capacity and wear resistance of lubricants, and the thus obtained base oil had good dispersion stability. In addition, they noted that fluorinated graphene sheets can easily enter the contact surface and form a protective layer, which reduced the wear and improved the load carrying capacity. However, when the concentration exceeds a critical value, the excess fluorinated graphene sheets might agglomerate with the metal wear debris, thereby reducing the wear resistance. The conclusion of this study has been confirmed. Dou et al. [118] used crumpled graphene balls as additives to polyalphaolefin base oils. Owing to the unique self-dispersibility of crumpled graphene balls, they reduce the friction coefficient and wear coefficient by approximately 20% and 85%, respectively. In summary, to realize the wide applications of 2D materials in the field of lubrication, the problem of agglomeration is required to be further studied. 2D materials also act as water-based lubricating additives to maintain excellent thermal conductivity and the bearing capacity of water-lubricated films. Song et al. [119] investigated the tribological properties of graphene oxide nanosheets as water-based lubricating additives by using a UMT-2 ball friction tester. 
The results showed that graphene oxide nanosheets slide between the friction surfaces and form a thin physical friction film on the friction surface, which can bear the friction and prevent direct contact of the steel ball surfaces. Cho et al. [120] studied the lubricating effect of water dispersions of h-BN nanoplates. They synthesized h-BN aqueous dispersions at concentrations of 1%, 0.05%, and 0.01% and evaluated the friction and wear of the aqueous dispersions using SiC balls sliding on a disk. They demonstrated that even a small amount of h-BN nanosheets can enhance the wear resistance and reduce the friction: repeated peeling and deposition of h-BN occurs on the sliding surface, thereby forming a protective film that reduces friction and wear. He et al. [121] studied the lubricating effect of α-zirconium phosphate in both oil and aqueous media using a characterization technique, and revealed that it is an effective lubricant additive in both media. After its addition to mineral oil and water, the friction is reduced by 65% and 91%, respectively. The unique 2D structure promotes the alignment of the zirconium phosphate nanosheets in the lubricant, thereby resulting in effective lubrication. Space lubrication materials Sputtered MoS2 solid lubricants are widely used in the aerospace field, e.g., in high-speed long-life bearings for gyroscopes and accelerometers and in gears and bearings for spacecraft harmonic drive devices [122]. Magnetron sputtering technology provides numerous advantages, including a high film-formation rate, low deposition temperature, uniform film thickness, and dense film structure. This technology has been applied for preparing various wear-resistant, corrosion-resistant, and anti-oxidation coatings and solid lubricant coatings [123,124]. Numerous practical applications of MoS2 sputtered films have been developed; tests with the Aerospace 510 have shown that the service life of these sputtered films far exceeds the specified requirements, and individual test results have been even higher [125]. Extreme conditions such as vacuum, low temperature, and strong radiation in the space environment impose special performance requirements on solid lubricants [126]. To satisfy the high reliability requirement of moving parts in space equipment and extend their life expectancy, Cheng [127] used nanoscale MoS2 as an additive for space grease in an atmospheric environment and a simulated space environment, experimentally studied the anti-friction and anti-wear effects of the grease, and concluded that the friction and wear resistance of nano-MoS2 are superior to those of conventional MoS2. Space grease with MoS2 nanosheets showed the best extrusion performance and anti-friction and anti-wear properties, and good lubrication performance under high-load conditions. However, an excessive amount of nano-MoS2 affects the flow of the space grease, thus resulting in significantly decreased lubrication. Luo [128] experimentally concluded that the addition of nano-MoS2 particles to polyester polymer aviation lubricants significantly improves the anti-wear performance, and the optimal concentration is 3 wt% to 4 wt%. Song et al. [129] found that graphene can address the problem of vacuum lubrication and proposed a unique mechanism: they introduced graphene into a carbon film, growing 2D nanostructured carbon-based film material on the carbon film using a field-induced growth method, and an ultra-low coefficient of friction of 0.02 was thus obtained.
A graphite surface, by contrast, is tiled in a disordered manner, which impedes relative motion and thus results in a large friction coefficient. A graphene sheet has π electrons and surface interaction forces at its 2D surface and can form an oriented layered structure. Only a weak van der Waals force acts between its layers, which makes it prone to slipping, thus allowing it to exhibit ultra-low frictional characteristics with nanoscale superlubricity properties. Nanoscale lubricating film Graphene-layered structures have an ultra-thin thickness, ultra-low shear strength, high chemical stability, and high specific surface area; thus, they are suitable for use in micro/nanofilm devices. Lee et al. [130] deposited graphene onto a copper surface through CVD to form a thin film, and the friction coefficient of the graphene-coated copper film is lower than that of bare copper foil. Watanabe et al. [131] prepared a WS2/MoS2 nanocomposite film, i.e., a superlattice-structured multilayer film, using radio frequency sputtering. The multilayer films exhibit a wear resistance superior to that of the individual single-component films. This multilayer film shows potential for improving wear resistance relative to current MoS2-based low-friction films and may be suitable for use under severe frictional conditions because of the increase in film hardness caused by the multilayer structure. Kim et al. [132] demonstrated the excellent frictional properties of graphene films grown on Cu and Ni metal catalysts by CVD and transferred onto SiO2/Si substrates. It was found that the graphene films effectively reduce the adhesion and friction force. Graphene grown on Ni has a friction coefficient of 0.03, and the frictional difference is attributed to the appearance of a tortoise-like amorphous carbon layer on the Ni-grown graphene. In general, CVD-grown graphene films exhibit a strong potential for reducing adhesion and friction and protecting the substrate surface. The liquid phase exfoliation method can be used to produce graphene films in high yield, although the performance of films obtained using this method is degraded owing to functional groups and structural defects. Sun et al. [133] presented an annealing method in which nickel sputtering is used for the structural repair of graphene sheets by stitching them together across the gaps, thus forming a continuous film with improved crystal quality. Similarly, Li et al. [134] performed thermal annealing on copper foils to coalesce electrochemically exfoliated graphene flakes and finally recrystallize them into a continuous film. The new growth mechanism of this recrystallized and coalesced graphene can be used to prepare high-quality, large-area graphene films, which have wide applications. Summary and outlook 2D materials exhibit anti-friction and anti-wear properties because of their unique layered structure, thereby providing great prospects for the development of nanoscale lubricants. In this paper, we reviewed the friction mechanisms of 2D materials and found that they can be divided into interfacial friction and surface friction mechanisms. The interfacial friction mechanism is mainly influenced by temperature, load, size, stacking, defects, layer spacing, and the number of layers, while the surface friction mechanism is mainly categorized into the electron-phonon coupling effect, the out-of-plane puckering mechanism, and the deformation energy dissipation mechanism.
The lubrication mechanisms include the film formation mechanism, the ball bearing mechanism, the self-healing mechanism, and other lubrication mechanisms. In practical applications, 2D materials are commonly used as lubricant additives, nanolubricating films, and vacuum/space lubricating materials because of their unique anti-friction and anti-wear properties. Finally, to facilitate the use of 2D materials, we believe that future studies should address the following areas: 1) The study of atomic-level friction is a new direction in the field of tribology. Research on superlubricity may help to promote the development of industry and the energy sector, and thus further research is required to achieve superlubricity under macro- and microscale conditions and to maintain the durability of superlubricity [135,136]. 2) Although 2D materials have excellent frictional properties, they are easily affected by surface conditions, which can degrade their performance. Therefore, the optimization of the various processes involved in preparation and testing is essential. 3) Functionally modified 2D materials have superior friction properties owing to the suppression of graphene agglomeration. Therefore, the preparation of functional 2D materials and the mechanism by which agglomeration is suppressed should be further studied. 4) 2D materials quickly adsorb onto the surfaces of the friction pair because of their high specific surface area, and further chemical reactions occur there. Therefore, further study is required to understand these physical and chemical reactions in order to place the lubrication mechanisms of 2D materials on a more reliable footing. 5) 2D non-lamellar materials have great advantages in energy conversion and storage in the electronics industry because of their complex electronic structure [137,138]. 6) The preparation of new members of the 2D materials family and the physical and chemical properties of these materials require further exploration.
9,192.4
2019-03-26T00:00:00.000
[ "Materials Science", "Physics" ]
Noncommutative Geometry: Fuzzy Spaces, the Groenewold-Moyal Plane In this talk, we review the basic concepts of fuzzy physics and quantum field theory on the Groenewold-Moyal plane as examples of noncommutative spaces in physics. We introduce the basic ideas, and discuss some important results in these fields. In the end we outline some recent developments in the field. Introduction Noncommutative geometry is a branch of mathematics due to Gel'fand, Naimark, Connes, Rieffel and many others [1,2,3]. Physicists adopted it in a very short time and nowadays use this phrase whenever the spacetime algebra is noncommutative. There are two such particularly active fields in physics at present: 1. Fuzzy Physics, 2. Quantum Field Theory (QFT) on the Groenewold-Moyal Plane. Item 1 is evolving into a tool to regulate QFT's, and for numerical work; it is an alternative to lattice methods. Item 2 is more a probe of Planck-scale physics. This introductory talk will discuss both items 1 and 2. History The Groenewold-Moyal (G-M) plane is associated with noncommutative spacetime coordinates; it is an example where the spacetime coordinates do not commute. The idea that spatial coordinates may not commute first occurs in a letter from Heisenberg to Peierls [4,5]. Heisenberg suggested that such an uncertainty principle among the coordinates can provide a short distance cut-off and regulate quantum field theories (qft's). In this letter, he apparently complains about his lack of mathematical skills to study this possibility. Peierls communicated this idea to Pauli, Pauli to Oppenheimer and finally Oppenheimer to Snyder. Snyder wrote the first paper on the subject [6]. This was followed by a paper of Yang [7]. In the mid-90's, Doplicher, Fredenhagen and Roberts [8,9] systematically constructed unitary quantum field theories on the G-M plane and its generalizations, even with time-space noncommutativity. Later string physics encountered these structures. What is noncommutative geometry According to Connes [1,2,3], noncommutative geometry is a spectral triple (A, H, D), where A = a C*-algebra, possibly noncommutative, D = a Dirac operator, H = a Hilbert space on which they are represented. If A is a commutative C*-algebra, we can recover a Hausdorff topological space on which the elements of A are functions, using theorems of Gel'fand and Naimark. That is not possible if A is not commutative, but it is still possible to formulate qft's using the spectral triple. A class of examples of noncommutative geometry with A noncommutative is due to Connes and Landi [10]. If some of the strict axioms are not enforced, then the examples include SU(2)_q, fuzzy spaces, the G-M plane, and many more. The introduction of noncommutative geometry has brought about a conceptual revolution: manifolds are being replaced by their "duals", algebras, and these duals are being "quantized", much as in the passage from classical to quantum mechanics. Fuzzy physics In what follows, we sketch the contents of "fuzzy physics". Reference [11] contains a detailed survey. For pioneering work on fuzzy physics, see [12,13,14]. What is fuzzy physics [11] We explain the basic ideas of fuzzy physics by a two-dimensional example: S^2_F. Consider the two-sphere S^2. We quantize it in order to regularize it by introducing a short distance cut-off. For example, in classical mechanics the number of states in a phase space volume is infinite, but we know since Planck and Bose that on quantization it becomes finite. This is the idea behind fuzzy regularization. In detail, this regularization works as follows on S^2.
Now consider the angular momentum matrices L_i, in terms of which one defines the coordinates x̂_i ∈ Mat_{2l+1} ≡ the space of (2l+1)×(2l+1) matrices. As l → ∞, they become commutative. They give the fuzzy sphere S^2_F of radius r and dimension 2l+1. Why is this space fuzzy As the x̂_i, x̂_j (i ≠ j) do not commute, we cannot sharply localize x̂_i. Roughly, in the area 4πr^2 there are (2l+1) states. Field theory on the fuzzy sphere A scalar field on the fuzzy sphere is defined as a polynomial in the x̂_i. Differentiation is given by infinitesimal rotations, and a simple rotationally invariant scalar field action can be written down. Simulations have been performed [15,16] on the partition function Z = ∫ dΦ e^{-S(Φ)} of this model, and the major findings include the following: • The continuum limit exists. Also, S^2_F can nicely describe topological features. Hence it seems better suited for preserving symmetries than lattice approximations. Strings [33] If N D-branes are close, the transverse coordinates Φ_i become N × N matrices, with an action involving totally antisymmetric coefficients f_ijk. The equations of motion admit solutions when the f_ijk are structure constants of a simple compact Lie group. Thus we can have solutions built from angular momentum matrices L_i. If the L_i form an irreducible set, then we have L · L = l(l+1) with (2l+1) = N, and we have one fuzzy sphere. Or we can have a direct sum of irreducible representations; then we have many fuzzy spheres. Stability analysis of these solutions, including numerical studies, has been done by many groups. Quantum gravity and spacetime noncommutativity: heuristics The following arguments were described by Doplicher, Fredenhagen and Roberts in their work in support of the necessity of noncommutative spacetime at the Planck scale. Space-space noncommutativity In order to probe physics at the Planck scale L, the Compton wavelength of the probe must be of order L or smaller. Such a high mass in the small volume L^3 will strongly affect gravity and can cause black holes to form. This suggests a fundamental length limiting spatial localization. Time-space noncommutativity Similar arguments can be made about time localization. Observation of very short time scales requires very high energies. They can produce black holes, and black hole horizons will then limit spatial resolution, suggesting ∆t ∆|x| ≥ L^2, where L is a fundamental length. The G-M plane models the above spacetime uncertainties. What is the G-M plane The Groenewold-Moyal plane A_θ(R^{d+1}) consists of functions α, β, ... on R^{d+1} with the *-product. For the spacetime coordinates, this implies constant commutators; conversely, these coordinate commutators imply the general *-product up to certain equivalencies. The G-M plane also emerges in the quantum Hall effect and in string physics. Quantum Hall effect (the Landau problem) [34] Consider an electron in the 1-2 plane and an external magnetic field B = (0, 0, B) perpendicular to the plane. The Lagrangian for the system involves the electromagnetic potential, and the x_a are the coordinates of the electron. Now if eB → ∞, then on quantization the coordinates x_1 and x_2 no longer commute, which defines a G-M plane. Strings [35] Consider open strings ending on Dp-branes. If there is a background two-form Neveu-Schwarz field given by the constants B_ij = −B_ji, then the action acquires a B-dependent boundary term. As B → ∞, or equivalently g_ij → 0, the endpoint coordinates cease to commute, which gives just a G-M plane. Fig. 1 indicates the different sources from which fuzzy physics and the G-M plane emerge. The question mark is to indicate that the G-M plane may not regularize qft's. Until 2004/2005, much work was done on • QFT's on the G-M plane and its renormalization theory, uncovering the phenomenon of UV/IR mixing [36].
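Several display equations in this part of the talk are not legible in this transcription. Purely as a hedged reference, the relations that are standardly written at these points — the fuzzy sphere coordinates built from angular momentum matrices and the Groenewold-Moyal *-product with its coordinate commutator — are collected below in common conventions; the author's normalizations and sign conventions may differ.

% Fuzzy sphere S^2_F (standard conventions, assumed rather than taken from the talk)
\[
  \hat{x}_i = \frac{r\,L_i}{\sqrt{l(l+1)}}, \qquad
  \sum_i \hat{x}_i^2 = r^2\,\mathbf{1}, \qquad
  [\hat{x}_i,\hat{x}_j] = \frac{i\,r\,\epsilon_{ijk}\,\hat{x}_k}{\sqrt{l(l+1)}} \xrightarrow{\;l\to\infty\;} 0 .
\]
% Groenewold-Moyal plane: the *-product and the induced coordinate commutators
\[
  (\alpha * \beta)(x) = \alpha(x)\,
  \exp\!\Big(\tfrac{i}{2}\,\overleftarrow{\partial}_\mu\,\theta^{\mu\nu}\,\overrightarrow{\partial}_\nu\Big)\,\beta(x),
  \qquad
  [x^\mu \overset{*}{,} x^\nu] = i\,\theta^{\mu\nu}, \quad \theta^{\mu\nu} = -\theta^{\nu\mu}\ \text{constant}.
\]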
This brings into question much of the prehistory-analysis. Examples include the following new results: 1. The Pauli principle can be violated on the G-M plane. 3. There need be no ultraviolet-infrared (UV-IR) mixing in the absence of gauge fields [54]. There is also a striking, clean separation of matter from gauge fields due to the Drinfel'd twist [55], (in the sense that they have to be treated differently) reminiscent of the distinction between particles and waves in the classical theory. Literature should be consulted for details of these developments.
1,786.4
2006-06-14T00:00:00.000
[ "Mathematics", "Physics" ]
Analysis plan for primary cohort GWAS for blood lipid levels for the Global Lipids Genetics Consortium This protocol describes the guidelines for generating GWAS summary statistics that were provided to all participating cohorts in the Global Lipids Genetics Consortium HRC + 1KGP3 imputation meta-analysis collected jointly with the GIANT consortium. It is applicable for generating the input phenotype and genotype files for use with the rvtests software. Key stages include imputation, generating phenotype files, and running the analysis to generate summary statistics. The exact time needed to carry out this protocol varies depending on the existing genotype files and size of the cohort. The latest version of rvtests is available at: http://zhanxw.github.io/rvtests/ Overview There are three major components to this analysis plan: 1) Genome-wide genotypes must be on the correct build (37/hg19) and correct strand (forward). 2) For ALL studies, imputation of genotypes is performed using the 1KG phase 3 panel (if you have not already done so) and, for studies with samples of European ancestry, also using the large haplotype panel from the Haplotype Reference Consortium. If you have already imputed to a different large haplotype panel (e.g. UK10K) please contact us. 3) Association analysis is conducted using specific software tools that will provide all necessary summary statistics for flexible central meta-analysis. This coordinated plan between Global Lipids and GIANT is meant to reduce the burden on primary analysts. We welcome you to join one or both consortia. We thank you for your participation in past projects, and particularly welcome studies that are new to either consortium. Software Below is a list of the software you will need to complete this analysis plan. For some tools, using the specific version listed may be important. Phasing algorithms such as SHAPEIT, Eagle, HapiUR, and Minimac are only necessary if you're conducting imputation in-house, rather than using an imputation server. We provide generic instructions and instructions specific to the University of Michigan imputation server. Some may have used or be using other servers (e.g. the Sanger server); please contact them for instructions specific to that imputation server. All studies must have some version of a genome-wide array, for example with >200,000 genome-wide common variants. Targeted arrays with limited/incomplete coverage of the genome (e.g. MetaboChip, ImmunoChip, Exome Chip, etc.) should not be used unless merged with a genome-wide array. If you are unsure, please contact Joel and/or Cristen for specific advice. Individual studies should provide information in the Excel spreadsheet about the manufacturer and version of the array(s) they are using. QC should be done separately for individual studies, for major continental ancestries within a study, and also separately for samples of the same study that were genotyped on different arrays. Genotype QC Typical pre-imputation QC criteria: These are some steps that we recommend for sample QC. Additional QC steps may be needed and should be determined by the local analysts for each study. Studies should provide a brief description of QC criteria in the Excel sheet. Prepare files for imputation The "HRC/1KG Imputation Preparation and Checking Tool" developed by Will Rayner will check input data for accuracy relative to expected HRC or 1000G inputs prior to imputation.
This process will identify errors in your input data, including incorrect REF/ALT designations, incorrect strand designations, extreme deviations from expected allele frequencies, and palindromic (A/T and G/C) SNPs with allele frequency near 0.5 that are often the source of imputation errors, and generates commands to make files that have fixed or removed these problematic variants. The tool also requires frequency files from plink, which can be created as follows: plink --bfile <binary plink prefix> --freq --out input_file_prefix With these input files the tool can be run as follows: The perl script automatically produces a shell-script called Run-plink.sh. The shell script contains a set of plink commands that should be run to update or remove SNPs based on the checks and to create one updated binary plink file per chromosome. For this, the paths to the original study binary file should be adjusted in the shell script and the script should be started by typing ./Run-plink.sh at the command prompt. The cleaned/updated binary files (one for each chromosome) generated by this tool should be ready for upload to the imputation server below after changing them to VCF format using plink2, bgzip and tabix in the UNIX environment: There are two ways to conduct imputation: a) an Imputation Server or b) in-house imputation. The following instructions provide detailed guidance for imputation with the Michigan Imputation Server, but others such as the Sanger server are also acceptable. We recommend you use an imputation server if possible. Note: If you prefer, the imputation server has a 'Quality Control only' option you can use prior to imputation to ensure that no samples will be eliminated during imputation of the autosomes or X chromosome. If you run 'QC and imputation', and samples are removed (e.g., due to gender discordance), please make sure that the number and order of samples in the X chromosome and autosomes will match as described in section 2.3. Ancestry of samples and imputation panels: We ask all studies that have not imputed to 1KG phase 3 to use the imputation server to do so. In addition, for studies with individuals of European ancestry, we ask that they also use the imputation server to impute using the HRC panel. Phasing The imputation server will automatically determine if you provide phased or unphased VCF files. We encourage re-phasing using Eagle (which is implemented by the imputation server). If you would like to provide phased haplotypes please convert them to VCF format. An example command to convert haplotype format of HAPS/SAMPLE files from SHAPEIT to VCF would be: shapeit -convert -input-haps study. Make an account and follow the instructions on the website. You will upload VCF genotype files to the server (only VCF file format is supported; instructions on converting to VCF are contained on the imputation server website under "Help" and above in the chapter "Prepare files for imputation"). One VCF file must be submitted for each chromosome. Downloading your imputed genotypes, info files, and QC report When imputation has finished you will receive an email alert. The imputation server will automatically encrypt all your imputed genotypes (for protection during download). The password to decrypt the files will be in the email notification, so don't delete that email! When you download your imputed genotypes please be sure to download all available files (the qcreport, statistics, zip files, and all the log files).
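As an illustrative aid only (not part of the original protocol), the palindromic-SNP check described above can also be sketched in Python. The sketch assumes the standard plink --freq output columns (CHR, SNP, A1, A2, MAF, NCHROBS); the file names and the 0.4-0.6 frequency window are hypothetical choices, not consortium requirements.

# Hedged sketch: flag palindromic (A/T, G/C) SNPs with allele frequency near 0.5 in a plink .frq file.
import pandas as pd

PALINDROMIC = {frozenset(("A", "T")), frozenset(("C", "G"))}

def flag_palindromic(frq_path, maf_window=0.10):
    frq = pd.read_csv(frq_path, sep=r"\s+")            # plink .frq output is whitespace-delimited
    frq["MAF"] = pd.to_numeric(frq["MAF"], errors="coerce")
    is_palindromic = frq.apply(
        lambda row: frozenset((str(row["A1"]).upper(), str(row["A2"]).upper())) in PALINDROMIC,
        axis=1,
    )
    near_half = (frq["MAF"] - 0.5).abs() <= maf_window
    return frq.loc[is_palindromic & near_half, "SNP"].tolist()

if __name__ == "__main__":
    snps = flag_palindromic("input_file_prefix.frq")   # hypothetical file name
    with open("palindromic_near_0.5_snps.txt", "w") as out:
        out.write("\n".join(snps) + "\n")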
We will ask that you submit most of these files to us along with all other files generated as part of this analysis plan. Please do a quick check of the qcreport.html and statistics.txt files before proceeding with the analysis plan. If you are unsure of your imputation quality, please contact us. You will also upload these files to the server (see 8c). X chromosome NOTES on chromosome X: 1) If the server detects a sex discrepancy between genotypes and the provided PED file, discordant samples will be removed automatically prior to phasing and imputation. You will need to resolve these discrepancies genome-wide before proceeding with analysis. 2) You will receive two output VCF files for chromosome X, one for males and one for females. You will need to merge these into a single chrX VCF. Both of these issues are addressed below in section 2.3. Imputation In-House If you have chosen to use an imputation server, then you can ignore this step. Some studies will not be able to use the imputation server, for example if original participant consents forbid it. These studies will have to conduct imputation on their own. Please contact us if you need assistance. Post-imputation sample harmonization and variant pruning There are a few post-imputation processing steps that are necessary prior to analysis. A. Harmonize samples and sample order between autosomes and chromosome X As mentioned above, chrX imputation may have different numbers and order of samples from the autosomes due to automated filtering on the imputation server. You need to reconcile these possible discrepancies before creating a single whole genome VCF file, which will be used to create a genomic relationship matrix (kinship matrix) for the linear mixed model. If the sets of samples are not identical between the X and autosome VCFs (most likely because possibly sex-discordant samples were dropped during imputation of the X) you will need to eliminate any samples present only in autosome or X VCFs, and then reorder the samples in the X chromosome VCF file. The following bcftools commands 1) make a list of sample IDs in the chr22 VCF, 2) merge the male and female chrX VCFs and reorder the resulting chrX VCF according to the order of samples in chr22, then 3) generate an ordered list of IDs from chrX. Compare the two ID lists to confirm they are identical before proceeding. Note that some cohorts have a small number of males who are heterozygous at many X chromosome markers outside the pseudoautosomal region. Because these create problems later in analysis, we recommend removing these individuals as well, in addition to sex-discordant individuals. It then proceeds to reorder the file, omitting these observations. If you encounter this message, you will need to use the sample list file ID_order_chrX.txt to select the overlapping samples when creating the polymorphic datasets in section B below. If you do NOT receive the warning message, please check that the sample list files ID_order_chrX.txt and ID_order_autosomes.txt are exactly the same (in content and order). If these two files are identical, you may proceed without the sample selection option in section B. This will speed up the operation of bcftools in section B. B. Create a list of monomorphic sites and subset polymorphic variants These reference panels impute a great many variants into your samples, the vast majority of which are rare, and many will not be polymorphic in your sample.
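The comparison of the chrX and autosomal sample lists mentioned above can be done in a few lines; a minimal sketch, assuming the ID_order_* file naming used in this plan (one sample ID per line), is:

# Hedged sketch: verify that chrX and autosomal VCFs contain the same samples in the same order.
def read_ids(path):
    with open(path) as handle:
        return [line.strip() for line in handle if line.strip()]

chrx_ids = read_ids("ID_order_chrX.txt")
auto_ids = read_ids("ID_order_autosomes.txt")

if chrx_ids == auto_ids:
    print("OK: identical sample sets and ordering; no sample selection needed.")
else:
    only_x = set(chrx_ids) - set(auto_ids)
    only_auto = set(auto_ids) - set(chrx_ids)
    print(f"Mismatch: {len(only_x)} samples only in chrX, {len(only_auto)} only in autosomes.")
    print("Subset/reorder with the chrX sample list before creating the merged VCF.")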
To minimize the size of output files in the association analyses below, we ask that you remove all monomorphic sites (monomorphic defined as at least one variant hard call genotype) from input VCF files prior to analysis. In conjunction, we ask that you upload a list of the variants that you drop in this process. The following commands with bcftools will generate both the list of monomorphic SNPs and the pruned VCF files. · If LDL was measured after 1994 (measured and not estimated by Friedewald), then adjust LDL values for individuals taking lipid-lowering medication by using LDL/0.7 to approximate pre-medication LDL levels. (iii.a) Inverse normal transformed Model: Generate residuals and inverse normal transform in men and women separately and separately by case status if appropriate (e.g. where disease status is correlated with cholesterol you will have MenCase, MenControl, WomenCase, WomenControl, SexCombinedCase, and SexCombinedControl). For the sex combined analyses, combine the "men-specific_LDL_INV" and "women-specific_LDL_INV" phenotype values to analyze together (all_LDL_INV). (iii.b) Raw trait Model: Only sex-combined. Generate residuals with sexes combined, but separately by case status if appropriate (e.g. where disease status is correlated with cholesterol you will have SexCombinedCase and SexCombinedControl). (iv) Triglycerides (iv.a) Natural log + inverse normal transformed Model: Generate residuals and inverse normal transform in men and women separately and separately by case status if appropriate (e.g. where disease status is correlated with cholesterol you will have MenCase, MenControl, WomenCase, WomenControl, SexCombinedCase, and SexCombinedControl). For the sex combined analyses, combine the "men-specific_logTG_INV" and "women-specific_logTG_INV" phenotype values to analyze together (all_logTG_INV). ln(raw Triglycerides trait value in mg/dl) = age + age^2 + PCs (+ other study-specific covariates as needed) -> residuals -> inverse normal transformation (logTG_INV) NOTE: GLGC requests that you include ~4 PCs as study-specific covariates. ln(raw Triglycerides trait value in mg/dl) = age + age^2 + sex + PCs (+ other study-specific covariates as needed) -> residuals (logTG_RAW) NOTE: GLGC requests that you include ~4 PCs as study-specific covariates. Non-HDL cholesterol (raw trait value in mg/dl for men) = age + age^2 + PCs (+ other study-specific covariates as needed) -> residuals -> inverse normal transformation (men-specific_nonHDL_INV). Non-HDL cholesterol (raw trait value in mg/dl) = age + age^2 + sex + PCs (+ other study-specific covariates as needed) -> residuals (nonHDL_RAW) Genotype-phenotype association A. Group samples for analysis · Please analyze major ancestry groups separately. · We anticipate that some studies will have data from multiple genotyping arrays on samples from the same cohort. We expect there will likely be three typical situations: o No sample overlap: analyze studies separately (batches with reasonably similar arrays can be analyzed together, using the array type as a covariate). o All samples overlap: either a) select 1 array for imputation or ideally b) merge genotypes prior to imputation and perform a single analysis. o Partial but significant overlap of samples - contact us, if needed, to customize a plan. The goal should be to upload sets of results from non-overlapping samples that can be combined in meta-analysis. · If a set of samples being analyzed together is particularly large (for example, N>30000), the association plan may need to be modified.
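A minimal sketch of the residual-plus-inverse-normal phenotype preparation described above is given below, assuming a pandas DataFrame with illustrative column names (LDL in mg/dl, a lipid_med flag, age, sex, PC1-PC4); the Blom-type rank offset used for the inverse normal step is one common choice and is not mandated by this plan, and the additional split by case status (where applicable) is omitted for brevity.

# Hedged sketch of the phenotype transformation (column and file names are illustrative only).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm, rankdata

def inverse_normal(values, c=3.0 / 8.0):
    # Rank-based inverse normal transform with a Blom offset; other offsets are equally valid.
    ranks = rankdata(values)
    return norm.ppf((ranks - c) / (len(values) + 1.0 - 2.0 * c))

def residualize(df, trait, covariates):
    X = sm.add_constant(df[covariates])
    fit = sm.OLS(df[trait], X).fit()
    return df[trait] - fit.predict(X)

pcs = ["PC1", "PC2", "PC3", "PC4"]
pheno = pd.read_csv("study_phenotypes.csv")                     # hypothetical input file
pheno = pheno.dropna(subset=["LDL", "lipid_med", "age", "sex"] + pcs).copy()

# Adjust measured LDL for lipid-lowering medication (divide by 0.7), as requested above.
pheno["LDL_adj"] = np.where(pheno["lipid_med"] == 1, pheno["LDL"] / 0.7, pheno["LDL"])
pheno["age2"] = pheno["age"] ** 2

# Sex-specific residuals -> inverse normal transform, then pooled for the sex-combined analysis.
parts = []
for sex, grp in pheno.groupby("sex"):
    grp = grp.copy()
    grp["LDL_resid"] = residualize(grp, "LDL_adj", ["age", "age2"] + pcs)
    grp["LDL_INV"] = inverse_normal(grp["LDL_resid"].values)
    parts.append(grp)
all_LDL_INV = pd.concat(parts)        # analogous loops apply for logTG, nonHDL, etc.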
Please contact us to discuss further if this is the case. B. Perform association analyses Always test additive models using linear regression, using a method that accounts for genotype imputation uncertainty, and also accounting for known or cryptic relatedness (see below). Please indicate in your submission README file what method you have used for association analysis. Each individual study will perform data quality control (QC) and analysis and provide summary results for meta-analysis. Results files will be deposited to a central repository (details are provided below) where QC/data cleaning and meta-analysis will be performed. We request association analyses be carried out using rvtests (version 20170210). This software can be used for samples that include families or when an empirical kinship matrix is required for analysis. The input files required (phenotype, covariates) for rvtests are compatible with PLINK formats. It also calculates Hardy-Weinberg p-value and call rate for quality control purposes. We ask each study to generate kinship matrices (one for the autosomes and one for chromosome X) and fit linear mixed models to deliver single variant analysis results for the additive model. Documentation for rvtests can be found here: https://github.com/zhanxw/rvtests If you are unable to use a linear mixed model for some reason, please correct for ancestry/relatedness. At a minimum, use ~10 principal components as covariates to correct for population stratification and include principal components as indicated in the analysis models below. If you do not use rvtests, your results may not be usable for conditional analyses and aggregated analyses of rare variants. If in doubt, we are happy to advise. Please indicate in the Excel file requested below the method used to account for ancestry/relatedness. Summary level statistics for meta-analysis of variant associations The following summary level statistics will be generated and shared for meta-analysis of variant associations, ideally using rvtests. rvtests requires an indexed VCF file (for genotypes) and a PED file (for phenotypes) as input. Instructions below are for rvtests. If you are using a different method, please contact us. i. Basic VCF metrics, including reference and alternative alleles, chromosome positions and strand. These statistics are not directly used in the computation of the variant tests but are needed for interpretation and meta-analysis. ii. Single variant association test statistics, including direction of effect. We use score statistics calculated at each variant site. Another possible option would be to share estimates of genetic effects - but, when variant frequencies are low, score statistics are more numerically stable and preferred. iv. Covariance matrix for each genetic region. We compute the genotype covariance for all variants in a sliding window, with a width of 500,000 base pairs. This matrix reflects linkage disequilibrium in the region and will be used for meta-analysis of aggregate region- or gene-based tests of rare/low-frequency variation. 2. Indexing the VCF files rvtest works with tabix, and takes indexed bgzipped VCF files as input. Specify VCF files You should use --inVcf $your_vcf_file to specify which VCF to use. Specify phenotypes The rvtest tool requires a simple pedigree file that starts with the standard 5 columns (family id, individual id, father id, mother id and sex) followed by trait or trait residuals.
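To connect the transformed phenotypes to rvtests, the PED-style phenotype file described above (the five standard columns, then one column per trait) can be written as follows; the source DataFrame and its FID/IID/sex columns come from the previous sketch and are illustrative, not part of the protocol.

# Hedged sketch: write an rvtests/PLINK-style phenotype file ("fid iid fatid matid sex" + traits).
import pandas as pd

pheno = all_LDL_INV                    # DataFrame from the previous sketch (illustrative)

ped = pd.DataFrame({
    "fid": pheno["FID"],               # family ID (often equal to the individual ID if unrelated)
    "iid": pheno["IID"],               # individual ID, matching the VCF sample names
    "fatid": 0,                        # unknown parents coded as 0
    "matid": 0,
    "sex": pheno["sex"],               # 1 = male, 2 = female
    "LDL_INV": pheno["LDL_INV"].round(6),
})
ped.to_csv("study.pheno", sep="\t", index=False, na_rep="NA")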
You can use --mpheno $phenotypeColumnNumber or --pheno-name to specify a given phenotype. The phenotype file is specified by the option --pheno example.pheno. The default phenotype column header is "y1". If you want to use alternative columns as phenotype for association analysis (e.g. the column with header y2), you may specify the header names using either · --mpheno 2 or · --pheno-name y2 NOTE: to use "--pheno-name" the header line must start with "fid iid" as PLINK requires. In the phenotype file, missing values can be denoted by NA or any non-numeric value. Individuals with missing phenotypes will be automatically dropped from subsequent association analysis. For each missing phenotype value, a warning will be generated and recorded in the log file. Optional: If using rvtests for phenotype transformation: If you would like to calculate residuals using the rvtest software please refer to the covariates that need to be included for each trait as described in the phenotype transformation step and use the --covar and --covar-name options to designate covariates, and the --inverseNormal and --useResidualAsPhenotype options while performing the analysis. Analyses will loop through chromosomes Many of the steps in this analysis plan will be done chromosome by chromosome. One very easy way to run by chromosome is with a for loop in bash or c-shell. An example is provided below; modify as appropriate for the phenotypes, sexes, and ancestries available in your study, the panel(s) being used for imputation, and for job submission with your compute resources. NOTE: Some of these lines are not used if you have already generated residuals and performed inverse normal transformation. These are indicated in red. If using rvtest to generate residuals, input covariates are not consistent for all analyses. For chrX, male genotypes in rvtest can be coded as 0, 1, 0/0, or 1/1, and dosages should be between 0 and 1. rvtest will convert these to a 0-2 scale automatically in analysis. rvtest already has preset values for the pseudo-autosomal regions (PAR), and properly adjusts the analysis accordingly. rvtest will perform analysis on both PAR and non-PAR regions, but only calculates HWE and allele counts in females for non-PAR regions. rvtest association analysis should be conducted on polymorphic variants only.
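The bash/c-shell example referred to above is not included in this text. Purely as an illustrative stand-in (not the consortium's script), a per-chromosome driver can be sketched in Python; only rvtests options quoted in this plan (--inVcf, --pheno, --pheno-name) are spelled out, and everything placed in EXTRA_ARGS (kinship, output and test options) is a placeholder to be completed from the rvtests documentation for your study.

# Hedged sketch: build one rvtests command per chromosome (1-22 and X) and print or run it.
import subprocess

CHROMS = [str(c) for c in range(1, 23)] + ["X"]
PHENO_FILE = "study.pheno"                       # hypothetical file from the previous sketch
TRAIT = "LDL_INV"
EXTRA_ARGS = []                                  # placeholders: kinship matrix, output prefix, tests

for chrom in CHROMS:
    vcf = f"study.chr{chrom}.poly.vcf.gz"        # hypothetical per-chromosome polymorphic VCF
    cmd = ["rvtest", "--inVcf", vcf, "--pheno", PHENO_FILE, "--pheno-name", TRAIT] + EXTRA_ARGS
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)            # uncomment to execute, or adapt to your scheduler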
4,333
2021-11-29T00:00:00.000
[ "Biology" ]
Statistical analysis of the processes of intensification of the introduction of information and communication technologies in the socio-economic sphere of the Russian Federation The article analyzes the effectiveness of the development of digitalization processes in the socio-economic sphere of the Russian Federation and its territorial entities. Based on the information arrays of the Federal statistical observation on the use of information technologies and information and telecommunications networks by the population, the level of use of information and communication technologies by the population of the Russian Federation in everyday life is studied. In order to deepen the analysis, the authors produced a ranking of the regions of the Southern Federal district by key indicators characterizing the use of information and communication technologies by their population in 2019, which makes it possible to identify and justify the leading factors in the development of the information society and the intensification of the processes of digitalization of the economy and social sphere of the regions. The authors formulated a number of recommendations for the further development of the information society in the Russian Federation and its territorial entities. Introduction One of the most important tasks of implementing the process of digitalization of the economy is to improve the quality of life of the population by opening new directions for development, increasing human capital, and expanding the accessible environment for people with disabilities. In turn, the digital economy also imposes new requirements on the individual: the ability to work with information and communication technologies (ICT) and to apply innovative products for professional and personal needs. In this regard, there is a pressing need for statistical analysis and assessment of the readiness of the population to move to a qualitatively new level of social development. Given the high degree of territorial differentiation in the Russian Federation, it is very difficult to assess the readiness for the transition to the digital economy at the regional level. This is due to the fact that many of the indicators that characterize the process of digitalization of the economy and are calculated at the country level as a whole are not used in the formation of regional information resources. Along with the above, the problem of assessing the compliance of regional development with the challenges of the digital economy is today both relevant and significant: first, the uneven development of regional informatization should not be allowed to restrain the overall transition of the country to the digital economy, and second, a sound methodology for assessing the compliance of the regions with the challenges of the digital economy is necessary for maximizing the economic benefits from the use of this type of information technology. Materials and methods In this study, information resources of the Federal state statistics service ("Rosstat") were used, in particular, data from the sample Federal statistical observation on the use of information technologies and information and telecommunications networks by the population (hereinafter referred to as the "ICT survey") [1].
The most important area of application of the information arrays of the abovementioned survey is the analysis of the results of implementation of major state programmes aimed at the development of the information society and the digital economy, monitoring the implementation of Goals of sustainable development and the realization of inter-territorial comparisons, including ratings of the level of ICT development. In this study, the authors used data from a sample survey of ICT to rank the regions of the southern Federal district based on non-parametric methods of analysis and evaluation, in particular, on the basis of the Pattern method. This method is based on the estimation of a multidimensional average value based on the formula of the arithmetic mean simple, calculated from the relative values of comparison of territorial entities by the best value of the indicator for each component of the information system under study [2]. Thus, the Pattern method allows to bring various indicators to a single comparison base, calculate the integral indicator of the studied area and on this basis make a rating assessment of territories, which will allow to identify areas of sustainable growth or insufficient development of the phenomenon under study, which is the basis for developing recommendations for improving the situation within a specific territorial entity. Results and Discussion In order to conduct statistical analysis and evaluation of the processes of digitalization of economy and social sphere, as well as the intensification of ICT use in the everyday life of the population of the Russian Federation and southern Federal district (SFD), the authors developed an algorithm comprising the following steps: − selecting of indicators from the information base of the ICT surveys in 2019 as characteristics of information society development in Russia; − conducting a comparative assessment of the selected indicators for the Russian Federation and the southern Federal district; − implementation of the rating assessment of the entities of the southern Federal district based on the main criteria that characterize the intensity of the population's use of the Internet in their daily lives; − evaluation of use of state and municipal services by the population of the southern Federal district via the Internet, as the most important factor in the intensification of the introduction of ICT in the socio-economic sphere of territorial entities; − identifying points of growth and slowing down the intensification of digitalization processes in the regions of the southern Federal district. According to the authors, the use of this algorithm will enable the analysis and make a conclusion about the nature and intensity of development of information society and the speed of introduction of ICT in socio-economic sphere of the territorial entities of the Southern Federal district, that will expand the information base of regional statistics in this area and will enable governments to make effective decisions in terms of achieving the goals of sustainable development. Based on the data from the ICT survey for 2019, authors selected the main indicators that characterize the degree of use of the Internet by the population of the Russian Federation and the southern Federal district (see figure 1). 
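As an illustration of the Pattern method just described — each indicator is scaled against the best value achieved across the compared territorial entities, and the scaled values are averaged into a multidimensional mean used for ranking — a small sketch with placeholder data (not the survey figures) is:

# Hedged sketch of the Pattern method with made-up indicator values (shares of population, %).
import pandas as pd

data = pd.DataFrame(
    {"internet_users": [82.0, 79.5, 76.0], "e_gov_users": [74.0, 70.5, 66.0]},
    index=["Region A", "Region B", "Region C"],
)

# For indicators where larger is better, divide by the column maximum; an indicator where
# smaller is better (e.g. the share facing security problems) would be scored as min/value.
scaled = data / data.max()

pattern_score = scaled.mean(axis=1)      # simple arithmetic mean of the scaled indicators
ranking = pattern_score.sort_values(ascending=False)
print(ranking)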
We should also note the rather high share of Internet users in the age group of 60-69 years, which indicates the involvement of citizens of pre-retirement and retirement age in the processes of digitalization of the socio-economic sphere of the country and its regions. According to the ICT survey, in 2019, in most cases, the population of the Russian Federation and the Southern Federal district aged 15 years and older used the Internet at home (96.1% and 97.4% of the total population of the surveyed age who used the Internet, respectively), at work (46.1% and 42.3%, respectively), and in public access points (hotels, airports, cafes, etc.) (31.5% and 26.3%, respectively). According to the results of the ICT survey, the majority of the population of the Russian Federation and the Southern Federal district in 2019 preferred to use a smartphone to access the Internet via a cellular telephone network (65.5% and 60.5%, respectively) and via wireless networks (40.1% and 35.5%, respectively). Other types of mobile devices (laptops and tablets) were used by 5-8% of the population of the Russian Federation and the Southern Federal district. According to the ICT survey, the largest part of the population of the Russian Federation and the Southern Federal district in 2019 faced such problems as unauthorized mailing (22.2% and 21.6%, respectively) and infection of devices with viruses (7.5% and 6.5%, respectively). In 2019, the most common means of information protection used by the population of Russia and the Southern Federal district were antivirus programs (75.5% and 76.4%, respectively) and anti-spam filters (18.4% and 20.7%, respectively). The results of the ICT survey allowed the authors to identify the most common reasons for refusal to use the Internet by the population of the Russian Federation and the Southern Federal district in 2019. These include: lack of need or desire (75.4% and 74.9%, respectively), an insufficient level of Internet skills (34.1% and 36.0%, respectively), the rather high cost of connecting to the Internet (13.2% and 13.1%, respectively), the presence of certain technical difficulties associated with connecting to the Internet (4.1% and 4.7%, respectively), and concerns about the security of personal data (3.0% and 3.1%, respectively) [1].
The next stage of the authors' algorithm for studying the processes of digitalization of the socio-economic sphere of the Russian Federation and its territorial entities is ranking of the Southern Federal district regions by the main indicators of ICT use by the population in 2019 based on the Pattern method, which includes the following set of criteria: − the percentage of households connected to the Internet from a personal computer [1]; − the percentage of the population that uses the Internet at work and in everyday life [1]; − the percentage of the population using mobile devices to connect the Internet [1]; − the percentage of the population facing it security problems [1]; − the percentage of the population using various information security tools [1]; − the percentage of the population that ordered goods and/or services via the Internet [1]; − the percentage of the population that interacted with state and local authorities via the Internet [1]; − the percentage of the population who used mobile devices in obtaining state and municipal services through official websites and portals [1]; − the percentage of the population who faced problems in obtaining state and municipal services through official websites and portals [1]; − the percentage of the population who fully satisfied with the quality of state and municipal services provided in electronic form [1]. The results of ranking the regions of the Southern Federal district based on the multidimensional average of the indicators calculated using the Pattern method for 2019 are shown in figure 3. Based on the data in figure 3, it is clear that the Republic of Adygea is the leader among the regions of the Southern Federal district in terms of ICT use, mainly due to a significant proportion of the population using mobile devices connected to the Internet, and a high overall degree of satisfaction with the quality of state and municipal services provided in electronic form due to the minimum number of problems in obtaining this type of services through electronic portals. Also, we should note the small proportion of the population of the Republic of Adygea, faced with information security problems. The group of regions that have a fairly high level of ICT use by the population includes the Astrakhan, Rostov, Volgograd and the Krasnodar regions. These territories are generally characterized by a high proportion of the population using the Internet, including for state and municipal services and using mobile devices connected to the Internet, as well as using information security tools. The regions with an average level of ICT use by the population are Sevastopol and the Republic of Crimea. In these subjects, the proportion of the population using the Internet, including from a personal computer, is quite high. In the Republic of Crimea in 2019, the largest share of the population who used information security tools was recorded, and in Sevastopol -the population who ordered goods or services via the Internet. The Republic of Kalmykia is in the last place in this rating, due to the fact that in 2019 there was a fairly high proportion of the population that faced problems with it security and receiving state and municipal services via the Internet, as well as a low proportion of the population that ordered goods and services via the Internet. 
In General, the so-called Electronic government services (E-government services) are developing quite actively, allowing the population to get a huge range of state and municipal services without leaving home via the Internet. So, in 2019, the most popular E-Government services received by the population of the Southern Federal district via the Internet were the following: making an appointment and receiving other services by healthcare organizations (52.4%), making tax payments and fees (37.4%), services provided by the Ministry of internal Affairs and the State road safety Inspectorate (28.4%), services of housing and communal services (19.7%), services of educational organizations (12.6%), services for obtaining a passport/registration (11.9%), services of organizations, providing social security for the population (8.9%). [1]. In most cases, the population of the Southern Federal district applied to E-Government services to obtain the necessary data through official websites and portals of state and municipal services (72.8%), make an appointment (59.8%), make mandatory payments and fees (48.3%), download standard forms of necessary documents (40.6%), send electronic forms of documents (37.1%), and receive results of providing E-Government services (36.2%) [1]. At the same time, most of the problems faced by the population of the Southern Federal district regions in the process of obtaining E-Government services via the Internet in 2019 were related to technical problems (16.3%). And only 5.8% of respondents noted difficulties in obtaining reliable information, while 2.7% of respondents could not get explanations and assistance in obtaining a particular electronic service [1]. In General, the study of ICT use by the population of the Russian Federation indicates a high degree of ICT coverage and the quality of information provided, the availability of a particular type of service on the electronic portals of state and municipal services. However, the public authorities should pay attention to improving the level of technical support for these services. Conclusions The study of the processes of intensification of ICT in socio-economic development of the Russian Federation and the subjects of the Southern Federal district in 2019 has allowed to draw some conclusions: − information base of the ICT survey allows to provide statistical analysis and evaluation of the processes of the digitalization of the socio-economic sphere both in the territorial and temporal aspects; − the study showed a high degree of use of ICT by the population of the Russian Federation as a whole and the Southern Federal district, in particular, in their daily lives; − the most active Internet users in 2019 in the Russian Federation and the Southern Federal district were the population aged 25-39 years, however, the share of Internet users in the pre-retirement and pension age was also high. Consequently, public authorities should consider features of this age group during the implementation of the concept of digitalization of economy and social sphere, to develop assistance programs to expand the opportunities of e-services to this category of population; − the results showed a high degree of use of mobile devices by the population, in particular mobile phones, for accessing the Internet, purchasing goods and receiving various electronic services. 
Therefore, in the future it is necessary to expand the network of cellular stations, thereby improving the quality and speed of the Internet, as well as to reduce the cost of using the Internet via mobile phones. An important aspect is the development and improvement of special mobile versions and applications for electronic portals that provide state and municipal services to the population; − the study revealed a small proportion of the population faced information security problems and a fairly high degree of loyalty of the population of the Russian Federation to domestic information security protection tools, which creates prospects for the active development of the Russian market of computer and information security products, thereby minimizing the losses of possible sanctions measures from other countries aimed at this area; − the ranking of Southern Federal district's territorial entities of the ICT use in 2019 showed that the leading factors in the development of the information society and the intensification of the processes of digitalization of the economy and the social sphere were the use state and municipal Internet portals, including via mobile devices by the population, and high degree of it information security. Therefore, it is necessary to transfer the most part of the state document flow to an electronic format, and in the future -fully provide this type of service via the Internet. Along with this, there is a need to develop mobile versions of websites of state and municipal authorities in order to facilitate access to them for all users, to fill them with high-quality content and to ensure the safety of users ' personal data.
3,746.4
2020-01-01T00:00:00.000
[ "Economics", "Computer Science" ]
Parallel, Serial and Hybrid Machine Tools and Robotics Structures: Comparative Study on Optimum Kinematic Designs Introduction After their inception in the past two decades as possible alternatives to conventional Computer Numerical Controlled (CNC) machine tools structures that dominantly adopt serial structures, Parallel Kinematic Machines (PKM) were anticipated to form a basis for a new generation of future machining centers. However, this hope quickly faded as most problems associated with this type of structures still persist and could not be completely solved satisfactorily. This especially becomes more apparent in machining applications where accuracy, rigidity, dexterity and a large workspace are important requirements. Although the PKMs possess superior mechanical characteristics to serial structures, particularly in terms of high rigidity, accuracy and dynamic response, the PKMs have their own drawbacks including singularity problems, inconsistent dexterity, irregular workspace, and limited range of motion, particularly rotational motion. To alleviate the PKMs' limitations, considerable research efforts were directed to solve these problems. Optimum design methods are among the various methods that have been attempted to improve the dexterity as well as to maximize the workspace (Stoughton and Arai, 1993; Huang et al., 2000). Various methods to evaluate the workspace were suggested (Gosselin, 1990; Luh et al., 1996; Conti et al., 1998; Tsai et al., 2006).
Workspace optimization is also addressed (Wang and Hsieh, 1998). A new shift in tackling the aforementioned problems came when researchers started to look at hybrid structures, consisting of parallel and serial linkages, as a compromise to exploit the advantageous characteristics of the serial and parallel structures. This shift created new research and development needs and fostered new ideas. Among the early hybrid kinematic designs, the Tricept is considered the first commercially successful hybrid machine tool. This hybrid machine, developed by Neos Robotics, has a three-degrees-of-freedom parallel kinematic structure and a standard two-degrees-of-freedom wrist joint holding the end-effector. The constraining passive leg of the machine has to bear the transmitted torque and moment between the moving platform and the base (Zhang and Gosselin, 2002). Recently the Exechon machine was introduced as an improvement over the Tricept design. The Exechon adopts a unique overconstrained structure, and it has been improved based on the success of the Tricept (Zoppi et al., 2010; Bi and Jin, 2011). Nonetheless, regardless of the seemingly promising prospects of hybrid kinematic structures, a comprehensive study and understanding of the involved kinematics, dynamics and design of these structures is lacking. This paper attempts to provide a comparative study and a formulation for the kinematic design of hybrid kinematic machines. The remainder of this paper is organized as follows: Section 2 provides a discussion on the mobility of serial, parallel and hybrid kinematic structures and the effects of overconstraint on the mobility of the mechanism. Section 3 provides a discussion on kinematic design for hybrid machines and the implications of the presented method. Concluding remarks are presented in Section 4. Mobility of robotic structures Mobility is a significant structural attribute of mechanisms assembled from a number of links and joints. It is also one of the most fundamental concepts in the kinematic and the dynamic modeling of mechanisms and robotic manipulators. IFToMM defines the mobility, or degree of freedom, as the number of independent coordinates needed to define the configuration of a kinematic chain or mechanism (Gogu, 2005; Ionescu, 2003). Mobility, M, is used to verify the existence of a mechanism (M > 0), to indicate the number of independent parameters in the kinematic and dynamic models, and to determine the number of inputs needed to drive the mechanism. The various methods proposed in the literature for mobility calculation of closed-loop mechanisms can be grouped into two categories (Ionescu, 2003): (a) approaches for mobility calculation based on setting up the kinematic constraint equations and their rank calculation for a given position of the mechanism with specific joint locations, and (b) formulas for a quick calculation of mobility without the need to develop the set of constraint equations. The approaches based on setting up the kinematic constraint equations and their rank calculation are valid without exception. The major drawback of these approaches is that the mobility cannot be determined quickly without setting up the kinematic model of the mechanism. Usually this model is expressed by the closure equations, which must be analyzed for dependency. There is no way to derive information about mechanism mobility without performing kinematic analysis using analytical tools.
For this reason, the real and practical value of these approaches is very limited in spite of their valuable theoretical foundations. Many formulas based on approach (b) above have been proposed in the literature for the calculation of mechanisms' mobility. Many of these methods reduce to the Chebychev-Grubler-Kutzbach mobility formula given by Equation 1 below (Gogu, 2005). Using this formula, the mobility M of a linkage composed of L links connected by j joints can be determined from M = 6(L - j - 1) + Σ_{i=1}^{j} f_i (Equation 1), where f_i is the number of DOF associated with joint i. Equation 1 is used to calculate the mobility of spatial robotic mechanisms, as most industrial robots and machine tool structures are serial structures with open kinematic chains. Mobility of planar mechanisms To gain an insight into the effect of mobility on the kinematic analysis and design of serial, parallel and hybrid kinematic structures, we will also look at the mobility of planar mechanisms, which can be obtained from the planar Kutzbach-Gruebler equation M = 3(L - j - 1) + Σ_{i=1}^{j} f_i (Equation 2) (Gogu, 2005; Norton, 2004), where M, L, j and f_i are as defined before in Equation 1. As shown in Figure 1, the robotic structures are arranged in serial, parallel and hybrid kinematic chains, and thus have different numbers of links and joints. Using Equation 2, all three structures in Figure 1 have three degrees of freedom, or mobility three. This gives the end-effector two translational degrees of freedom to position it arbitrarily in the x-y plane, and one rotational degree of freedom to orient it about the z-axis. In the serial kinematic structure all three joints are actuated, whereas for the parallel and hybrid structures only the three prismatic joints are actuated and the revolute joints are passive. The parallel kinematic part of the hybrid structure in Figure 1.c has two degrees of freedom, which is achieved by reducing the number of legs to two and eliminating one of the passive revolute joints. Figure 2 shows an alternative way to reduce the degrees of freedom of the parallel kinematic mechanism, and hence to reduce the number of actuated prismatic joints. In this example this is done by eliminating one of the revolute joints which connect the legs to the platform. The corresponding leg has a passive prismatic joint to constrain one of the degrees of freedom. By removing the revolute joint, however, the leg becomes a three-force member and hence will carry bending moments, necessitating considerable design attention to maintain the desired stiffness and accuracy. This concept of reducing the degrees of freedom is adopted in the spatial Tricept mechanism to reduce the degrees of freedom from six to three. Compared to the mechanism in Figure 1.c, this mechanism has more joints and links, which is not desirable from the design point of view. It should be noted here that the planar mechanisms are realized by requiring the involved revolute joints to be perpendicular to the plane and the prismatic joints to be confined to the plane.
As such, these mechanisms can also be viewed as special cases of spatial mechanisms that are confined to work in a plane through overconstraints, and thus Equation 1, with proper modification, rather than Equation 2, could be used, as discussed in the next section. Over-constrained mechanisms A formula for a quick calculation of mobility is an explicit relationship between the structural parameters of the mechanism: the number of links and joints, and the motion/constraint parameters of the joints and of the mechanism. Usually, these structural parameters are easily determined by inspection without a need to develop a set of kinematic constraint equations. However, not all known formulas for a quick calculation of mobility fit many classical mechanisms and in particular parallel robotic manipulators (Ionescu, 2003). Special geometric conditions play a significant role in the determination of mobility of such mechanisms, which are called paradoxical mechanisms, or overconstrained yet mobile linkages (Waldron and Kinzel, 1999). As mentioned above, there are overconstrained mechanisms that have full-range mobility and are therefore mechanisms, even though they would be considered rigid structures according to the mobility criterion (i.e., mobility M < 1 as calculated from Equation 1). The mobility of such mechanisms is due to the existence of a particular set of geometric conditions between the mechanism joint axes that are called overconstraint conditions. Overconstrained mechanisms have many appealing characteristics. Most of them are spatial mechanisms whose spatial kinematic characteristics make them good candidates in modern linkage designs where spatial motion is needed. Another advantage of overconstrained mechanisms is that they are mobile using fewer links and joints than expected. In fact, the planar mechanisms in Figures 1 and 2 can also be viewed as overconstrained spatial mechanisms, and thus the spatial version of the Kutzbach-Gruebler equation (Equation 1) does not work for some of these planar mechanisms. In particular, for the parallel and hybrid kinematic planar mechanisms, Equation 1 results in negative mobility values, suggesting that these mechanisms are rigid structures. Since this is not true, it should be concluded that Equation 1 cannot be used for these over-constrained mechanisms (Mavroidis and Roth, 1995). The overconstraint in planar parallel and hybrid kinematic mechanisms is due to the geometrical requirements on the involved joint axes in relation to each other. To solve the problem when using the spatial version of the Kutzbach-Gruebler equation for planar mechanisms, or for over-constrained mechanisms in general, Equation 1 has to be modified by adding a parameter reflecting the number of overconstraints existing in the mechanism (Cretu, 2007). The resulting equation is called the universal Somov-Malyshev mobility equation. For the case of mechanisms that do not involve any passive degrees of freedom it is written as M = 6(L - j - 1) + Σ_{i=1}^{j} f_i + s (Equation 3), where s is the number of overconstraint (geometrical) conditions. For example, the parallel kinematic mechanism in Figure 1.b has L = 8, j = 9, and Σ f_i = 9. Using these parameters in Equation 1 gives M = -3. However, using Equation 3 and observing that there are 6 overconstraints in this mechanism, the mobility amounts to M = 3.
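To make the mobility formulas above concrete, the following short Python sketch evaluates Equations 1-3 for the planar parallel mechanism of Figure 1.b. This is an illustration only; the function names and the layout of the example are ours, while the link, joint and overconstraint counts are those stated in the text.

```python
# Mobility via the Chebychev-Grubler-Kutzbach criterion (Equations 1 and 2)
# and its overconstrained correction (Equation 3).  Illustrative sketch; the
# parameters below are taken from the Figure 1.b example in the text.

def mobility_spatial(L, joint_dofs):
    """Equation 1: M = 6(L - j - 1) + sum(f_i) for spatial linkages."""
    j = len(joint_dofs)
    return 6 * (L - j - 1) + sum(joint_dofs)

def mobility_planar(L, joint_dofs):
    """Equation 2: M = 3(L - j - 1) + sum(f_i) for planar linkages."""
    j = len(joint_dofs)
    return 3 * (L - j - 1) + sum(joint_dofs)

def mobility_overconstrained(L, joint_dofs, s):
    """Equation 3: spatial formula corrected by s overconstraint conditions."""
    return mobility_spatial(L, joint_dofs) + s

if __name__ == "__main__":
    # Planar parallel mechanism of Figure 1.b: 8 links, 9 one-DOF joints
    # (three prismatic plus six revolute joints).
    joints = [1] * 9
    print(mobility_spatial(8, joints))             # -> -3 (wrongly suggests a rigid structure)
    print(mobility_overconstrained(8, joints, 6))  # -> 3  (correct mobility with s = 6)
    print(mobility_planar(8, joints))              # -> 3  (planar formula also gives 3)
```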
The overconstraints in this mechanism are due to the requirement that the axes of the three prismatic joints form a plane or parallel planes (two overconstraints), and that the axes of the three revolute joints of the moving platform (two overconstraints) and of the three revolute joints of the base (two overconstraints) be perpendicular to the plane formed by the prismatic joints. Kinematic designs of robotic structures A widely used kinematic design strategy for serial robotic structures to optimize the workspace is to use the first group of links and joints to position the end-effector and the remaining links and joints to orient it, thus breaking the design problem into two main tasks. For the 6-DOF Puma robot schematic shown in Figure 3, the first three links and joints are responsible for positioning the end-effector at the desired position, while the last three joints and links form a 3-DOF concurrent wrist joint that orients the end-effector. Conventional five-axis machining centers achieve similar decoupling by splitting the five axes (three translational axes and two rotational axes) into two groups of axes. One group of serially connected axes is responsible for positioning/orienting the worktable holding the workpiece, while the other group of axes moves/orients the spindle (Bohez, 2002). Unfortunately this strategy cannot be adopted for parallel kinematic structures due to the similarity of the legs and their way of working in parallel. As such, decoupling the two functions (positioning and orienting the end-effector) is difficult, if not impossible, for parallel kinematic structures. Partial decoupling has been attempted by Harib and Sharif Ullah (2008) using the axiomatic design approach. On the other hand, it should be noted that parallel structures, and to some extent hybrid structures, can be built from identical parts and modules, and thus lend themselves well to adaptation as reconfigurable machines (Zhang, 2006). This attribute does not apply as strongly to serial structures, which consist of axes stacked on each other, making the links and joints differ considerably in size and shape. Parallel kinematic designs A main objective of the optimal design of parallel kinematic machines is to maintain consistent dexterity within the workable space of the machine. Dexterity of the mechanism is a measure of its ability to change its position and orientation arbitrarily, or to apply forces and torques in arbitrary directions. As such, the Jacobian matrix of the mechanism is widely used in formulating the dexterity measure. For the six-degrees-of-freedom hexapod mechanism (Harib and Srinivasan, 2003) shown in Figure 4, the Jacobian matrix J relates the translational and rotational velocity vectors v and ω of the moving platform to the extension rates of the legs, l̇ = J1 v + J2 ω (Harib and Sharif Ullah, 2008), where J = [J1 J2] is the Jacobian matrix of the hexapod, consisting of two 6×3 submatrices J1 and J2 whose ith rows are u_i^T and (M_R_C C_a_i × u_i)^T respectively. Here u_i and C_a_i are respectively a unit vector along the ith leg and the position vector of its attachment point to the moving platform in the platform coordinate frame C, and M_R_C is the rotation matrix of the moving platform. The Jacobian matrix J also relates the external task-space forces and torques to the joint-space forces f through F = J1^T f and T = J2^T f. Fig. 4. Typical construction of a hexapod machine tool.
where F and T are respectively the resultant 3-D external force and torque applied to the movable platform. This result suggests that, to support external forces and torques along arbitrary directions, J1 and J2 must both have rank three. To support these external force and torque resultants using bounded joint-space forces, the condition numbers of J1 and J2 must both be as close to unity as possible. An overall local performance measure PM can be obtained as the weighted combination PM = w PM1 + (1 - w) PM2, where w is a weighting factor in the range [0, 1] which signifies how much emphasis is given to translational versus rotational dexterity, and PM1 and PM2 are respectively performance measures for the translational motion and the rotational motion of the structure, defined from the reciprocal condition numbers of J1 and J2 averaged over the workspace (Harib and Sharif Ullah, 2008; Stoughton and Arai, 1993). In these definitions, κ(·) denotes the condition number function, and the workspace considered is a subset of the total reachable space V of the mechanism. PM will then be in the range [0, 1], with 1.0 corresponding to the best possible performance, which in turn corresponds to a perfectly conditioned Jacobian matrix. The workspace of PKMs is another design issue that needs careful attention due to the computational complexity involved. Algorithms proposed in the literature to determine the workspace of PKM structures use the geometric constraints of the structures, including maximum/minimum leg lengths and passive joint limits. The complexity of these computational methods varies depending on the constraints imposed. For example, if the cross-sectional variation of the hexapod legs is also considered as a factor to avoid leg collisions, considerable computation is required (Conti et al., 1998). If the design ensured that the operation of the machine is far enough from any possibility of leg collisions in the first place, considerable design effort could be saved. Harib and Sharif Ullah (2008) used the axiomatic design methodology (Suh, 1990) to analyze the kinematic design of PKM structures. In terms of the kinematic functions of PKM structures and based on the above considerations, the following basic Functional Requirements (FRs) were identified: (1) The mechanism should be able to support an arbitrary 3-D system of forces, i.e. PM1 should be as close to unity as possible. (2) The mechanism should be able to support an arbitrary 3-D system of torques, i.e. PM2 should be as close to unity as possible. (3) The mechanism should be able to move the cutting tool through a desired workspace. (4) The mechanism should be able to orient the spindle over a desired range within the desired workspace. On the other hand, to achieve the FRs the following two Design Parameters (DPs) are often used: (1) the lengths and strokes of the legs, and (2) the orientation of the legs relative to the fixed base and to the moving platform in the home position. From the perspective of axiomatic design (AD), this implies that the kinematic design of hexapod machine tools is a coupled design. Therefore, gradual decomposition of the FRs and DPs is needed to make the design consistent with AD. Figure 5 shows a 2-DOF planar parallel kinematic structure. The structure includes two extendable legs with controllable leg lengths l1 and l2 and three revolute joints a1, a2, and c. The controlled extension of the two legs places the end-effector point c at an arbitrary position (x, y) in the x-y plane.
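The following Python sketch illustrates the dexterity evaluation just described for a hexapod: it builds J = [J1 J2] from the leg unit vectors u_i and the platform attachment points a_i, checks the rank-three requirement, and forms a local weighted performance measure from the condition numbers of J1 and J2. The row convention and the local form of the measure are our illustrative assumptions, not the chapter's exact published formulas, and the example geometry is invented.

```python
# Local dexterity evaluation for a hexapod, following the structure described above.
import numpy as np

def leg_jacobians(base_pts, plat_pts_C, R, p):
    """Return the 6x3 submatrices J1 (translation) and J2 (rotation).

    base_pts   : (6, 3) leg attachment points b_i on the fixed base
    plat_pts_C : (6, 3) attachment points a_i, expressed in the platform frame C
    R          : (3, 3) platform rotation matrix M_R_C
    p          : (3,)   platform position
    """
    J1, J2 = np.zeros((6, 3)), np.zeros((6, 3))
    for i in range(6):
        a_i = R @ plat_pts_C[i]              # attachment point rotated to base-frame orientation
        leg = p + a_i - base_pts[i]          # vector along leg i
        u_i = leg / np.linalg.norm(leg)      # unit leg vector
        J1[i] = u_i                          # l_dot_i = u_i . v + (a_i x u_i) . w
        J2[i] = np.cross(a_i, u_i)
    return J1, J2

def local_performance(J1, J2, w=0.5):
    """Local dexterity measure w/kappa(J1) + (1-w)/kappa(J2), lying in (0, 1]."""
    return w / np.linalg.cond(J1) + (1.0 - w) / np.linalg.cond(J2)

# Example home pose: base points on a unit circle, platform points at half the
# radius rotated by 30 degrees, platform raised one unit above the base.
t = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
base = np.column_stack([np.cos(t), np.sin(t), np.zeros(6)])
plat = 0.5 * np.column_stack([np.cos(t + np.pi / 6), np.sin(t + np.pi / 6), np.zeros(6)])
J1, J2 = leg_jacobians(base, plat, np.eye(3), np.array([0.0, 0.0, 1.0]))
print(np.linalg.matrix_rank(J1), np.linalg.matrix_rank(J2))   # expect 3 and 3
print(local_performance(J1, J2))
```

A workspace-level measure would average the local value over sampled poses; here only a single pose is evaluated for brevity.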
The functional requirements are fulfilled in this design by assembling the mechanism such that the two legs are orthogonal to each other at the central position of the workspace, as shown in Figure 5. This result is coherent with the isotropic configuration that can be obtained for this mechanism (Huang et al., 2004). Away from that position the mechanism is not expected to deviate much from this condition in practical configurations if the limits of the leg lengths are appropriately selected. It is clear that arbitrary strokes and average lengths of the two legs can be selected while maintaining the leg orthogonality condition by adjusting the positions of b1 and b2. The reachable space of the 2-DOF PKM of Figure 5 is bounded by four circular arc segments with radii l1-max, l1-min, l2-max and l2-min and centers b1 and b2. With the two legs normal to each other, the workspace can be modified along either of the two orthogonal directions independently of the other (a short numerical sketch of this construction is given below). An extension of the previous design method to three-DOF planar PKM structures is shown in Figure 6. Selecting the reference point of the mechanism to be the concurrent attachment point of the two orthogonal legs serves the purpose of showing the validity of the previously established result of uncoupled design in terms of the previously defined FRs and DPs. As indicated in Figure 6, with this choice of reference point, the same workspace as for the 2-DOF structure is obtained (Harib and Sharif Ullah, 2008). The previous 3-DOF PKM design of Figure 6 suggests extending the idea to a 6-DOF structure, as shown in Figure 7. The six legs of the suggested structure are arranged such that the idea remains the same (two parallel legs connected by a link and one orthogonal leg) in each of three mutually orthogonal planes. The purpose of the design is to support an arbitrary 6-DOF force and torque system. Fig. 7. A schematic of a 6-DOF spatial PKM (Harib and Sharif Ullah, 2008) While the FRs and DPs of the axiomatic design method are difficult to decouple here, this design of the 6-DOF mechanism is a logical extension of the planar mechanisms designed with this methodology. Hybrid kinematic designs Similar to the serial kinematic design strategy, hybrid kinematic structures could be designed such that the first three links and joints, forming the parallel structure, handle the gross positioning of the end-effector. The remaining joints and links could be made to form a concurrent serial kinematic structure that is responsible for orienting the end-effector. Thus this strategy decouples the two main functional requirements (FRs) of the mechanism and their design parameters (DPs). Now, while the serial kinematic part, which is responsible for the orientation of the end-effector, could be a standard wrist joint consisting of concurrent revolute joints, the focus can now be directed to the design of the parallel kinematic part, which still requires considerable design attention. The decoupling of the design requirements reduces the design problem to the design of a three-degrees-of-freedom parallel kinematic spatial structure that positions the concurrent wrist joint along the x, y and z axes. Although the design requirements on the orientation are not part of the design requirements of the parallel part of the mechanism, the ability to support a system of transmitted torques is still part of the design requirements.
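Returning to the 2-DOF planar PKM of Figure 5, the sketch below illustrates, under our own assumed placements of the base points and stroke limits, how the leg lengths act as joint variables, how the workspace test reduces to the four arc bounds described above, and how the leg-orthogonality condition can be checked at the workspace centre.

```python
# Illustrative sketch of the 2-DOF planar PKM of Figure 5: two extendable legs
# from base points b1, b2 to a common end-effector point c.  The base point
# positions and stroke limits below are assumed values for demonstration.
import numpy as np

b1 = np.array([0.0, 0.0])                       # base attachment of leg 1 (assumed)
b2 = np.array([2.0, 2.0])                       # base attachment of leg 2 (assumed)
LIMITS = {"l1": (1.0, 2.0), "l2": (1.0, 2.0)}   # assumed stroke limits

def inverse_kinematics(c):
    """Leg lengths needed to place the end-effector at point c."""
    return np.linalg.norm(c - b1), np.linalg.norm(c - b2)

def in_workspace(c):
    """Reachable iff both leg lengths fall inside their stroke limits
    (the region bounded by the four circular arcs described in the text)."""
    l1, l2 = inverse_kinematics(c)
    return (LIMITS["l1"][0] <= l1 <= LIMITS["l1"][1]
            and LIMITS["l2"][0] <= l2 <= LIMITS["l2"][1])

def legs_orthogonal(c, tol=1e-9):
    """Check the leg-orthogonality (dexterity) condition at point c."""
    u1 = (c - b1) / np.linalg.norm(c - b1)
    u2 = (c - b2) / np.linalg.norm(c - b2)
    return abs(np.dot(u1, u2)) < tol

# With the b1, b2 chosen above, the legs meet at 90 degrees at this point.
centre = np.array([2.0, 0.0])
print(inverse_kinematics(centre), legs_orthogonal(centre), in_workspace(centre))
```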
This is in addition to the requirement of providing arbitrary motion along three directions and supporting the associated force system along these directions. The Exechon mechanism The Exechon machining center is based on a hybrid five-degrees-of-freedom mechanism that consists of parallel and serial kinematic linkages (Zoppi et al., 2010). The parallel kinematic structure of the Exechon is an overconstrained mechanism with eight links and a total of nine joints: three prismatic joints with connectivity one, three revolute joints with connectivity one, and three universal joints with connectivity two. This mechanism is shown schematically in Figure 8. The number of overconstraint (geometrical) conditions s is 3. These conditions require that the two prismatic joints l1 and l2 form a plane, that the two axes of the joints a1 and a2 be perpendicular to this plane, and that the axis of joint a3 be perpendicular to the axes of joints a1 and a2. The parameters of the underlying mechanism can be identified as L = 8, j = 9, and Σ f_i = 12 over the nine revolute, prismatic and universal joints. The mobility of this mechanism is erroneously calculated by Equation 1 as M = 0, which would indicate that the mechanism is a structure. Nevertheless, if the geometrical constraints involved in this mechanism are considered and Equation 3 is applied, the mobility is correctly calculated as M = 3. These three degrees of freedom correspond to the three actuating linear motors. The overconstraints in this mechanism considerably reduce the required joints, which improves the rigidity of the mechanism. However, the geometric constraints that reduce the mobility to three require structural design of the joints to bear the transmitted bending-moment and torque components. This requirement is more stringent in the case of the prismatic joints of the three legs. These legs are not two-force members as in the six-DOF hexapod mechanism and have to be designed to carry bending moments. The parallel kinematic part can be viewed as a 2-DOF planar mechanism, formed by the two struts l1 and l2 and the platform, which can be revolved about an axis (the axis of the base joints b1 and b2, shown as a dashed line in Figure 8) via the actuation of the third strut l3. To achieve 2-DOF in the planar mechanism, three overconstraints are required. As indicated before, these overconstraints come as requirements on the axes of the revolute joints a1 and a2 to be normal to the plane formed by l1 and l2, and on the third revolute joint a3 to be normal to the other two joints. Thus the projection of this strut onto the plane constrains the rotational degree of freedom of the moving platform in the plane. This situation resembles the 2-DOF planar mechanism of Figure 2. When this projection onto the plane vanishes (i.e. when the angle between the third strut and the plane made by the other two struts is 90 degrees), the mechanism becomes singular (attains an additional degree of freedom). Alternative hybrid kinematic mechanism In this section we demonstrate employing the Axiomatic Design approach to evaluate a potential design of a 5-axis alternative hybrid kinematic machine tool mechanism consisting of a 3-DOF parallel kinematic structure and a 2-DOF wrist joint. Axiomatic design is a structured design methodology developed to improve design activities by establishing criteria against which potential designs may be evaluated and enhanced (Suh, 1990).
The general functional requirements (FRs) for the proposed hybrid mechanism can be listed as follows. The mechanism should 1) provide the required positioning and orientation capabilities, 2) have adequate and consistent dexterity throughout the workspace, 3) have good structural rigidity, and 4) have a large and well-shaped workspace. The design parameters (DPs) that could be used to achieve the functional requirements concerning the parallel kinematic part of the mechanism include 1) the configuration of the wrist joint, 2) the configuration of the parallel kinematic mechanism, 3) the types of the end joints, and 4) the strokes and average lengths of the legs. Fig. 8. A schematic of the Exechon hybrid kinematic machine tool mechanism Based on the discussion in the previous sections and the axiomatic design formulation previously used for planar parallel kinematic structures (Harib and Sharif Ullah, 2008), a kinematic design of an alternative hybrid kinematic machine tool mechanism is proposed. A schematic of the proposed mechanism is depicted in Figure 9 below. The parallel kinematic part has three mutually perpendicular struts when the mechanism is at the center of the workspace, and consists of a movable platform and three extendable struts. As shown in Figure 9, the first strut is rigidly connected to the platform, which in turn is connected to the other two struts via revolute and universal joints. The struts are connected to the machine frame via universal joints and a spherical joint with connectivity three. The number of overconstraint (geometrical) conditions s is 2. These conditions require that the two prismatic joints l1 and l2 form a plane and that the axis of joint a2 be perpendicular to this plane. Calculating the mobility using Equation 1 yields M = 1. However, considering the overconstraints (s = 2), the mobility of the mechanism, as calculated by Equation 3, is M = 3; a numerical check of these figures, together with those of the Exechon, is sketched below. Fig. 9. A schematic of a proposed hybrid machine tool mechanism In order to reach an optimum design, the Axiomatic Design FRs and DPs are grouped hierarchically. The design problem is also formulated such that the FRs are independent of each other (to fulfill the Independence Axiom), and the DPs are uncoupled at least partially (to fulfill the Information Axiom). Thus, the design strategy is directed at fulfilling the FRs using uncoupled DPs first. Figure 10 shows the main FRs for a hybrid kinematic mechanism design arranged hierarchically. The fundamental functional requirement (FR1 = positioning and orientation capabilities) is split into two independent functional requirements (FR11 and FR12) which can be addressed using independent design parameters. FR12 is split into three functional requirements (FR121, FR122, FR123). For a given configuration of the parallel kinematic mechanism, the functional requirements (FR121, FR122, FR123) can be addressed using the following design parameters: DP121i: the type of the ith platform end joint a_i; DP122i: the type of the ith base end joint b_i; DP123i: the stroke of the ith leg (l_i-max - l_i-min); DP124i: the average length of the ith leg (l_i-max + l_i-min)/2. It is worth mentioning here that the joint axes resemble the five axes of the machine tool at the center of the workspace, and can be maintained close to this situation by proper design and choice of the leg strokes and mean lengths.
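The short script below is the numerical check referred to above; it simply applies Equations 1 and 3 to the parameters stated in the text for the Exechon parallel structure and for the proposed alternative mechanism.

```python
# Quick check of the mobility values quoted for the Exechon parallel structure
# and the proposed alternative mechanism; only parameters stated in the text are used.
def M_spatial(L, j, f_sum):
    """Equation 1: M = 6(L - j - 1) + sum of joint DOFs."""
    return 6 * (L - j - 1) + f_sum

# Exechon parallel part: L = 8 links, j = 9 joints, sum(f_i) = 12, s = 3.
m_eq1 = M_spatial(8, 9, 12)
print(m_eq1, m_eq1 + 3)    # -> 0 (Equation 1) and 3 (Equation 3 with s = 3)

# Proposed mechanism: Equation 1 gives M = 1 (value reported in the text);
# with s = 2 overconstraints, Equation 3 gives M = 3.
print(1 + 2)               # -> 3
```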
Also, as an alternative configuration, the 2-DOF wrist joint that holds the spindle could be replaced by a 2-DOF rotary table, transferring the relative rotational motion to the workpiece. A redundant hybrid structure consisting of a hexapod machine tool and a 2-DOF rotary table is suggested and analyzed by Harib et al. (2007). Conclusions The considerable interest that has shifted to hybrid kinematic structures, to exploit the advantageous features of serial and parallel kinematic structures while avoiding their drawbacks, has brought about interest in overconstrained hybrid mechanisms. A study of the mobility of the three classes of mechanisms is presented, focusing on the mobility of overconstrained structures in view of their application in parallel and hybrid structures to reduce the number of passive joints. The mobility of the Exechon mechanism is analyzed and discussed as an example of a successful machine tool mechanism. The study of this mechanism reveals that its 3-DOF parallel kinematic part is a revolving 2-DOF planar mechanism. Strategies for the kinematic design of planar parallel mechanisms were developed and discussed based on the axiomatic design methodology. Optimum configurations were presented for 2-DOF planar mechanisms and were shown to be extendable to 3-DOF planar and spatial mechanisms by proper choice of joints and constraints. An alternative optimum parallel and hybrid mechanism is discussed and analyzed.
7,376.6
2012-03-30T00:00:00.000
[ "Materials Science" ]
Global Predicted Bathymetry Using Neural Networks A coherent portrayal of global bathymetry requires that depths are inferred between sparsely distributed direct depth measurements. Depths can be interpolated in the gaps using alternate information such as satellite‐derived gravity and a mapping from gravity to depth. We designed and trained a neural network on a collection of 50 million depth soundings to predict bathymetry globally using gravity anomalies. We find the best result is achieved by pre‐filtering depth and gravity in accordance with isostatic admittance theory described in previous predicted depth studies. When training the model, if the training and testing split is a random partition at the same resolution as the data, the training and testing sets will not be independent, and model misfit is underestimated. We solve this problem by partitioning the training and testing set with geographic bins. Our final predicted depth model improves on old predicted depth model RMSE by 16%, from 165 to 138 m. Among constrained grid cells, 80% of the predicted values are within 128 m of the true value. Improvements to this model will continue with additional depth measurements, but predictions at higher spatial resolution, being limited by upward continuation of gravity, should not be attempted with this method. Introduction According to mapping requirements proposed by the Seabed 2030 project, less than 25% of the ocean floor has been mapped (GEBCO compilation group, 2023).At a uniform spatial resolution of 1 arcminute, this figure is still less than 30%.From efforts such as the Seabed 2030 project (Mayer et al., 2018), coverage in publicly available compilations has improved in recent years, but the distribution of shipboard depth measurements remains heterogeneous and sparse, providing nearly complete high resolution coverage in some coastal areas but leaving unmapped gaps the size of western US states in remote regions (Figure 1a).While there is no substitute for shipboard surveys to recover high resolution bathymetry, we can make a good guess of the seafloor depth, at a limited resolution, in these gaps using gravity field data derived from satellite altimeters (e.g., Smith & Sandwell, 1994). 
Satellite measurements have provided a wealth of information on the gravity field with global coverage, at a spatial resolution of 12 km and accuracy nearing 1 mgal (Sandwell et al., 2021).A new generation of swath altimeters will improve the resolution of the gravity field which may improve the accuracy beyond 1 mgal.Since gravity anomaly and depth are correlated within certain wavelength bands (Smith, 1998), we can infer depth from gravity.Within a restricted region, depths may be directly inverted from gravity measurements (e.g., Parker, 1972), but this requires a priori knowledge of crustal density and other geologic quantities such as the degree of isostatic compensation (Watts, 2001).There are limitations to this method preventing its application at very large scales.Smith and Sandwell (1994) developed (and revised in Smith and Sandwell (1997)) an algorithm for this purpose, and that algorithm is currently used in the popular global elevation product SRTM15+ (Tozer et al., 2019).The procedure uses admittance theory to design filters for bathymetry and gravity to linearize their relationship.The filtering steps remove most assumptions of isostatic compensation.After filtering, the algorithm determines the local slope of the bathymetry-gravity relationship on a coarse grid, and the predicted bathymetry is the product of the filtered gravity and slope (Smith & Sandwell, 1994).Smith and Sandwell (1994) termed the post-filtering process the "inverse Nettleton," referring to Nettleton (1939) to describe the process of selecting a best-fitting slope to describe the relationship of gravity anomaly and topography.As such, we will refer to this method as the Nettleton method.The quality of the estimation has improved with increasingly precise gravity recovery (number of measurements, orbits, instrument quality, processing techniques) and the greater quantity of shipboard measurements, especially in some of the previously poorly mapped regions.The most recent iteration of this prediction is described in detail by Tozer et al. (2019).While the predicted depth product has been widely adopted, some of the details in the prediction method may not be optimal and leave room for improvement. There has been recent interest in using modern methods from machine learning to improve upon the prediction of bathymetry.For example, Annan and Wan (2022) and Wan et al. (2023) have used neural networks with various architectures to predict absolute depth from gravity and gravity-related quantities (e.g., deflections of the vertical, gravity gradients).These models have been limited to a particular study area for training, and the predictions are thus limited to the selected area.The present study is an attempt to update the global predicted depth grid, and our goal is to replace the Nettleton method of depth estimation with a new approach using techniques based on machine learning.Specifically, we will train a deep neural network (DNN) to predict depth globally using a publicly available collection of depth measurements.We distinguish our method from previous machine learning predicted bathymetry studies in a few key ways: we attempt a global prediction; we predict depth in a certain waveband rather than the absolute depth; and we split training and testing data in a unique way. 
Data Preparation and Feature Generation We begin with the collection of shipboard depth measurements. The collection is based on the collection used in the SRTM15+V2 product (Tozer et al., 2019), and a description of the data sources is found in that study and Becker et al. (2009). Since Tozer et al. (2019), data from 905 cruises (retrieved from the National Centers for Environmental Information (NCEI)) have been added to the collection. Data have been manually edited to remove erroneous measurements. For our purposes, data provenance is treated equally, and data are not distinguished by cruise ID or instrument type (multibeam or single-beam). Raw shipboard data are reduced by a median filter to 15 arcsecond resolution. These data are combined and blockmedian-reduced to 1 arcminute resolution on a spherical Mercator projected grid, spanning -80.738° to 80.738° latitude. The result is a collection of 52,253,670 records of type [longitude, latitude, depth] (or [ϕ, θ, d]). Some potentially useful information is lost in the blockmedian reducing operation, but it is necessary for the filtering we apply (see next section). We can use the coordinates ϕ, θ of any constrained depth record to sample the global gravity anomaly grid (Figure 1c) (Sandwell et al., 2021). Since the constrained depth cells are co-registered with the gravity grid, sampling is trivial. The result is records of type [ϕ, θ, d, g]. From these records, the target quantity we wish to predict is depth, and the features we may use to train the prediction are ϕ, θ, and g. We could use other geophysical or geographical grids as features, since they can be sampled by longitude and latitude. We will explore such additional features in the discussion. That longitude is cyclical is not captured by its simple numerical value, so we decompose the longitude ϕ into sin(ϕπ/180) and cos(ϕπ/180), or ϕ_s, ϕ_c. With this treatment of longitude, we avoid a discontinuity at the Greenwich Meridian in the predicted depth grid. Following this amendment, our feature vector is [ϕ_s, ϕ_c, θ, g]. Filtering Depth and Gravity Since gravity and bathymetry are only correlated over certain wavelengths (Smith, 1998), it would be less than optimal to try to predict the absolute depth, which contains wavelengths where gravity and bathymetry are uncorrelated. Here we use the filtering steps established by Smith and Sandwell (1994). The depth measurements are gridded on a 1 arcminute spherical Mercator grid using continuous splines in tension (Smith & Wessel, 1990) with a tension factor of 0.6, and this grid is separated into low- and high-frequency components with a Gaussian filter with 0.5 gain at 160 km (Equation 9 of Smith & Sandwell, 1994). The high-pass depth is then low-pass filtered with 0.5 gain at 16 km, resulting in a 160-16 km band-pass depth grid, which we will call h. The constrained points of this depth grid are extracted (Figure 1d). We are losing some high-frequency information this way in order to match the spectral content of the gravity. The gravity anomaly is high-pass filtered in the same way as the depth measurements. Finally, the high-pass filtered gravity anomaly is downward-continued to the low-pass filtered depth using a depth-dependent Wiener filter (Equation 11 of Smith & Sandwell, 1994), which we will call g* (Figure 1e). We will examine the effects of omitting this pre-processing step in the discussion.
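As a small illustration of the feature generation step described above, the sketch below assembles the feature vector [ϕ_s, ϕ_c, θ, g*] from record coordinates and pre-filtered gravity values. The array names, record values, and function name are ours; only the sine/cosine decomposition of longitude follows the text.

```python
# Sketch of assembling the feature vector [phi_s, phi_c, theta, g*] per record.
import numpy as np

def make_features(lon_deg, lat_deg, g_filtered):
    """Stack features for each depth record.

    lon_deg, lat_deg : 1-D arrays of record coordinates (degrees)
    g_filtered       : 1-D array of high-pass, downward-continued gravity g* (mGal)
    """
    phi = np.radians(lon_deg)                    # sin(phi*pi/180), cos(phi*pi/180)
    phi_s, phi_c = np.sin(phi), np.cos(phi)      # removes the Greenwich discontinuity
    return np.column_stack([phi_s, phi_c, lat_deg, g_filtered])

# Example with three fictitious records straddling the Greenwich Meridian.
lon = np.array([359.9, 0.1, 120.0])
lat = np.array([10.0, 10.0, -45.0])
g_star = np.array([12.3, 11.8, -5.4])
X = make_features(lon, lat, g_star)
print(X.shape)   # (3, 4): one row per record, columns [phi_s, phi_c, theta, g*]
```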
Data Splitting We must split our data set into training, validation, and testing sets. If we simply select 20% for testing and validation at random, then we will find that almost any given record in the testing or validation set is within 1 arcminute of a record in the training set. Marks et al. (2010), analyzing predicted depths generated by the Nettleton procedure, showed the prediction error increases with distance from constrained nodes. How strong this effect is depends on the roughness of the seafloor, but it appears the errors become decorrelated beyond a certain distance (15-30 km). In other words, records that are sufficiently close in position are not independent (Figure 3b). In practice, the training loss and validation (and testing) loss will be nearly identical, and we will not have a good idea of when the model is overfitting during training or how the model generalizes to the unmapped gaps. By splitting the data into longitude-latitude bins and randomly selecting bins for testing and validation, we can reduce the dependence of the data sets (Figure 3c). We use a bin size of 30 arcminutes (~50 km at the equator) to group records, and then randomly select those bins for training, validation, and testing. This bin size could be made larger or smaller, or it could be made to vary based on prior knowledge of seafloor roughness, but it is important not to tune the bin size by model loss performance. The training, validation, and testing data sets comprise 31,320,896, 10,460,197, and 10,472,576 records respectively. Model Architecture and Training We use the TensorFlow software library to design and train the neural network (Abadi et al., 2015). The neural network comprises only successive densely connected layers (Figure 2), each using a rectified linear unit (ReLU) activation function to introduce non-linearity. Input features are normalized by the mean and variance of their distribution in the training data set. We use eight successive dense layers with 256 neurons per layer, and a final linear output layer. Model architecture can be tweaked ad nauseam, so we cannot claim this is a strictly optimal configuration, but this particular arrangement was selected because we found it performed as well as a wider but shallower model (e.g., 4 layers of 1,024 neurons each) while using far fewer parameters (4 × 10^5 vs. 3 × 10^6). We choose mean squared error (MSE) as the loss function. Each dense layer is regularized with L2 regularization (λ = 0.01). We use the Adam optimizer (Kingma & Ba, 2017) with a learning rate of 0.001. Model training proceeds until the validation loss is no longer decreasing. Inference: Generating the Global Predicted Grid The end goal of this model is a global grid of predicted depths. After the model is trained, predictions of h are generated on a 1 arcminute spherical Mercator grid (on-shore values are masked). The long-wavelength depth, saved from the filtering step, is added to the predicted depth, h, to give absolute depth, d. Finally, for distribution, the predicted depth grid is "polished" (grid cells with constrained depth measurements are reset to the measured depth), but this step is omitted in the following discussion and analysis.
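A minimal sketch of the geographic-bin split and the dense network described above is given below. The stated details (30-arcminute bins, eight hidden layers of 256 ReLU units, L2 regularization with λ = 0.01, Adam with learning rate 0.001, MSE loss, feature normalization by training-set statistics) come from the text; everything else, including function names, the bin-id encoding, and the early-stopping patience, is an assumption for illustration.

```python
# Sketch: geographic-bin train/val/test split plus the dense DNN described above.
import numpy as np
import tensorflow as tf

def split_by_bins(lon, lat, bin_deg=0.5, frac_val=0.2, frac_test=0.2, seed=0):
    """Assign records to train/val/test by 30-arcmin (0.5 deg) geographic bin."""
    ix = np.floor((lon % 360.0) / bin_deg).astype(np.int64)
    iy = np.floor((lat + 90.0) / bin_deg).astype(np.int64)
    bin_id = ix * 1000 + iy                       # unique integer per 0.5-degree cell
    uniq, inverse = np.unique(bin_id, return_inverse=True)
    rng = np.random.default_rng(seed)
    labels = rng.choice(np.array(["train", "val", "test"]), size=uniq.size,
                        p=[1.0 - frac_val - frac_test, frac_val, frac_test])
    rec = labels[inverse]                         # whole bins share one label
    return rec == "train", rec == "val", rec == "test"

def build_model(n_features, x_train):
    norm = tf.keras.layers.Normalization()
    norm.adapt(x_train)                           # mean/variance from training data only
    model = tf.keras.Sequential([tf.keras.Input(shape=(n_features,)), norm])
    for _ in range(8):                            # eight hidden layers of 256 ReLU units
        model.add(tf.keras.layers.Dense(
            256, activation="relu",
            kernel_regularizer=tf.keras.regularizers.l2(0.01)))
    model.add(tf.keras.layers.Dense(1))           # linear output: band-passed depth h
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
    return model

# Usage (arrays X, y, lon, lat assumed to exist):
# tr, va, te = split_by_bins(lon, lat)
# model = build_model(X.shape[1], X[tr])
# model.fit(X[tr], y[tr], validation_data=(X[va], y[va]),
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=5)])
```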
Base Model Using the feature vector [ϕ_s, ϕ_c, θ, g*] to predict h, we achieve a training RMSE of 85 m, validation RMSE of 108 m, and testing RMSE of 109 m. Loss values are useful for comparing one trained model against another, but they are imperfect when comparing to the Nettleton method. Since no "testing" data is withheld from the Nettleton method (i.e., the prediction is tuned on all available data), we must caution against a direct comparison of RMSE between the two methods. With that in mind, the RMSE of the Nettleton prediction against h is about 143 m. We will look more carefully at model misfit in the discussion. Modeling Without Filtering Depths and Gravity For comparison, a model trained without filtering depths and gravity anomaly performs much worse and offers no improvement over the Nettleton method. For this model, we achieve a training RMSE of 140 m, validation RMSE of 173 m, and testing RMSE of 175 m. This poor performance may be because the distribution of depth is more variable with location. For example, where the regional depth is 6,000 m, the mean depth will be near 6,000 m, and similarly for a regional depth of 1,000 m. By high-pass filtering the depth, the overall variance of the data is reduced. Omitting the low-pass filter at 16 km also contributes to the worse performance. The Nettleton method RMSE above is for the band-pass filtered data, h. If the short wavelengths are included, the Nettleton RMSE is about 165 m. Alternatively, we can high-pass filter depth and gravity but omit the low-pass filter at 16 km; in fact, we may desire to do this so we do not lose short-wavelength details. For this model, we achieve a training RMSE of 124 m, validation RMSE of 141 m, and testing RMSE of 142 m. We cannot compare these directly to the base model since the target quantities are different. Instead we can evaluate the RMSE of the base model prediction against the high-pass depth. In this case, testing RMSE values are nearly identical for the two models, suggesting the trained models are similar and the greater loss reflects the greater variance of the target data. Added Features It might seem that adding features from other global grids would be a good approach to decrease model loss and improve performance. For example, the spreading rate at the time crust is created is known to affect the roughness of bathymetry (Small & Sandwell, 1994). Crustal age and sediment cover will also influence the correlation of gravity and bathymetry (Smith & Sandwell, 1994). We use ϕ, θ to sample grids of crustal age, paleo-spreading rate (Seton et al., 2020), and sediment thickness (Whittaker et al., 2013), add these to the feature vector, and train a new model. In practice, there are problems with using these features.
First, these grids have many regions of missing data, and the missing values must be handled somehow.Since a key purpose of this model is prediction on a global grid, grid cells with missing values cannot simply be thrown out.We tested different schemes to replacing missing values: replacement with the mean feature value; replacement with mean feature value plus an additional boolean feature indicating replacement; and filling missing values with nearest neighbor interpolation of the feature grid.In our attempts to use crustal age, paleospreading rate, and sediment thickness as features, validation and testing loss are not improved (nor are they improved by any one such feature), and in fact model loss is worse with the additional features.In addition, at inference time, sharp discontinuities in the feature grids get mapped to the prediction grid creating unwanted artifacts. Other gravity-related quantities such as deflection of the vertical and vertical gravity gradient (VGG) may be good features to use, but they may only be redundant.We found that adding VGG as a feature (Sandwell et al., 2021) improves model training RMSE only slightly, and it slightly worsens validation and testing RMSE.This model has a training RMSE of 85 m, a validation RMSE of 109 m, and a testing RMSE of 110 m.Since there is no improvement on the base model, we will consider that the preferred model and refer to it as simply the "DNN" model in the following discussion. Generating a Predicted Depth Grid After training, we generate model predicted depths on a global 1 min mercator grid.One problem that results is short wavelength artifacts or "hallucinations" (Figure 4e).These hallucinations typically occur with wavelengths shorter than the 16 km wavelength filter that was applied to the original gravity and bathymetry, so they must be a product of the DNN training.We can reduce these with regularization during training, but not completely nor in a deterministic way.For the distributed predicted depth grid, we apply a low pass filter with 0.5 gain at 16 km to remove these hallucinations.This post-inference filtering method does not weaken model results.In fact, error metrics are very slightly improved, and the prediction RMSE of h after filtering is 107 m for the testing data set.We use the predictions on the filtered grid in the following discussion. Comparison to Nettleton Model While not quantitative, it is important to visually inspect the DNN predicted depth grid and make comparisons to the Nettleton grid (Figure 4).The most obvious qualitative difference is in continental margin areas or areas of relatively smooth seafloor.In these areas, the Nettleton predicted bathymetry has a rough "orange peel" texture (Figure 4d), an artifact of downward continuation of noisy gravity data.This type of seafloor is smoother in the DNN prediction. 
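Referring back to the missing-value strategies tested for the auxiliary feature grids earlier in this section, the sketch below illustrates the three schemes in their simplest form. It is our own minimal illustration: the example values are invented, and the nearest-neighbour case is reduced to one dimension (a real grid would use, e.g., a KD-tree fill over the 2-D grid).

```python
# Sketch of the three missing-value strategies tested for auxiliary feature grids.
import numpy as np

def fill_with_mean(x):
    """Replace NaNs with the feature mean."""
    out = x.copy()
    out[np.isnan(out)] = np.nanmean(x)
    return out

def fill_with_mean_plus_flag(x):
    """Mean replacement plus a boolean column flagging replaced entries."""
    flag = np.isnan(x).astype(float)
    return np.column_stack([fill_with_mean(x), flag])

def fill_nearest_1d(x):
    """1-D stand-in for nearest-neighbour infilling of a feature grid
    (np.interp is linear; a 2-D grid would use a nearest-neighbour lookup)."""
    idx = np.arange(x.size)
    good = ~np.isnan(x)
    return np.interp(idx, idx[good], x[good])

sediment = np.array([0.4, np.nan, 1.2, np.nan, 2.0])   # invented example values
print(fill_with_mean(sediment))
print(fill_with_mean_plus_flag(sediment))
print(fill_nearest_1d(sediment))
```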
Using all available depth measurements (not partitioned for training/testing), we compare the error distribution of the Nettleton method and the DNN method. These results are shown in Figure 5. For the Nettleton method, the predicted depth is within 68 m for 50% of points and within 168 m for 80% of points. For the DNN method, these percentiles are 45 and 128 m, respectively. Additionally, the mean error has been reduced from 13 m for the Nettleton to 3 m for the DNN, indicating a less-biased estimate. Figure 5b shows the distribution of absolute model error in the southern oceans. Overall, the spatial patterns of misfit are similar for the two models. At this scale, the noticeable differences are found nearer to land (e.g., the West Antarctic Peninsula, Chile, Australia), where the DNN model shows clear improvements over the Nettleton. BODC Data Because the Nettleton prediction is tuned using all available data, we do not have a concrete idea of how well it generalizes to unseen areas of seafloor. It is useful to reserve a data set that is not used in either model's construction. To compare the performance of the Nettleton and the DNN model, we used a collection of depth measurements from 279 cruises from the British Oceanographic Data Centre (BODC) that are not yet incorporated into the prediction model. The raw data are decimated to 15 s, and measurements that overlap with data already in the prediction data set are removed. We did not thoroughly inspect the BODC data for erroneous measurements, so measurements that differ from either predicted grid by more than 2,000 m are removed. In total, there are 6,242,414 points. The Nettleton prediction has an RMSE of 150 m for the BODC data set. The final DNN model has an RMSE of 104 m. If we restrict the analysis spatially to the highest concentration of measurements (80% of the data are around the British Isles), the Nettleton RMSE is 73 m and the DNN model RMSE is 62 m, much lower than the testing RMSE (Figure 6). This almost certainly reflects the proximity of these data to those in our training set. However, we see from the error distribution that the slight bias in the Nettleton prediction is not present in the DNN prediction. Potential for Improvements Our model is a simple implementation of a neural network to predict depth globally, and we have shown its clear improvement over the Nettleton method. Yet, there are many possible directions for improvement depending on one's objectives. Expansion of the training data set, modifications of model architecture, or a multi-regional approach to the problem all offer potential to improve on our model. If coverage of publicly available bathymetry compilations continues to improve as it has in recent years (and it likely will (Mayer et al., 2018)), model predictions will clearly improve (this would be true of any model). Low-resolution data in remote regions, which can be collected by autonomous vehicles, will likely offer the greatest benefit in our model approach.
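Referring back to the error statistics quoted at the start of this subsection (the 50th- and 80th-percentile absolute errors, the mean error, and the RMSE values for the full collection and for the withheld BODC data), the helper below shows how such summary numbers can be computed. It is a generic sketch; the array names are placeholders.

```python
# Sketch of the misfit summary statistics used to compare the two models.
import numpy as np

def misfit_summary(predicted, measured):
    """Summary statistics of prediction error (prediction minus measurement)."""
    err = np.asarray(predicted) - np.asarray(measured)
    return {
        "mean_error_m": float(np.mean(err)),
        "p50_abs_error_m": float(np.percentile(np.abs(err), 50)),
        "p80_abs_error_m": float(np.percentile(np.abs(err), 80)),
        "rmse_m": float(np.sqrt(np.mean(err ** 2))),
    }

# e.g. misfit_summary(dnn_depth_at_soundings, sounding_depths)
```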
We have not made use of high-resolution multibeam data in our model, and we do not aim to predict features at such resolution. Upward continuation of gravity anomalies limits the resolution of gravity from satellite altimetry to a length scale of about π times the regional depth (e.g., Smith & Sandwell, 2004), so it is not possible to realistically predict depth from only gravity (and its derivatives) at such scales. An approach using convolutional neural networks, as demonstrated by Annan and Wan (2022), may successfully learn from higher resolution bathymetry in regional settings. A prediction model trained on regional data will, everything else equal, perform better in that region than a model trained on global data. Moran et al. (2022) identified regions where various learning algorithms might preferentially excel. This suggests a global model should alternatively be constructed from a suite of regional models. A particular case where such a multi-regional model would excel is in predicting higher resolution depth in areas where that is realistic. Susa (2022) showed such an approach to predicting depth in near-coastal regions. In this setting, altimetric ranging and gravity accuracy suffer from land contamination (Raney & Phalippou, 2011), and visible spectra may be correlated with bathymetry, making this an ideal case for alternative depth prediction. Conclusions 1. Using a large collection of depth measurements and satellite-derived gravity anomalies, we trained a deep neural network to predict seafloor depth. 2. We find that applying filters (described by Smith and Sandwell (1994)) to bathymetry and gravity before training is necessary for a good result, and conforms the data more closely with the assumption of identical distributions. 3. When dealing with sparse heterogeneous sampling, the training-testing split must be treated carefully. If the training and testing split is a random partition at the same resolution as the data, the training and testing sets are not independent, and model misfit results will be too optimistic. Figure 1. An overview of the data sets in map view. (a) Distribution of shipboard depth measurements in the global oceans based on publicly available data. (b) Zoomed-in view of depth measurements in the South China Sea, colored by absolute depth, d. (c) Free air gravity anomaly, g, for the same region as (b). (d) High-pass filtered depths, h, and (e) filtered gravity anomaly, g* (filtering described in the text). Figure 2. Schematic of the neural network architecture. A feature vector with normalized inputs is transformed by successive hidden layers to predict depth. Continuing dots in the input leave the possibility of additional or alternative features. Output depth may be absolute or filtered depth depending on how the model is trained. Figure 3. An example of partitioning the data. Same map area shown in Figures 1b-1e. (a) The collection of depth measurements. The data must be partitioned into a training, testing, and validation set. (b) Randomly withholding 20% of the data, sampled uniformly. Withheld data are shown in red. Using this partition scheme, almost any withheld point has a nearly identical point in the training set, so the sets are not independent. (c) Sampling the data after binning into groups of 30 arc min. Withheld data are shown in red.
Figure 4. Model-predicted depths with long-wavelength depth added. The same map area shown in Figures 1b-1e. (a) Nettleton method; (b) Raw DNN-predicted grid; (c) DNN-predicted grid, low-pass filtered at 12 km. (d) Example of the "orange peel" texture in the Nettleton prediction, boxed region in (a). (e) Example of "hallucinations" in DNN prediction, boxed region in (b). Figure 5. Comparison of Nettleton and DNN models by prediction error. (a) Distribution of prediction error for all 1 arc min data (N = 52,253,670). (b) Average absolute difference (prediction - measurement) for Nettleton (upper) and DNN (lower) models. Figure 6. Nettleton and DNN model predictions for withheld BODC data. (a) Depth measurements in the full data set shown in black, measurements from the disjoint BODC data set shown in red. Misfit of BODC data for (b) Nettleton predicted depth and (c) neural network predicted depth. (d) Distribution of BODC data misfit for Nettleton predicted depth and neural network predicted depth.
5,527.2
2024-03-01T00:00:00.000
[ "Computer Science", "Environmental Science" ]
Extension of the Poincar\'e group with half-integer spin generators: hypergravity and beyond An extension of the Poincar\'e group with half-integer spin generators is explicitly constructed. We start by discussing the case of three spacetime dimensions, and as an application, it is shown that hypergravity can be formulated so as to incorporate this structure as its local gauge symmetry. Since the algebra admits a nontrivial Casimir operator, the theory can be described in terms of gauge fields associated to the extension of the Poincar\'e group with a Chern-Simons action. The algebra is also shown to admit an infinite-dimensional non-linear extension, which in the case of fermionic spin-$3/2$ generators corresponds to a subset of a contraction of two copies of WB$_2$. Finally, we show how the Poincar\'e group can be extended with half-integer spin generators for $d\geq3$ dimensions. I. INTRODUCTION Nowadays, we have the good fortune of witnessing the era in which the simplest minimal realistic versions of supersymmetric field theories are about to be either tested or falsified by the LHC. The underlying geometric structure of these kinds of theories, as well as most of their widely studied extensions, relies on the super-Poincaré group (see, e.g., [1], [2], [3]). According to the Haag-Lopuszański-Sohnius theorem [4], this is a consistent extension of the Poincaré group that includes fermionic generators of spin 1/2. Indeed, in flat spacetimes of dimension greater than three, the addition of fermionic generators of spin s ≥ 3/2 would imply that the irreducible representations necessarily contain higher spin fields, which are known to suffer from inconsistencies (see, e.g., [5], [6], [7], [8], [9], [10], [11]). However, in three spacetime dimensions, higher spin fields do not possess local propagating degrees of freedom, and as a consequence, it is possible to describe them consistently [12], [13], [14], [15], [16], [17], [18] even on locally flat spacetimes [19], [20], [21], [22]. Hence, in the latter context, since no-go theorems about massless higher spin fields can be circumvented, it is natural to look for an extension of the Poincaré group with fermionic half-integer spin generators. Results along these lines have already been explored in [23]. In what follows, we begin with the construction of the sought-after extension of the Poincaré group in the case of spin-3/2 generators, which, for short, we hereafter dub the "hyper-Poincaré" group. It is shown that the algebra admits a nontrivial Casimir operator and, as an application, we explain how the hypergravity theory of Aragone and Deser [24] can be formulated so as to incorporate the hyper-Poincaré group as its local gauge symmetry. Concretely, we show how hypergravity can be described in terms of hyper-Poincaré-valued gauge fields with a Chern-Simons action. The results are then extended to the case of fermionic generators of spin n + 1/2, as well as to the minimal coupling of General Relativity with gauge fields of spin n + 3/2, so that the super-Poincaré group and supergravity are recovered for n = 0. The hyper-Poincaré algebra is also shown to admit an infinite-dimensional nonlinear extension that contains the BMS_3 algebra, which in the case of spin-3/2 generators reduces to a subset of a suitable contraction of two copies of WB_2. We conclude by explaining how the hyper-Poincaré group is extended to the case of d ≥ 3 dimensions. II.
II. FERMIONIC SPIN-3/2 GENERATORS
In three spacetime dimensions, the nonvanishing commutators of the Poincaré algebra can be written in the standard dualized form. The additional fermionic generators are assumed to transform in an irreducible spin-3/2 representation of the Lorentz group, so that they are described by "Γ-traceless" vector-spinors that fulfill $Q_a \Gamma^a = 0$, where the $\Gamma^a$ stand for the Dirac matrices. Their commutation rules with the Lorentz generators then follow, and requiring consistency of the closure as well as the Jacobi identity fixes the only remaining nonvanishing (anti-)commutators of the algebra, in which C is the charge conjugation matrix [1]. It is then simple to verify that, apart from $I_1 = P^a P_a$, the algebra admits another Casimir operator, which implies the existence of an invariant (anti-)symmetric bilinear form with only a few nonvanishing components. It is worth highlighting that the inclusion of the higher spin generators $Q_a^\alpha$ does not jeopardize the causal structure, since there is no need to enlarge the Lorentz group.
[1] In our conventions, the Minkowski metric $\eta_{ab}$ is assumed to follow the "mostly plus" convention, and the Levi-Civita symbol fulfills $\varepsilon_{012} = 1$. Round brackets stand for symmetrization of the enclosed indices, without the normalization factor. It is also useful to keep in mind the Fierz expansion of the product of three Dirac matrices, given by $\Gamma_a \Gamma_b \Gamma_c = \varepsilon_{abc} + \eta_{ab}\Gamma_c + \eta_{bc}\Gamma_a - \eta_{ac}\Gamma_b$. Finally, the presence of the imaginary unit "i" in the product of real Grassmann variables is because we assume that $(\theta_1 \theta_2)^* = -\theta_1 \theta_2$.
A. Hypergravity
In order to describe a massless spin-5/2 field minimally coupled to General Relativity, let us consider a connection 1-form that takes values in the hyper-Poincaré algebra described above, with components $e^a$, $\omega^a$ and $\psi^\alpha_a$ standing for the dreibein, the dualized spin connection ($\omega^a = \frac{1}{2}\varepsilon^{abc}\omega_{bc}$), and the Γ-traceless spin-5/2 field ($\Gamma^a \psi_a = 0$), respectively. The components of the field strength $F = dA + A^2$ then follow, where the covariant derivative of the spin-5/2 field involves $T^a = de^a + \varepsilon^{abc}\omega_b e_c$, and $\bar{\psi}_{a\alpha} = \psi^\beta_a C_{\beta\alpha}$ is the Majorana conjugate. Note that under an infinitesimal gauge transformation $\delta A = d\lambda + [A,\lambda]$, spanned by a hyper-Poincaré-valued zero-form $\lambda = \lambda^a P_a + \sigma^a J_a + \epsilon^\alpha_a Q^a_\alpha$, the components of the gauge field transform accordingly. The invariant bilinear form (5) then allows one to construct a Chern-Simons action for the gauge field (6), which, up to a boundary term, reduces to the hypergravity action. It is worth pointing out that, although the action (11) is formally the same as the one considered by Aragone and Deser in [24], it does possess a different local structure. Indeed, note that under local hypersymmetry transformations spanned by $\lambda = \epsilon^\alpha_a Q^a_\alpha$, the nonvanishing transformation rule for the spin connection considered in [24] agrees with ours only on-shell. Actually, by construction, as in the case of supergravity [25], here the algebra of the local gauge symmetries (9) closes off-shell according to the hyper-Poincaré group, without the need for auxiliary fields. In the case of negative cosmological constant, it can be seen that hypergravity requires the presence of additional spin-4 fields [26], [27], [28].
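As a point of reference for the algebraic structure invoked above, the bosonic (Poincaré) sector in the dualized basis and the generic form of a Chern-Simons action built from an invariant bilinear form $\langle\cdot,\cdot\rangle$ take the standard shape sketched below; the precise fermionic (anti-)commutators, normalizations, and the level k used by the authors are not reproduced here and should be taken from the original paper.
$$[J_a, J_b] = \varepsilon_{abc}\, J^c, \qquad [J_a, P_b] = \varepsilon_{abc}\, P^c, \qquad [P_a, P_b] = 0,$$
$$I[A] = \frac{k}{4\pi}\int \left\langle A\, dA + \tfrac{2}{3}\, A^3 \right\rangle .$$
With a connection of the form $A = e^a P_a + \omega^a J_a + \psi^\alpha_a Q^a_\alpha$, the $\langle J, P \rangle$ component of the bilinear form yields the Einstein-Hilbert term $R^a e_a$, while the $\langle Q, Q \rangle$ component yields the fermionic kinetic term.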
where $\chi_{a_1 \cdots a_n}$ is Γ-traceless and completely symmetric in the vector indices, which can be seen as a flat connection. Note that the Jacobi identity now translates into the consistency of the nilpotence of the exterior derivative ($d^2 = 0$), which for the algebra (13) is clearly satisfied. The nontrivial Casimir operator now takes an analogous form. It is also worth pointing out that the super-Poincaré algebra corresponds to the case n = 0, while the hyper-Poincaré algebra described above is recovered for n = 1.
A. Hypergravity in the generic case
The minimal coupling of General Relativity with a massless fermionic field of spin $s = n + 3/2$, described by a completely symmetric Γ-traceless 1-form $\psi_{a_1 \cdots a_n}$, can then be formulated in terms of a gauge field for the hyper-Poincaré algebra, which now reads $A = e^a P_a + \omega^a J_a + \psi^\alpha_{a_1 \cdots a_n} Q^{a_1 \cdots a_n}_\alpha$. The Casimir operator (14) then implies the existence of an (anti-)symmetric rank-2 tensor which, once contracted with the wedge product of two curvatures, gives an exact form that is manifestly invariant under the hypersymmetry transformations (19). Therefore, as in the case of (super)gravity [29], [30], the action can also be written as a Chern-Simons one (10), which up to a boundary term reduces to $R^a e_a + i\bar{\psi}_{a_1 \cdots a_n} D\psi^{a_1 \cdots a_n}$, so that the field equations now read F = 0, with F given by (16). Note that the standard supergravity action in [31], [32], [33] is recovered for n = 0; and, as occurs in the spin-5/2 case, the generic theory agrees with the one of Aragone and Deser only on-shell.
We would like to stress that a deeper understanding of the theory cannot be attained unless it is endowed with a consistent set of boundary conditions. In this sense, one of the advantages of formulating hypergravity as a Chern-Simons theory is that the analysis of its asymptotic structure can be directly performed in a canonical form, as in the case of negative cosmological constant [28]. Indeed, in analogy with the case of three-dimensional flat supergravity [34], the mode expansion of the asymptotic symmetry algebra of hypergravity with a spin-5/2 fermionic field is defined through the following Poisson brackets [35]:
$$ i\{\psi_m, \psi_n\} = \frac{1}{4}\left(6m^2 - 8mn + 6n^2 - 9\right) P_{m+n} + \frac{9}{4k}\sum_q P_{m+n-q}\, P_q + k\left(m^2 - \frac{9}{4}\right)\left(n^2 - \frac{1}{4}\right)\delta_{m+n,\,0}\, , $$
which describe a nonlinear hypersymmetric extension of the BMS$_3$ algebra [36], [37], [38]. It can also be shown that this algebra corresponds to a subset of a suitable contraction of two copies of the WB$_2$ algebra [39], [40], [28]. When the fermions fulfill antiperiodic boundary conditions, the modes of the fermionic global charges $\psi_m$ are labelled by half-integers, so that the wedge algebra of (22) reduces to the one of the hyper-Poincaré group. In fact, dropping the nonlinear terms, and restricting the modes according to $|n| < \Delta$, where $\Delta$ stands for the conformal weight of the generators, the hyper-Poincaré algebra is manifestly recovered provided the modes in (22) are suitably identified with the generators $J_a$, $P_a$, $Q_{\alpha a}$. It is also worth noting that (22) can then be regarded as a hypersymmetric extension of the Galilean conformal algebra in two dimensions [41], [42], which is isomorphic to BMS$_3$ and turns out to be relevant in the context of non-relativistic holography. Another advantage of formulating hypergravity in terms of a Chern-Simons action is that, as in the case of supergravity [43], [34], the theory can be readily extended to include parity-odd terms in the Lagrangian.
This can be explicitly performed by a simple modification of the invariant bilinear form, so that it acquires an additional component given by $\langle J_a, J_b \rangle = \mu\,\eta_{ab}$, followed by a shift in the spin connection of the form $\omega^a \rightarrow \omega^a + \gamma e^a$, so that the constants $\mu$, $\gamma$ parametrize the new allowed couplings in the action. As a consequence, when hypergravity is extended in this way, the hyper-BMS$_3$ algebra (22) acquires an additional nontrivial central extension along its Virasoro subalgebra.
IV. ENDING REMARKS
The hyper-Poincaré group admits a consistent generalization to the case of $d \geq 3$ spacetime dimensions. In the case of fermionic Γ-traceless spin-3/2 generators, the nonvanishing (anti-)commutators of the algebra include $[J_{ab}, Q_{\alpha c}] = \frac{1}{2}(\Gamma_{ab})^{\beta}{}_{\alpha} Q_{\beta c} + Q_{\alpha a}\,\eta_{bc} - Q_{\alpha b}\,\eta_{ac}$, where $\bar{Q}_a = Q_a^{\dagger}\Gamma^0$ stands for the Dirac conjugate. In the generic case, the spin-$(n+1/2)$ generators correspond to completely symmetric Γ-traceless tensor-spinors that fulfill $\Gamma^{a_1} Q_{a_1 \cdots a_n} = 0$. In order to avoid the intricacies related to the latter condition, as well as the suitable (anti-)symmetrization of the (anti-)commutation rules of the generators, it is better to express the algebra in terms of its Maurer-Cartan form, in which $\chi_{a_1 \cdots a_n}$ is Γ-traceless and completely symmetric in the vector indices. This algebra can easily be written in terms of Majorana spinors when they exist, and it reduces to super-Poincaré for n = 0. Note that there was no need to enlarge the Lorentz group in order to accommodate the higher spin generators, so that the additional symmetries do not seem to interfere with the causal structure. Indeed, as in the case of supersymmetry, the quotient of the hyper-Poincaré group over the Lorentz subgroup now defines a hyperspace which is an extension of Minkowski spacetime with additional Γ-traceless tensor-spinor coordinates. However, as anticipated by Haag, Lopuszański and Sohnius, the irreducible representations, which could be obtained from suitable hyperfields, necessarily contain higher spin fields. Nevertheless, it would be worth exploring whether the hyper-Poincaré algebra may manifest itself through theories or models whose fundamental fields do not transform as linear multiplets, as would be the case for nonlinear realizations, hyper-Poincaré-valued gauge fields, or extended objects.
3,115
2015-05-22T00:00:00.000
[ "Mathematics" ]
Circulating Interleukin-18 as a Biomarker of Total-Body Radiation Exposure in Mice, Minipigs, and Nonhuman Primates (NHP) We aim to develop a rapid, easy-to-use, inexpensive and accurate radiation dose-assessment assay that tests easily obtained samples (e.g., blood) to triage and track radiological casualties, and to evaluate the radioprotective and therapeutic effects of radiation countermeasures. In the present study, we evaluated the interleukin (IL)-1 family of cytokines, IL-1β, IL-18 and IL-33, as well as their secondary cytokines’ expression and secretion in CD2F1 mouse bone marrow (BM), spleen, thymus and serum in response to γ-radiation from sublethal to lethal doses (5, 7, 8, 9, 10, or 12 Gy) at different time points using the enzyme-linked immune sorbent assay (ELISA), immunoblotting, and cytokine antibody array. Our data identified increases of IL-1β, IL-18, and/or IL-33 in mouse thymus, spleen and BM cells after total-body irradiation (TBI). However, levels of these cytokines varied in different tissues. Interestingly, IL-18 but not IL-1β or IL-33 increased significantly (2.5–24 fold) and stably in mouse serum from day 1 after TBI up to 13 days in a radiation dose-dependent manner. We further confirmed our finding in total-body γ-irradiated nonhuman primates (NHPs) and minipigs, and demonstrated that radiation significantly enhanced IL-18 in serum from NHPs 2–4 days post-irradiation and in minipig plasma 1–3 days post-irradiation. Finally, we compared circulating IL-18 with the well known hematological radiation biomarkers lymphocyte and neutrophil counts in blood of mouse, minipigs and NHPs and demonstrated close correlations between these biomarkers in response to radiation. Our results suggest that the elevated levels of circulating IL-18 after radiation proportionally reflect radiation dose and severity of radiation injury and may be used both as a potential biomarker for triage and also to track casualties after radiological accidents as well as for therapeutic radiation exposure. Introduction Radiation injuries are heterogeneous disorders that involve many pathophysiological pathways and affect both cells directly exposed to radiation and cells not directly exposed. Normal tissue injuries induced by ionizing radiation differ depending on the type of radiation, dose and dose-rate of radiation exposure, and the varied radiation-tolerances in target organs and cells. For example, a c-radiation dose above 1 Gy in humans or mice poses a risk of destruction of the bone marrow (BM) and damage to the hematopoietic system [1,2], whereas only high-dose (10 Gy or more) total-body irradiation (TBI) in experimental mice can result in acute generalized gastrointestinal (GI) syndrome with loss of intestinal crypts, damage to crypt stem cells, and breakdown of the GI mucosal barrier, leading to animal death [3][4][5]. In addition, total-body 60 Co c-radiation induced 90% mortality within 30 days (LD 90/30 ) with a 95% confidence interval (CI) at doses of 9.6 Gy in CD2F1 mice [6], 1.86 Gy in Gottingen minipigs [7] and 7.56 Gy in rhesus macaques (LD 90/60 without supportive care) [8], showing that the radiation sensitivity in various animal species differs significantly. 
The mechanisms of these complex biological responses of tissues to harmful radiation damage are not well understood, and rapid, easy-to-use, inexpensive and accurate methods for assessing radiation doses and evaluating radiationinduced injury as well as the effects of radiation countermeasures are not available, although multiple parameter biomarkers have been reported [9][10][11]. Radiation causes cellular DNA damage leading to ''danger signals'' and antigen release. These signals and antigens are important proinflammatory causal factors involved in proinflammatory and immune reactions in target cells [12,13]. Massive radiation-induced pro-inflammatory factor release from injured cells may further result in stress response signal activation and cell damage and depletion [14][15][16][17][18]. The interleukin (IL)-1 family cytokines are linked closely to the innate immune response and as the first line of host defense against stress-induced acute and chronic inflammation [19,20]. IL-1 family members IL-1b, IL-18, and IL-33 play key roles in inflammatory and immune responses and have been described as having significant influence on the pathogenesis of diseases [21]. IL-1b and IL-18 are first synthesized as low levels of inactive precursor presenting in healthy human and animal cells, and after cleavage by active caspase-1 become mature active factors secreted in response to disease, stress, and inflammatory stimuli [22,23]. IL-1b induces production of secondary inflammatory factors IL-6, IL-8, tumor necrosis factor-alpha (TNFa), interferon-gamma (IFNc), granulocyte colony-stimulating factor (G-CSF) and granulocyte-macrophage colony-stimulating factor (GM-CSF) [24,25]. IL-18 has a role in destructive inflammatory disorders and stimulates neutrophil migration and activation as well as T helper 1 (Th1) cell differentiation and IL-2, GM-CSF and IFN-c secretion in a variety of cell types through Toll-like receptor (TLR) signaling [20,23]. In contrast, full-length IL-33 is bioactive and its inactivation results from caspase-1-mediated cleavage. Bioactive IL-33 exists in cells and the inactive cleavage form is released by necrotic and dead cells. IL-33 can induce release of T helper 2 (Th2) cells, as well as Th2 type cytokines such as IL-4, IL-5 and IL-13 [26,27]. Ionizing radiation-induced IL-1b, IL-6, IL-8, G-CSF and/or GM-CSF production in mouse BM and intestinal cells and human osteoblast cells have been reported by our laboratory [5,15,18]. A recent study demonstrated that wholebody low dose (0.05-2 Gy) radiation induced IL-12 and IL-18 secretion by mouse peritoneal macrophages [13]. Because radiation tolerances vary in different tissues and cells, we hypothesize that elevated levels of circulating IL-1 family cytokines in serum after TBI proportionally reflect the radiation doses and severity of radiation-induced tissue damage, and can be used as potential biomarkers of ionizing radiation injury. In the current study we compared IL-1b, IL-18, and IL-33, as well as their downstream secondary cytokine expression in mouse BM, spleen, thymus, and serum after c-radiation from sublethal to lethal doses (5,7,8,9,10,or 12 Gy) at different time points and demonstrated that the most radiosensitive and stabile cytokine is IL-18 in mouse serum. We further evaluated levels of circulating IL-18 in minipig plasma and nonhuman primate (NHP) serum after radiation exposure. 
Because hematological biomarkers of exposure to ionizing radiation are well characterized and used in medical management of radiological casualties [28], in this study, close correlations were found between the new radiation biomarker IL-18 and well known hematological radiation biomarkers [29] in our animal models. The present study provides a novel method for determining radiation injury by quantitation of circulating IL-18 in different animal species using ELISA (enzyme-linked immune sorbent assay). Results 60 Co c-radiation-induced expression of IL-1b, IL-18, and IL-33 in mouse tissues Previously we demonstrated that total-body c-radiation induced 50% and 90% mortality within 30 days (LD 50/30 and LD 90/30 ) in CD2F1 mice that had received 8.5 Gy and 9.6 Gy, respectively [6]. Radiation at 8.75 Gy significantly induced apoptosis and death of mouse hematopoietic cells [30], and doses above 10 Gy of total-body irradiation (TBI) destroyed both hematopoietic and gastrointestinal (GI) cells and degraded the GI mucosal-epithelial barrier, which resulted in bacterial translocation from intestines into the blood [5]. Based on these results, the current study was designed so that CD2F1 mice received 60 Co c-radiation exposures at 0 (sham irradiated control), 8, 10 or 12 Gy. Radiation-induced pro-inflammatory cytokine release was first evaluated in different mouse tissues up to 9 days post-irradiation, by which time many mice in the 12 Gy irradiated group were dead. The experiments were stopped at day 14 according to the Institutional Animal Care and Use Committee (IACUC) protocol from the Armed Forces Radiobiology Research Institute (AFRRI). Mouse bone marrow (BM) from femora, humeri, spleens, and thymi were collected on days 1, 3, 6, and 9 after 0, 8, or 10 Gy irradiation (designated as +1 d, +3 d, +6 d, +9 d, with the day of irradiation considered 0 d). Cell homogenates from BM, spleens, and thymi were generated in PBS (phosphate buffered saline). An optimized amount of total protein from each sample in indicated groups (6 mice/group) was applied for determining of IL-1b, IL-18, and IL-33 using quantitative enzyme-linked immune sorbent assay (ELISA). The data are reported as cytokine levels detected in 1 mg of total protein/sample. Results in figure 1A showed that 8 and 10 Gy radiation induced about 4-fold increases of IL-1b and IL-33 in thymi 1 day after irradiation compared to shamirradiated control. Levels of IL-1b reverted to baseline as shown in sham-irradiated control on day 3 and did not change thereafter, whereas IL-33 expression was significantly higher than control up to 6 (8 Gy) and 9 (10 Gy) days post-irradiation with a peak on day 1. In comparison with IL-1b and IL-33, radiation-induced IL-18 increases were observed on day 1 and reached a peak on day 6, with an approximate 2.5-fold increase compared with 0 Gy. At day 9, IL-18 levels in thymi from the 8 Gy and 10 Gy groups were 1.5-fold (8 Gy) and 2-fold (10 Gy) higher than the sham-irradiated control group, respectively. We further examined expression of these cytokines in mice spleens ( figure 1B). As was the case in thymi, IL-1b expression in spleens was relatively low. However, radiation at 8 and 10 Gy significantly increased its level up to 9 days post-TBI. Baselines of IL-18 and IL-33 were high in spleen cells. A radiation-induced 1.5-fold increase of IL-18 after 8 or 10 Gy irradiation was observed in all samples from day 1 to day 9 post-TBI (p,0.05). 
In comparison with IL-1β and IL-18, IL-33 increased markedly in spleen samples after irradiation, with approximately a 3-fold rise on day 3 and a 4- to 5-fold increase on days 6 and 9 post-TBI (p < 0.01). Previously, we reported a transient IL-1β expression in CD2F1 mouse BM in response to γ-radiation [18]. In the current study, we examined IL-18 and IL-33 expression in mouse BM cells after 8 or 10 Gy TBI at different time points. BM samples from each group (6 mice per group) were pooled for ELISA experiments due to the limited number of BM cells collected from each mouse after irradiation. A 1-fold elevation of IL-18 in mouse BM cells was observed on day 1 after exposure to 8 or 10 Gy, and it continued to increase, reaching up to 5-fold after 8 Gy and 12-fold after 10 Gy by 3 days post-TBI. This high level of IL-18 expression lasted up to 9 days post-TBI compared with BM samples from 0 Gy control mice (figure 1C). In contrast, levels of IL-33 in mouse BM were undetectable using the same ELISA method as used in thymus and spleen samples. Next, to confirm cytokine protein expression in mouse tissues determined by ELISA, immunoblotting with antibodies specific for mouse IL-1β, the active form (18 kDa) of IL-18, and IL-33 was performed on sham-irradiated (0 Gy) and 8 Gy irradiated mouse spleen samples. Figure 2 shows western blot results from one representative of three independent experiments. With spleen samples from 3 mice per group, active IL-18 was expressed on day 1, and significant upregulation occurred on day 3 and day 6 after 8 Gy TBI. Because full-length intracellular IL-33 is bioactive, we further examined full-length IL-33 expression in these spleen cell samples. Radiation-induced IL-33 upregulation was shown 6 days after 8 Gy TBI. IL-1β expression was below the detectable level with the immunoblotting method in spleens. These results were in agreement with results from the ELISA experiments.
60Co γ-radiation increased the secretion of bioactive IL-18 in mouse serum
Results from the experiments described above showed different patterns of radiation-induced IL-1β, IL-18 and IL-33 expression in mouse tissues. We next asked whether these cytokines can be determined in mouse serum after radiation exposure. Mouse serum samples from 0 Gy control and 5, 7, 8, 9, 10, or 12 Gy TBI mice were collected on days 1, 3, 6, 9, and 13 after irradiation, and levels of cytokines in the serum of individual mice were measured by ELISA. Data from four independent experiments (6 mice/group in each experiment, N = 24) consistently showed that bioactive levels of IL-18 in mouse sera increased significantly after TBI exposure. The concentration of bioactive IL-18 in sera from the unirradiated control group was 53.5 ± 9.4 pg/ml. Its levels were increased significantly from day 1 to day 6 after TBI, reaching a peak at day 3, ranging from 225.9 ± 9.9 pg/ml (5 Gy) up to 1285.2 ± 149.9 pg/ml (12 Gy) (figure 3A). Nine days after irradiation, levels of IL-18 remained higher for the 5-10 Gy groups compared with the unirradiated control. There were no IL-18 data on day 13 after 10 Gy and on days 9 and 13 after 12 Gy irradiation because most mice did not survive in those groups. In addition, the sensitivity and specificity of IL-18 expression in serum after all doses of γ-radiation at the indicated time points were analyzed by receiver operator characteristic (ROC) curves [31]. Figure 3B shows the ROC curve of IL-18 levels comparing sham-irradiated samples with samples taken 1 day after γ-irradiation.
The ROC curve analysis is summarized by the area under the curve (AUC) with 95% confidence intervals (CI), and p-values from statistical analysis (ANOVA), as shown in table 1. With AUC above 0.94 and p < 0.001 at all time points tested, high specificity and sensitivity of IL-18 in response to radiation were observed after TBI. It is noted that serum samples undergoing several freeze-thaw cycles produced almost identical readings in this ELISA method. In comparison with IL-18, increased secretion of IL-1β and IL-33, as well as of IL-1β's downstream cytokines IL-6 and IL-8, was not observed in sera from irradiated mice from day 1 to day 9 post-irradiation. We further extended the experiments and examined IL-1β, IL-6, IL-8, IL-18 and IL-33 expression in mouse serum at early time points (4 and 8 h) after irradiation. Our results demonstrated that radiation-induced elevation of IL-6 in mouse serum was observed as early as 4 h and reached a peak at 8 h before returning, by 24 h post-irradiation, to the baseline level seen in non-irradiated control samples (figure 3C). In contrast, radiation-induced elevations of IL-1β, IL-8, IL-18, and IL-33 were not detected by ELISA in mouse serum at the early time points after irradiation. To verify whether there are other IL-1 family-mediated secondary cytokines in mouse serum in response to TBI, we screened release of radiation-induced cytokines in mouse serum using a commercially available mouse cytokine antibody array kit that provided antibodies for detection of 62 cytokines, chemokines, growth factors, and soluble receptors of cytokines. The majority of IL-1 family-mediated secondary cytokines were included (table 2). Pooled sera from the 0 Gy control and the 1, 3 and 6 day post-8 Gy irradiation groups (N = 6/group) were used for the cytokine antibody array assay. Figure 4A shows array images from 0 Gy and 1, 3, and 6 days after exposure to 8 Gy, and figure 4B shows 24 cytokines and chemokines detected in mouse serum. Protein expression is shown as a density ratio normalized to the positive control, and the criterion for stating a meaningful difference in protein expression between irradiated and unirradiated samples was at least a 2-fold change [32]. Interestingly, among these factors only three cytokines were changed by radiation. The level of G-CSF was enhanced whereas IL-10 and IL-12p40/70 (IL-12 subunit) were decreased on days 3 and 6 after radiation exposure compared with the sham-irradiated control.
Figure 1. 60Co radiation-induced expression of IL-1β, IL-18, and IL-33 in mouse tissues. Mouse thymi, spleens, and BM were collected 1, 3, 6, and 9 days after 0, 8 and 10 Gy of 60Co TBI. Their homogenates were generated in PBS, and for each assay an equally determined amount of total protein was applied for quantitative detection of IL-1β, IL-18, and IL-33 by ELISA. After normalization, the results were displayed as the cytokine levels measured in 1 mg of tissue homogenate. (A) and (B) show the levels of IL-1β, IL-18, and IL-33 in thymus and spleen homogenates, respectively. Results were from a total of three experiments, N = 6/group in each experiment; *p < 0.05, **p < 0.01; mean ± SD. (C) shows the levels of IL-18 from six BM cell lysates combined; tests were performed in duplicate. doi:10.1371/journal.pone.0109249.g001
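To make the ROC/AUC analysis summarized above concrete, the following is a minimal sketch of how such a curve can be computed for a single serum biomarker. The IL-18 values are invented placeholders rather than the study's data, and scikit-learn is assumed to be available; the authors performed this analysis in SPSS, so the snippet only re-creates the idea, not their exact procedure.

```python
# Illustrative ROC/AUC computation for a single serum biomarker (IL-18).
# Values below are hypothetical placeholders, not data from the study.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical IL-18 concentrations (pg/ml): sham-irradiated vs. day 1 post-TBI.
sham = np.array([48.0, 55.0, 61.0, 50.0, 47.0, 58.0])
irradiated = np.array([170.0, 220.0, 310.0, 260.0, 190.0, 240.0])

# Label sham as 0 and irradiated as 1, then score each sample by its IL-18 level.
labels = np.concatenate([np.zeros_like(sham), np.ones_like(irradiated)])
scores = np.concatenate([sham, irradiated])

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)

print(f"AUC = {auc:.2f}")
for f, t, thr in zip(fpr, tpr, thresholds):
    print(f"threshold > {thr:7.1f} pg/ml -> sensitivity {t:.2f}, 1 - specificity {f:.2f}")
```

An AUC close to 1, as with the values above 0.94 reported in the study, indicates that a simple threshold on serum IL-18 separates irradiated from sham-irradiated animals almost perfectly at these time points.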
60 Co TBI increased the secretion of bioactive IL-18 in circulation of rhesus macaques and Gottingen minipigs We next asked whether the elevation of circulating IL-18 also reflects radiation injury in large animal models such as NHPs and Gottingen minipigs. In this study, frozen serum from rhesus macaque and minipig plasma samples were shared with our other ongoing projects. Because the LD 50/60 in NHPs (Rhesus macaques) is 6.44 and the LD 90/60 is 7.56 Gy of 60 Co c-radiation without supportive care [8], serum samples from 5 adult rhesus macaques (4 females and 1 male, 3 to 8 years of age, and 4-8 kg of body weight) exposed to 7 Gy of 60 Co-TBI were examined according to the methods described in ''Material and Methods''. Serum samples were collected before and 2 and 4 days after 7 Gy irradiation from individual animals, and levels of IL-18 in these samples were measured using the ELISA method. Using the serum samples collected pre-radiation as control, levels of IL-18 in sera collected from the same animals 2 and 4 days after radiation were evaluated. Results in figure 5 show a significant increase of IL-18 in sera taken from all 5 animals 2 days after TBI compared to preirradiation samples from the same animals (p = 0.021). Four days after irradiation, radiation-induced increases of IL-18 in NHP serum were still observed in 4 (3 females and 1 male) out of 5 animals. We recently reported the suitability of using the Gottingen minipig as an additional large animal model for radiation research and developed dose response curves for bilateral c-radiation of Gottingen minipigs [33]. Our results demonstrated that the Gottingen minipig is very sensitive to c-radiation with an LD 10/ 30 at 1.59 Gy, LD 50/30 at 1.73 Gy, and LD 90/30 at 1.86 Gy of 60 Co-TBI, respectively [7]. Hence we decided to evaluate levels of circulating IL-18 in this animal model in response to 60 Co-TBI. Available frozen plasma samples from male Gottingen minipigs exposed to 1.6 (N = 4) and 1.78 Gy (N = 5) of 60 Co-TBI were examined by ELISA for quantitative detection of pig IL-18. ELISA kits with anti-minipig IL-18 antibodies that detect plasma and serum samples from the minipig were used according to methods described in ''Material and Methods''. Plasma samples were collected from individual animals before and 3 h and 1, 2, 3 and 7 days after irradiation, and levels of IL-18 in these samples were measured. Using the plasma samples collected pre-irradiation as control, levels of IL-18 in the sera collected from the same animal at different time points after radiation were evaluated. Although large variations of IL-18 levels between individual animals at the same time point are shown in figure 6A, radiation significantly induced an increase of IL-18 in 1.6 Gy irradiated minipigs' plasma at 1 and 3 days after TBI (figure 6B). Furthermore, levels of IL-18 were measured in pooled samples at each indicated time point including pre-radiation, 3 h, 2 and 3 days after 1.78 Gy of TBI due to limited sample volume. As shown in figure 6C, results indicated an extensive increase of IL-18 in the pooled minipigs' plasma sample collected 3 days after 1.78 Gy of TBI. Comparison of circulating IL-18 with hematological radiation biomarkers in mice, minipigs and NHPs Peripheral blood was collected from sham-or c-irradiated mice, and pre-and post-irradiation in minipigs and NHPs. 
The absolute lymphocyte counts (ALC) and ratio of absolute neutrophil counts (ANC) to ALC (ANC/ALC) as hematology radiation biomarkers [29] were compared with circulating IL-18 from same samples of unirradiated and irradiated animals. The discrimination of radiation-induced IL-18, ALC and ANC/ALC in mice, NHPs and minipigs were analyzed and results are shown in figure 7. ANC levels were increased on day 1, followed by significant decreases on day 3 and 7 after radiation, resulting in an elevation Discussion Ionizing radiation can induce a variety of biological injuries depending on the physical nature, duration, doses and dose-rates of exposure [2,34]. Information from individual exposures is essential for early triage during radiological incidents to provide optimum possible life-sparing medical procedures to each person [35]. A rapid, easy-to-use, inexpensive and accurate radiation dose-assessment assay that tests easily obtained samples such as blood or urine with transportable equipment is in urgent need in emergency scenarios to triage and track radiological casualties, and to evaluate the radioprotective and mitigative/therapeutic effects of radiation countermeasures [10,11,36]. A good biomarker should be specific in differentiating pathologies, sensitive to facilitate rapid and significant detection before or during the development of pathology, and stable in different conditions so it can be extracted from biopsies fixed for diagnostic staining or from stored body fluids. However, this biomarker is not yet available. Whole-body radiation-induced multi-tissue injury could result in specific antigen secretion. Ionizing radiation-induced inflammatory cytokine and chemokine production and secretion from injured cells may result in stress response signal activation and cell damage and depletion [12,15,37]. The IL-1 family of cytokines plays key roles in inflammatory and immune responses [21], therefore it can be the first line of host defense against stresses [19,20]. In the current study, we examined the effects of c-radiation on three IL-1 family cytokines, IL-1b, IL-18, and IL-33. We also studied their secondary cytokines' expression and secretion in different mouse tissues and serum using ELISA, immunoblotting and mouse cytokine antibody array as these cytokines have been described as significantly influencing pathogenesis of diseases, including radiation injury [12,21,38,39]. We expected to identify biomarkers that can be used easily as accurate radiation dose-assessment assays after acute radiation injury. Our data identified significant increases of IL-1b, IL-18, and/or IL-33 in mouse thymus, spleen and BM cells after irradiation. However, levels of these cytokines varied in different tissues in response to the same dose of radiation at indicated time points, and it is difficult to determine which of these cytokines is the best radiation biomarker candidate. We further examined these cytokines in mouse serum and hypothesized that elevated levels of circulating IL-1 family cytokines in serum after total-body radiation may reflect proportionally the radiation doses and severity of radiation injury in individual animals. Interestingly, of all cytokines we examined, only IL-18 increased significantly and persisted in mouse serum for at least 13 days after irradiation (figure 3). 
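The biomarker comparison described above (circulating IL-18 rising while absolute lymphocyte counts fall after irradiation) amounts to testing for a negative correlation between two measurements taken from the same animals. A minimal sketch of such a test is given below; the paired values are hypothetical placeholders and SciPy is assumed, so this illustrates the type of analysis rather than the authors' own.

```python
# Illustrative correlation between circulating IL-18 and absolute lymphocyte counts (ALC).
# The paired values are hypothetical placeholders, not measurements from the study.
import numpy as np
from scipy.stats import pearsonr, spearmanr

il18_pg_per_ml = np.array([55.0, 230.0, 480.0, 610.0, 820.0, 1100.0])      # rises with dose/time
alc_cells_per_ul = np.array([2100.0, 1500.0, 900.0, 600.0, 350.0, 200.0])  # falls after TBI

rho, p_spearman = spearmanr(il18_pg_per_ml, alc_cells_per_ul)
r, p_pearson = pearsonr(il18_pg_per_ml, alc_cells_per_ul)

print(f"Spearman rho = {rho:.2f} (p = {p_spearman:.3f})")
print(f"Pearson r    = {r:.2f} (p = {p_pearson:.3f})")
```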
In four independent experiments with 6 mice per group in each experiment (total N = 24/group), radiation-induced IL-18 increases in mouse serum were observed on day 1 post-irradiation and continually increased and reached a peak on day 3 with 4.5 to 24-fold increases after 5-12 Gy compared to serum samples from sham-irradiated controls. The sensitivity and specificity of circulating IL-18 increases in response to gamma radiation were evaluated by receiver operator characteristic (ROC) curves with 95% confidence intervals (CI), which is a recommended standard statistical method for development of biomarkers [31]. Furthermore, we found that serum samples undergoing several freeze and thaw cycles produced almost identical readings in ELISA, suggesting flexibility in storage conditions. To verify whether there are other cytokines in mouse serum in response to TBI, we screened release of radiation-induced cytokines in mouse serum using a mouse cytokine antibody array kit including 62 cytokines, chemokines, growth factors, and soluble receptors of cytokines. The majority of IL-1 family-mediated secondary cytokines were included. Three cytokines responded to radiation with one showing an increase (G-CSF) and two a decrease (IL-10 and IL-12p 40/70 ). G-CSF is a secondary cytokine of IL-1b [24], and its increase may have reflected IL-1b activation in radiation injury. However, radiation-induced G-CSF increases happened relatively late (3 days post-radiation) and the level of G-CSF in irradiated mouse serum was low, as shown in figure 4. Although it was detected by cytokine antibody array, the result may not be detectable by ELISA. Thus, G-CSF may not be as good a radiation biomarker as IL-18. We further confirmed our finding in frozen minipig plasma and NHP serum after sham-and c-irradiation and demonstrated that radiation significantly enhanced IL-18 in serum from 5 NHPs 2 to 4 days after 7 Gy irradiation and plasma from minipigs (samples of total 9 animals) 1-3 days after 1.6 and 1.78 Gy irradiation. Because hematological biomarkers of exposure to ionizing radiation are well characterized and used in medical management of radiological casualties [28], in this study, comparisons were also made between the new radiation biomarker IL-18 and the wellknown hematological radiation biomarkers [29] in three animal models. Our data demonstrated that a significant reduction of absolute lymphocyte counts (ALC) in animal blood after radiation was negatively correlated with radiation-induced increases in circulating IL-18. Thus, for the first time we demonstrated that circulating IL-18 increased and existed stably in mice, minipigs and NHPs after 60 Co cirradiation in a radiation dose and timedependent manner, although the radiation-tolerance levels in these animals differ significantly. Recent studies suggested that IL-33 represents a group of IL-1 family factors which retain some intracellular functions and are passively externalized upon cell lysis. In contrast, IL-1b and IL-18 are induced in restricted inflammatory cells by inflammatory stimuli and undergo regulated secretion [20,40]. Interestingly, there are specific differences between IL-1b and IL-18. For example, an IL-18 precursor is present constitutively in almost all cells including hematopoietic cells, mesenchymal cells, and epithelial cells of the GI tract in healthy humans and animals, whereas the IL-1b precursor rarely is found in these cells [23]. 
IL-1b is produced by monocytes, macrophages, dendritic cells (DC), B-lymphocytes and nature killer (NK) cells [41]. In addition, IL-1b activation of cells usually needs picograms (pg) to nanograms (ng) per milliliter (mL), whereas IL-18 requires 10-20 ng/mL or even more [42]. Consistently, our results demonstrated that IL-18 was increased significantly and was easily measured using the ELISA method in mouse and NHP serum and minipig plasma after TBI, and levels of IL-18 in the sera of the three animal models were correlated tightly with radiation exposure. In contrast, IL-1b in mouse serum was undetectable by ELISA regardless of radiation. The results comprise a proof of concept that measurement of IL-18 in blood may be useful for estimating radiation exposure at the indicated time points. Furthermore, using the inexpensive and easy-to-use ELISA method to evaluate the level of IL-18 from easily obtained serum samples after radiation exposure may be useful to assess the activity and severity of radiation-induced damage, and to track health status after radiation injury and therapy. The mechanism(s) by which radiation regulates inflammatory cytokine IL-18 expression and secretion are under investigation. Shan at al. recently demonstrated that whole-body low dose (0.05 and 0.075 Gy) ionizing radiation (X-ray) induced IL-12 and IL-18 secretion by mouse peritoneal macrophages [13]. Another report by Kang et al. indicated that low dose (10 cGy) of TBI or half-body radiation (only the area below the xyphoid process was Table 2. Cytokine antibody array map. irradiated) upregulated IL-18 mRNA expression in mouse peripheral blood mononuclear cells [43]. It suggests that low dose radiation promotes the innate immune response which can induce IL-18 expression. In future studies, we will evaluate circulating IL-18 after low dose (,100 mSv or 1 Gy) gamma radiation in mouse and/or large animal models, since radiation accidents can cause low dose radiation exposure and low dose radiation-induced health risks not only involve neoplastic diseases but also mutations that may contribute to different diseases [44]. Ethics Statement Animals were housed in an Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC) -approved facility at the Armed Forces Radiobiology Research Institute (AFRRI). All procedures involving animals were reviewed and approved by the AFRRI Institutional Animal Care and Use Committee (IACUC) and all efforts were made to minimize suffering. Animals received total-body irradiation (TBI) in a bilateral gamma radiation field at AFRRI's Cobalt-60 ( 60 Co) facility. The day of irradiation was considered day 0. Control animals were sham-irradiated and treated in the same manner as the irradiated animals, except the 60 Co source was not raised from the shielding water pool. Animals Mice. Twelve-to 14-week-old CD2F1 male mice (Harlan Laboratories, Indianapolis, IN) were used according to methods described in previous reports [30]. All animals were acclimatized upon arrival and representative animals were screened for evidence of disease. Animal rooms were maintained at 2162uC with 50%610% humidity on a 12 h light/dark cycle. Commercial rodent chow (Harlan Teklad Rodent Diet 8604) was available ad libitum as was acidified water (pH J 2.5) to control opportunistic infections. Animals were chosen randomly for each experimental group and received either 0 (sham-irradiation), 5, 7, 8, 9, 10 or 12 Gy at a dose rate of 0.6 Gy/min. 
After irradiation, mice were returned to their home cages with food and water provided as usual. Minipigs. Male Gottingen minipigs (Total 9 animals were used in this study. 4 months of age, 9-11 kg) were obtained from Marshall BioResources (North Rose, NY). The Gottingen minipig is the smallest minipig available specifically bred for biomedical purposes. Procedures were performed in accordance with protocols approved by the AFRRI-IACUC as previously reported [33]. Briefly, minipigs were singly housed in adjoining cages that allowed tactile, visual, olfactory and auditory contact through cage bars. Room temperature was kept between 64 and 79uF (17.8 to 26.1uC) and humidity between 50%620%. Environmental enrichment and stimulation were provided in the form of physical devices (treats, sanitized toys) and positive interactions with caretakers. Minipigs were fed twice daily (Harlan Teklad Minipig diet 8753, Madison, WI, USA) according to individual weights and provider recommendations; water was provided ad libitum. The animals were subjected to total bilateral body irradiation using 60 Co, with radiation doses of 1.6 and 1.78 Gy at a dose rate of 0.6 Gy/min as described in our previous report. After irradiation, minipigs were returned to their home cages with food and water provided as usual. Nonhuman primates (NHPs). Rhesus macaques (4 female and 1 male, 3 to 8 years of age, and 4-8 kg of body weight) used in the present study was part of an ongoing project on evaluation of radiation countermeasures in NHPs. Research with animals was conducted according to the principles enunciated in the Guide for the Care and Use of Laboratory Animals prepared by the Institute of Laboratory Animal Resources, National Research Council. The animal protocol describing care, radiation and blood collection was approved by the AFRRI-IACUC and the radiation procedure has been described previously [8,35]. Briefly, Rhesus macaques were housed individually in sanitized stainless-steel cages in conventional holding rooms provided with a minimum of 10-15 changes/h of 100% fresh air, conditioned to 18-29uC and a relative humidity of 50%620% on a 6:00 o'clock light-18:00 o'clock dark full-spectrum light cycle. Environmental enrichment and stimulation were provided in the form of physical devices (treats, sanitized toys) and positive interactions with caretakers. Animal were fed twice daily (Teklad Global 20% Protein Primate Diet Jumbo T-2050J) according to individual weights and provider recommendations. Diets were supplemented with fruit, vegetables and liquid diets, and water was provided ad libitum. Ketamineanesthetized Rhesus macaques were placed in Plexiglas chairs and exposed to a total-body radiation to midline tissue dose of 7.0 Gy at dose rate of 0.6 Gy/min. After irradiation, animal were returned to their home cages with food and water provided as usual. Mouse peripheral blood cell counts and serum and tissue preparation On days 1, 3, 6, 9 and 13 after TBI, mice were humanely euthanized for serum and tissue collection. Euthanasia was carried out in accordance with the recommendations and guidelines of the American Veterinary Medical Association. The mice were deeply anesthetized prior to collecting whole blood through a cardiac blood draw in accordance with the approved IACUC protocol. The blood was immediately divided into two tubes. 
The samples in EDTA tubes were used for peripheral blood cell counts by a clinical hematology analyzer (Bayer Advia 120, Bayer, Tarrytown, NY) at the AFRRI Veterinary Sciences Department facility, and samples in BD Microtainer Gold tubes were left unmoved on racks. Following 30 minute coagulation at RT, the sera were well separated from the gel by 10 minute-centrifugation at 10,0006g/ min, collected and stored at -80uC for later study. Once blood collection from individual mice and the mouse euthanasia were completed, mouse tissues were collected. Bone marrow cells were collected from mouse femurs and humeri. After erythrocytes were lysed by erythrocyte lysis buffer (Qiagen GmbH, Hilden, USA), total bone marrow myeloid cells were collected for further experiment use. Mouse spleens and thymuses were excised, rinsed with PBS, and snap-frozen in liquid nitrogen then stored at -80uC for further use. Protein extraction and immunoblotting The frozen mouse tissues (spleens, thymuses, livers, and lungs) were homogenized in 1X radio-immunoprecipitation assay buffer (RIPA, Sigma-Aldrich, St Louis, MO, USA) (supplemented with a protease inhibitor tablet) by tissue homogenizer (Fast Prep-24, MP Biomedicals, Solon, OH, USA), following the manufacturer's recommendations. After 15-min centrifugation at 12,0006g/min, the supernatant was collected and protein concentrations were determined using a BCA assay kit (Pierce, Rockford, IL, USA). The collected homogenates were denatured in Laemmli buffer supplemented with DTT (dithiothreitol), and the same amount of protein from each sample (100 to 120 mg) was loaded for SDS-PAGE electrophoresis. Subsequently, immunoblotting was performed following standard procedures with an enhanced chemiluminescence kit (Thermo Scientific, Rockford, IL, USA). The images were captured by CCD camera and the resulting densitometry was assessed using ImageGauge software. Protein densitometry was normalized to beta-actin. Antibodies for mouse IL-1b and IL-18 were purchased from R&D (Minneapolis, MN, USA), for beta-actin from Sigma-Aldrich (St Louis, MO, USA), for mouse IL-33 from Santa Cruz (Santa Cruz Biotechnology, Dallas, TX, USA). Blood sampling from minipigs and NHPs To facilitate collection of minipig blood samples according to the IACUC protocol, animals were quarantined for two weeks and implanted with a vascular access port (VAP). After 3 weeks of recovery from surgical implantation of the VAP, their blood samples were obtained from the VAP before and after irradiation at indicated time points. Blood was collected via a strictly aseptic technique in sample tubes containing EDTA and immediately stored on ice until further processing. After blood sample collection, the animals were returned to their original cages. NHP blood samples from individual animals were collected for all experimental time points (pre-irradiation, 2, and 4 day postirradiation) according to the IACUC protocol. Blood was collected from a peripheral vessel or femoral vein with a 22-25 G heparinized needle/syringe. Serum samples were maintained at -70uC until assay. After blood sample collection, the animals were returned to their original cages. Both minipigs and NHPs' peripheral blood cell counts were performed at the AFRRI Veterinary Sciences Department facility using a clinical hematology analyzer (Bayer Advia 120, Bayer, Tarrytown, NY). 
Cytokine quantitation by enzyme-linked immune sorbent assay (ELISA) Quantitation of IL-1b, IL-6, IL-8, IL-18, and IL-33 was performed using ELISA kits suitable for detecting these cytokines in sera and cell lysates. Cytokine levels in BM cells, spleen and thymus tissue homogenate were determined following assay instructions provided by manufacturers. Briefly, BM cells after erythrocyte removal, spleen and thymus tissue from individual mice were homogenized and sonicated in PBS plus proteinase inhibitor, followed by 15 min of 12,0006g centrifugation. The supernatant was collected and subjected to protein determination (BCA assay). The supernatant with an equivalent amount of protein (10 to 100 mg) from each sample was evaluated in duplicate. Statistical analysis was conducted from group samples of 6 mice. ELISA kits for determining cytokines in mouse serum and tissues were purchased from R&D (Minneapolis, MN, USA). The IL-18 ELISA kits for minipigs and NHPs were purchased from Bioscience (San Diego, CA, USA). The limits of detection of IL-18 are 25 and 23 pg/ml for mouse and minipig samples, respectively. Cytokine antibody array Pooled mouse sera from 6 mice/group were subjected to cytokine antibody array analysis using the Ray Bio Mouse Cytokine Antibody Array 3 kit (Ray Biotech, Inc. Norcross, GA, USA) according to manufacturer's instructions. In brief, the array membrane coated with cytokine antibodies was first blocked with blocking buffer and then incubated with 1.0 mL of pooled mouse sera (1:3 dilution) followed by washing and incubation with biotinconjugated second antibody and horseradish peroxidase-conjugated streptavidin. The membrane was developed using enhanced chemiluminescence solution (Thermo Scientific, Rockford, IL, USA) and exposed to x-ray film. Protein expression was expressed as a percentage density normalized to background and calculated using Fuji SuperArray analysis. Statistical analysis Differences between means were compared by one way analysis of variance (ANOVA) with Dunnett's Post-Hoc test and multivariate analysis of variance (MANOVA). P,0.05 was considered statistically significant. Results are presented as means 6 standard deviations or standard errors of the mean as indicated. The sensitivity and specificity of single biomarker were analyzed by the receiver operator characteristic (ROC) curve using IBM-SPSS program (SPSS Statistics Professional). Results are presented as area under ROC curves (AUC) with 95% confidence interval (CI).
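The ELISA quantitation described above implicitly relies on interpolating each sample's optical density against a standard curve supplied with the kit. The sketch below shows one common way to do this, assuming a four-parameter logistic (4PL) standard curve; the standard concentrations, optical densities, and the choice of the 4PL model are illustrative assumptions, not details taken from the paper.

```python
# Illustrative ELISA quantitation: fit a four-parameter logistic (4PL) standard curve
# and interpolate unknown sample concentrations from their optical densities (OD).
# Standard concentrations and OD values are hypothetical, not from the study.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, b, c, d):
    """4PL model: a = lower asymptote, d = upper asymptote, c = inflection (EC50), b = slope."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

# Hypothetical standard curve (pg/ml vs. background-corrected OD).
std_conc = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_od = np.array([0.08, 0.15, 0.29, 0.55, 0.98, 1.60])

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.02, 1.0, 250.0, 2.0], maxfev=10000)
a, b, c, d = params

def od_to_conc(od):
    """Invert the fitted 4PL curve to recover concentration from an OD reading."""
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

sample_od = np.array([0.20, 0.70, 1.20])
print("Estimated IL-18 (pg/ml):", np.round(od_to_conc(sample_od), 1))
```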
8,509.2
2014-10-07T00:00:00.000
[ "Biology", "Medicine" ]
Reconstruction of cultural memory through digital storytelling: A case study of Shanghai Memory project This article analyses how digital storytelling (DS) is applied to a digital humanities (DH) research project. It considers the purpose of storytelling and illustrates its use to help to democratize the wider project by including diverse voices and helping to reconstruct cultural memory. How can DS be used as a critical research method to help develop a robust methodology in DH research, particularly for organizing historical and cultural resources to form a story world and addressing biases in the established archival collections? This initiative is the latest phase of the Shanghai Memory project, adding an important additional dimension to the established showcase, A Journey from Wukang Road. Wukang Road, with many historical buildings going back to the colonial era, has important cultural significance as part of the former French Concession. The road was originally known as Rue de Ferguson; the name was changed in 1943, at the time of the Japanese occupation, seemingly as part of anti-colonial sentiment while China was being encouraged to resist her occupiers. Participation in the storytelling project is facilitated by user-generated content and promotion in the Shanghai Library. The aim is to present a clearer storyline about the evolution of Wukang Road, explore its historical context, use the stories and reflections of ordinary people to balance those of the elites, and, importantly, encourage inclusion of the vernacular Shanghainese dialect as part of wider movements to protect local languages.
Introduction
From classical narrative theory, usually traced back to Aristotle's Poetics, to modern theories such as poststructuralism in the 1960s, narrative and the study of storytelling have always been a crucial area of literary research (Armstrong and Tennenhouse, 1993). However, the form of the narrative never matches any specific literary genre, and in its essence, any record relevant to human expression, creation, interpretation, and construction can be regarded as narrative, that is, a series of symbols and media with internal logic. It is an act of communication between the storyteller and the audience/listener.
The act of storytelling can be interpreted as a means of describing and presenting concepts or events in a logical and coherent way to easily reach the listeners and be widely disseminated.Although it may be true that a straight line is the shortest distance between two fixed points, when it comes to two people, it would perhaps be more accurate to say that a story is the shortest distance between them, something that can unite and bring them closer together, particularly if the experiences within the story are shared.Effective storytelling is based on the full participation of the self and of others, offering a mechanism for expression that resonates cognitively and emotionally among the listeners (Chaitin, 2003).As an approach to construct and express meaning, storytelling can also be seen as a process of reconstructing memory, the past as well as the culture of individuals, groups, and communities.For the audience, it can be a process of understanding and reinterpreting their lives and experiences, evoking corresponding emotions and thoughts, and spreading other related effects such as interactions and discussions among the listeners, prompting reflection and encouraging creativity (Bizzini, 2013).Storytelling is a way in which we can make sense of things, understand our world and our place in it.Telling stories also allows the teller themselves to reengage with their memories, opening up those neural pathways to past emotions through episodic retrieval (Rugg and Vilberg, 2013), and perhaps find new meaning themselves; a way for the teller, not only the listener, to make sense of things (M€ unster et al., 2019). In the process of investigating and studying human expression and creativity, humanities scholars have always sought appropriate ways to present, reconstruct, and disseminate human narratives in different settings, and cultural memory institutions play an essential part in this.Cultural memory institutions such as galleries, libraries, archives, and museums (GLAM institutions), as repositories for the human record and creativity, possess cultural resources that are themselves collections of various forms of human narrative.They need to find appropriate ways to fully present, reconstruct, and disseminate them.Works of art tell a story, both with their content and provenance, as do the records held in archives, both local and national.Since the 1990s, the digital turn has brought about a methodological and epistemological shift in humanities research and also the practice in GLAMs (Barber, 2016;Dakovi c, 2021).The concept and method of digital storytelling (DS), as a branch of storytelling, finds its way in creating, expressing, interpreting, and sharing stories by digital tools and new media forms.These provide new possibilities to engage narrative contents more widely, digging down to find knowledge that was always there but never before included in the story (Malita and Martin, 2010). This study builds on previous research on the Wukang Road as part of the Shanghai Memory project. 
It moves the research to the next planned level, which is to engage with local people and to bring in their voices to help to reconstruct cultural memory. It examines how DS can support the reconstruction of cultural memory and assesses its value, both epistemologically and methodologically, as a sub-part of an extensive digital humanities (DH) project. It also provides a new angle of approach to help us better understand how these methodologies can support post-colonial research within the wider picture of Shanghai's memory. This article draws on extensive published literature and reflection about DS and its relationship with cultural memory. It analyses how DS as a technique is applied to encourage and facilitate cultural memory reconstruction as part of the Shanghai Memory project, hosted at Shanghai Library.
DS as democratization of culture
From the perspective of media evolution, human expression and narrative have gone through four key stages: the oral age, the chirographic age, the print age, and the digital age (Ryan, 2004). 'The medium is the message', as the communications theorist Marshall McLuhan claimed; 'the personal and social consequences of any medium [ … ] any extension of ourselves result from the new scale that is introduced [ … ] or by any new technology' (McLuhan, 1964, chapter 1). Storytelling, with its roots in the pre-literate oral tradition, as one of the primary forms of human expression, depends heavily on the medium, and the evolution of media driven by technology constantly provides new forms and possibilities for expressing, creating, delivering, and sharing stories. 'Narratives are everywhere. We tell narratives about ourselves, and we make the world meaningful through storytelling. We position others through the narratives we tell and are positioned by stories told about us' (Forchtner, 2021, p. 314).
The theory and practice of DS have been developing steadily since the 1990s thanks to the development of the interactive web, with its possibilities for user-generated content (UGC) and participation, and the advancement of multimedia technology. The concept of DS was first conceived and developed in the field of media, with a focus on audio-visual story creation using digital media (Lambert, 2018). Following this, ideas and practice extended into multiple fields such as public history (Burgess and Klaebe, 2009) and education (Robin, 2008), where there is a close relationship with human narrative. These fields discussed the possibilities for DS as they encountered the digital turn, which prompted the move from traditional storytelling approaches and techniques into the digital sphere and brought about epistemological as well as methodological shifts (Noiret, 2018). In the media field, it first gained attention in popular movements using multimedia digital tools to help ordinary people tell their stories and has since been used in journalism and media studies to refer to various emerging forms of DS. In public history, the reproduction and reconstruction of historical memories generated through the use of DS can be seen as an important addition to both official and private collections concerning local communities (Conrad, 2013). In educational practice, it is regarded as an effective teaching tool for enhancing the interaction between students and teachers, encouraging dialogue between the two, and helping students understand important concepts and knowledge (Robin, 2008; Smeda et al., 2014).
DS, understood here as a movement or method for creating, expressing, interpreting, and sharing stories and personal experiences with the use of digital tools and new media forms, has been viewed as a democratization of culture (Burgess, 2006). As both consumers and participants of mass media, people publish, share, and disseminate their daily lives, experiences, personal stories, and all kinds of subjective reflections through digital means, all of which can be seen as typical DS practice. These reflections are then transformed into the public domain through social media, becoming part of the mass culture; therefore, DS is regarded by many media researchers as an important way of embodying folk creativity with the assistance of new media forms (Burgess, 2006). From the perspective of media research, the act of storytelling itself can be closely related to the expression of social rights and unequal power distribution; the act of storytelling in traditional media channels often lacks the ability to fully represent society, and thus the emergence of DS is argued by some to be a part of social justice movements that challenge the power of the mainstream discourse (Canella, 2017). As Castells (2011, p. 773) argues, 'wherever there is power, there is [what he calls] counterpower', and DS can be used as a powerful tool in the 'ways in which narratives are crafted and [ … ] the struggle over how dominant paradigms are established, reinforced and [also importantly, how they are] resisted' (Canella, 2017, p. 26). Researchers in public history and archival fields have adopted DS to explore and gather individual and collective memories of the marginalized, the minority, the overlooked, and the forgotten (Burgess and Klaebe, 2009). The production and reconstruction of collective and cultural memories generated through DS practice is treated as an important complement to formal and informal historical collections (personal and family archives) (Burgess, 2006; Conrad, 2013). It is not the digital technology used in presenting stories that historical researchers pay most attention to; instead, they are concerned with preserving memories, increasing community interaction, and sharing historical knowledge with the support of digital technologies. For example, Bristol Stories 2 is a DS project run by Watershed (cultural cinema) and M Shed with support from Bristol Museums, Galleries and Archives, and Bristol City Council, in the UK; here local residents use digital technologies to produce and present storytelling materials online to form a story map of the city's history. The underlying ethos of the project is that everybody has a story to tell, and these personal stories have an intrinsic value as a trigger for memory [ … ]. What lies at the heart of each story is that person's unique voice, telling us about the people, places and events that are important in their lives. (Bristol Stories, n.d.)
GLAM institutions, working with local communities, such as the above, use DS as one of their essential tools for collecting important pieces of evidence and material for preserving the memory of the community. These contain more diverse and efficient memory materials than the traditional single-form historical records used in the past, such as scattered textual archival records, undigitized old photos, untranscribed oral history materials (audio and video recordings), and so on. Despite the ongoing discourse and practice of DS in education, history, and media research, its theory construction in DH and its practice in GLAMs are still at an exploratory stage. Nevertheless, it has been suggested that DS can be repurposed for DH research as a new way of thinking and approaching research, and for updating the DH paradigm epistemologically and methodologically (Barber, 2016). DS provides new opportunities for DH as both academic fields seek to encourage dialogue, make the world comprehensible, and discover new ways of interaction with the support of digital tools (Barber, 2016). DS, as part of the research toolkit, can also serve as a bridge between cultural heritage and DH with 'space and time as shared concept[s]' (Münster et al., 2019, p. 814). DS helps us to analyse cultural heritage with the historical and cultural background that is linked to it. Repurposing ideas from Mikhail Bakhtin's theory of the Chronotope, the dispersed semantic elements that appear in the stories embedded in cultural heritage can be structured based on the dimensions of time and space (Lawson, 2011). DH methods provide semantic tools such as the Resource Description Framework and linked open data to structure the cultural heritage data so that stories can be retold in temporal and spatial dimensions. The digital method is also claimed to be useful in activating audience participation, which also brings additional value (Münster et al., 2019). As an important source of materials with research potential, cultural heritage collections preserved in GLAMs provide DH practitioners with great potential to reconstruct knowledge and cultural information, add new possibilities to their scholarship, discover hidden knowledge, and support knowledge creation with audience participation through the lens of DS. From collective memory to cultural memory Memory is dynamic and complex to analyse. There are many derivations of memory concepts, such as collective memory, social memory, cultural memory, public memory, and so on, which demonstrate the diverse principles, scope, and layers involved. Maurice Halbwachs proposed the concept of collective memory by analyzing the three sociological categories, which he described as family, religion, and class, and specified the oppositional relationship between individual memory and collective memory: 'autobiographical memory' and 'historical memory' (Halbwachs, 1980, p. 50). He asserted that individual memory impacts not only the shaping of a person's identity but also the way in which they respond to their society; in addition, despite the effects of individual memory, collective memory evolves within its own pattern, and any personal memories can potentially be changed and transformed in this process without any awareness (Halbwachs, 1980). Also, individuals 'extend [their] family memory in such a way as to encompass recollections of [their combined] worldly life' (Halbwachs, 1992, p.
81). Consequently, an important function of memory within society is that it brings people who share similar memories together; that is, the collective memory, which forms a part of the bonds 'based on social union' that strengthen the ties of association and common interest within the community (Tönnies, 2001, p. 131). Additionally, Halbwachs claimed that memory not only exists in the private and individual realm, but collectively at a societal level with the definition and formation of relationships in social networks (Halbwachs, 1992). Recollections of memory may differ between individuals, but they help us to understand ourselves within our shared cultural context. In addition, spatial elements (places, locations, roads, architecture, and so on) play an important role and act as triggers in the construction of social and cultural memories (Stanković, 2014). In this project, we follow cultural memory theory from Assmann and Czaplicka, where cultural memory connects the three aspects of 'memory (the contemporized past), culture, and the group (society)', emphasizing the different ways in which communities form their cultural understanding over the course of their history (Assmann and Czaplicka, 1995, p. 129). Moreover, what is stored in historical archives is materially preserved and catalogued; it becomes part of an organizational structure, which allows it to be easily sourced. [ … ] The archive, therefore, can be described as a space that is located on the border between forgetting and remembering; its materials are preserved in a state of latency, in a space of intermediary storage. (Assmann, 2008, p. 103) To interpret the inert knowledge hidden in the memory archives, it can be inspected and reclaimed by situating it in a new memory context (Assmann, 2008). Moreover, the feelings experienced in places that carry the passage of time and historical events are more vivid than those experienced by reading. The physicality of place combined with personal history and experience can trigger powerful emotions. Moving through a space 'at a particular time, in a particular way [ … ] might deepen our understanding of human interaction with [that] place more broadly. It means communicating these things meaningfully as stories or arguments' (Dunn, 2019, p. 156). Method This article examines how DS has been used as a critical research method in the DH project A Journey from Wukang Road, initiated by Shanghai Library. The site of Wukang Road and its associated buildings, the celebrities that lived there, the historical events, and other recorded knowledge are the research objects. The project uses knowledge organization methods, linked data, and UGC to extract, link, and create narrative elements and relevant details about people, events, activities, and historical changes. It also explores DS as a DH research method and discusses its uniqueness and value in DH research compared with its application in other fields. The data in this project are pulled from the extensive library resource collections, including newspapers, old photos, books, maps, videos, and audio recordings. Through a discussion forum, users are encouraged to input information about their stories, memories, and thoughts about Wukang Road and its history. This constitutes the UGC part of this project. In this way, the project aims to reconstruct and restore the historical evolution of Wukang Road over more than 100 years.
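To make the idea of organizing narrative elements as linked data concrete, the sketch below shows how a few entities and relationships around the road might be expressed as RDF triples in Python using rdflib. It is only an illustration: the namespace, entity names, and properties are hypothetical and are not the vocabulary actually used by Shanghai Library.

```python
# Minimal sketch: narrative elements (a road, a building, a resident) expressed as
# RDF triples so they can be linked and later queried along time and space.
# The namespace and property names below are hypothetical placeholders.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

SHM = Namespace("http://example.org/shanghai-memory/")  # hypothetical namespace

g = Graph()
g.bind("shm", SHM)

road = SHM["WukangRoad"]
building = SHM["Building_01"]        # a hypothetical protected building
person = SHM["Resident_01"]          # a hypothetical former resident

g.add((road, RDF.type, SHM.Road))
g.add((road, RDFS.label, Literal("Wukang Road", lang="en")))
g.add((road, RDFS.label, Literal("武康路", lang="zh")))
g.add((building, RDF.type, SHM.HistoricBuilding))
g.add((building, SHM.locatedOn, road))
g.add((person, SHM.livedIn, building))

print(g.serialize(format="turtle"))  # Turtle output ready for linked-data publication
```

A graph populated in this way can then be enriched with dates and coordinates so that the same triples support the temporal and spatial reorganization of the materials described next.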
By using Semantic Web technologies, the wider project has constructed a data infrastructure that supports the knowledge organization and presentation of the city memory. The knowledge organization method combines the Bibliographic Control and Authority Control of the library along with the technical ontology proposed by the W3C, providing an implementation scheme (Xia et al., 2021). The implementation of the semantic technologies, knowledge organization, integration, and the data stack is comprehensively described in an earlier publication, which is openly accessible, by the team at Shanghai Library. 3 Through describing the relationships between the different types of sources and resources, the ontology creates the necessary connections to achieve their integration across the various institutional fields (Fig. 1). By organizing cultural resources based on their narrative elements, the evolutionary history of Wukang Road can be reconstructed with a more complete and clear story line. In addition, it also engages citizens by having them upload photos and personal accounts of their own individual memories and experiences of the road; this creates, or rather restores, a rich picture of diverse voices from the community and challenges the established historiography. Building on the previous scholarship on Wukang Road, this project adds an additional dimension and explores how DS as a primary research method is used to reconstruct the cultural memory of Wukang Road. The project borrows the essential concepts in storytelling and narrative research, including storyworld, characters, plot, and narrative structures (Roine, 2019), and uses them for the construction and delivery of our resources related to Wukang Road. These are then displayed through digital techniques, such as the worldbuilding on the user interface, the timeline tool, and the image gallery. In building the project, 'while the author creates the storyworld through the production of signs, it is the reader, spectator, listener, or player who uses [ … ] a finished text to construct a mental image of this world' (Ryan and Thon, 2014, p. 3). The project website is constructed with the goal of evoking users' memory of Wukang Road by organizing the historical elements into a storyworld with persons, events, architecture, and other related objects that witnessed the evolution of the road (Fig. 2). Space and time as dimensions are consciously used to reorganize historical materials and retell the story by digital means (Dunn, 2019; Münster et al., 2019). Through the process of collecting, organizing, storing, linking, and displaying historical and cultural information with the support of digital tools, this project is in essence a process of attaching consciousness and various perspectives to the past. It supports inference from the existing resources to supplement and discover hidden and unlocked knowledge by using the memories of the people connected with it. Knowledge that was always there but that has never before been recognized or included in the story. A case study of the Shanghai memory project Wukang Road, situated in the former French Concession of Shanghai (Fig.
3), is well known as the home of many historic buildings going back to the colonial era, with each one having its own unique cultural and historical story. This road, 1.17 kilometres in length, includes thirty-seven government-protected historical buildings and has witnessed the lives of over 200 celebrities, reflecting the style and features of the old Shanghai (Street Stories-Wukang Road, 2018) (Fig. 4). A centrepiece of the Shanghai Memory project is A Journey from Wukang Road, 4 which is tasked with exploring its historical evolution over more than 100 years. It does this by using the historical resources and collections pertaining to Wukang Road and its related history, held primarily in Shanghai Library, and the memories of people connected with it. At the top level, the Shanghai Memory project brings together many aspects of memory construction as part of a comprehensive programme of cultural heritage management to reshape the history of the city (Xia et al., 2021). The wider project identifies the material culture embedded in heritage objects and, supported with sources, makes 'literature the historical witness for the material cultural heritage objects themselves' (Xia et al., 2021, p. 844). The focus of the formal literary accounts (presumably shaped by the elites), however, is very different from the more personalized experience of the citizens, or in other words, the history of the people. This DS project derives ideas from Bakhtin to build a narrative Chronotope that organizes the dispersed semantic elements and diverse types of materials (old photos, building pictures, audio recordings of Shanghainese, and textual information) that inform the history of Wukang Road and arranges them in the dimensions of time and space to give the users a quick and easy way into the story (Lawson, 2011). In addition, the project website was built to bring together the three dimensions of memory (the contemporized past), culture, and the group (society) to organize and construct resources as proposed in the theory by Assmann and Czaplicka (1995). The buildings themselves are monuments to the formal history as part of the urban cityscape, and the 'road is the smallest unit of urban geography [while] another focus of urban memory is the space-time structure' (Xia et al., 2021, p. 849). Deriving thinking from postcolonial studies around critical 're-reading' and 're-writing' of the colonial past along with the continuing effect of memory (Ashcroft et al., 2002, p. 221), the project recognizes and tells the holistic story of the past. Historical context Wukang Road itself has deep cultural significance within the historical context of Shanghai and particularly concerning the Western colonial powers. It is arguably symbolic as a part of throwing off the dominance of the Europeans. Hence, the voice of the Shanghai people is important, and particularly so for Shanghainese, which was the dominant language in the region before it was replaced by Mandarin as the official language of China in 1949 (Chen and Gussenhoven, 2015). Despite the dominance of Mandarin, the vernacular Shanghainese remains popular among locals as a way of confirming their identity as indigenous people (Shen, 2016). The naming of cities, towns, streets, and urban districts has a strong political impact; renaming them 'have long been key strategies that different political regimes have employed to legitimize spatial assertions of sovereign authority, ideological hegemony, and symbolic power.'
(Rose-Redwood et al., 2017, Abstract). Following the Treaty of Nanking (1842), Shanghai was divided up into the International Settlement, the French Concession, and the Chinese city, 'each operating under its own laws and regulations' (Scheen, 2022, p. 9). This was the first of a series of unequal treaties following wars with Western powers and later Japan, resulting in a significant loss of control over aspects of domestic development. Although still sovereign Chinese territory, under the treaty, the land in Shanghai was rented by the foreign colonialists 'in perpetuity', with many of the legal rights passing to the 'foreign municipal authorities' (Mou, 2012, p. 148). The colonial powers enjoyed extraterritorial privileges within these areas of Shanghai, and each maintained a court to oversee trials of their own nationals (Pratt, 1938). Within the French Concession, Wukang Road (武康路) was unnamed at its construction in 1907 but known locally as Rue de Ferguson, after the funder. The records suggest that it was originally constructed as a housing development by John Calvin Ferguson for staff at what is now Shanghai Jiao Tong University and soon became a fashionable home for the city's growing wealthy population (Qiao, 2015). This was part of the early twentieth-century expansion of what is known as the former French Concession in a mainly Western architectural style (Mou, 2012) (Fig. 5). The road was shortened slightly in 1915 'when the starting point was changed from the junction with Huashan Road to the junction with Huaihai Middle Road in the south', and renamed as Wukang Road in 1943 (Xia et al., 2021, p. 849). The year of the change of name (1943) is significant in the colonial context. In that year, France, along with Great Britain and the USA, relinquished control of all their extraterritorial concessions in China, and the French Concession in Shanghai was signed over by the French Vichy government to the pro-Japanese Wang Jingwei regime (Taylor, 2020; Strauss, 2015; HMSO, 1943). Following this, the Japanese military government and Wang signed an Agreement on the Return of the Concession and the Revocation of Extraterritorial Rights (关于交还租界及撤废治外法权之协定). The renaming of many roads and apartment blocks in the French Concession was performed against this backdrop, with the legitimacy of the Wang Jingwei regime not recognized or supported by many Shanghai citizens, the Kuomintang (KMT, the Chinese Nationalist Party), the Communist Party, or the international community (Taylor, 2020). Nevertheless, the Wang regime took back administrative control of the Concession and changed the names of many roads and buildings, mostly named after foreigners, presumably to get rid of the distinctive colonial characteristics. It may have been hoped that this would demonstrate independence, stimulate a sense of Shanghai identity and integrity, promote anti-colonial sentiment among the citizens, and gain local support for the Wang regime.
Considering the etymology and literary sources, the change of name has considerable significance and hence is given attention here. A literal meaning of Wukang (武康) would be armed (or martial) resistance, and, coming at this particular time, it would be interesting to determine who exactly it was that should be resisted. This name change would have been decided by the collaborationist Wang regime and agreed to by the Japanese and so, arguably, (overtly) resisting the colonials whilst engendering local support by (covertly) resisting the current occupiers. Looking at possible textual sources allows such ambiguity, as the name could be claimed to refer to a 'hilly area' in the countryside of Zhejiang Province with scenery that resembled that of Wukang Road. 5 This literary connection would allow a sufficient degree of uncertainty to allow such a name, as, although Shanghai was officially under Chinese sovereignty, it was effectively under colonial Japanese rule. Further investigation of the correct interpretation is beyond the scope of this article but, nevertheless, it is pleasing to consider armed resistance at this juncture of Shanghai's history and the end of the (formal) French Concession, but sufficiently concealed beneath a reference to the countryside to allow it to be accepted and convenient for it to remain after the conflict. This renaming occurred during the Second Sino-Japanese War, with Shanghai under Japanese control since 1937. Japan's military expansion around Shanghai had escalated in 1941 with attacks on the British-dominated International Settlement coinciding with their attack on Pearl Harbour (which drew the USA into direct conflict with Japan), Thailand, Hong Kong, Malaya, and the Philippines (Paine, 2012). By 1943, Japan had taken Singapore, the foremost British military base in South-East Asia, conquered Burma (Myanmar), and was on the borders of India. The Sino-Japanese war had merged with the global conflict of World War II, with Japanese expansion in East and South-East Asia inflicting a series of military defeats on the Western colonial powers; Australia was also under threat (Paine, 2012). With the Japanese fleet now occupied in the Pacific, reducing the bombing of Shanghai and the surrounding districts from their aircraft carriers, increased Chinese military activity would have had the effect of engaging Japanese troops which could otherwise have been deployed in the other theatres of conflict. It would, then, have been in the interests of the colonial powers to encourage Chinese resistance, in whatever way they could, to put pressure on the occupying Japanese forces. Other evidence for the Western powers encouraging support from the Chinese in the conflict with Japan, and specifically as an ally of the USA, can be seen with the 1943 Repeal of the (USA) Chinese Exclusion Act. The Repeal was 'a decision almost wholly grounded in the exigencies of World War II' and was followed by new legislation allowing limited Chinese emigration under a quota system (DoS, Department of State, Office of the Historian, n.d.). The importance of this repeal was emphasized by President Roosevelt, who regarded 'this legislation as important in the cause of winning the war and of establishing a secure peace' and a move to 'silence the distorted Japanese propaganda', which was attempting to distance the USA and China (Roosevelt, 1943). Increased armed resistance from the Chinese would seem to be helpful to the Allied forces.
Wukang Road was not the only name change at that time and, as above, assigning place (and street) names is a clear political act which to a degree defines ownership. Changing the name of this and many other roads and buildings within the French and other Concessions was arguably an act of re-claiming the districts by assigning Chinese names to the former homes of the colonials. As this was done despite Shanghai being controlled by the Japanese military, it would have made a strong nationalist statement. Facilitating the digital narrative DS, in this project, is used as a tool, first, to organize the historical materials as narrative elements and help to create a storyworld and, second, to elicit and collect more personal accounts from the lives of everyday people, such as their old family photos, personal stories about the road, or even their perspective on the history of the road. At the time of writing, the UGC section of the Journey from Wukang Road online platform, although limited, has supplemented the existing records with additional personal photos and effective comments. The aim is to collect reflections and voices about the colonial past from ordinary people, with the goal of displaying a diverse range of perspectives on the cultural memory, retelling the story, rebuilding a more complete picture, and including the voices that were previously neglected. One guided road tour event was organised by the library (in April 2018) with more than thirty library users attending, and they were encouraged to upload their real-time reflections about the road during the visit. The comments included 'Beautiful architecture design' and 'Midget apartments, many movie stars lived here before!' (First author's translations). These comments, along with the participants' photos, have been uploaded to the UGC interface (Fig. 6), where there is a consent and use notice, and are available online. In addition, the project interface is promoted within the library itself as well as on its online platform as a place for users to upload photos and comments. The project engages local Shanghai people and encourages them through the online interface, the library, and local networks to upload photos and accounts of their memories and experiences of the road. Many more events and activities such as workshops and focus groups are planned to collect local people's personal memories, and their family histories, with more guided tours of the road. This shifts the stories and memories from the private to the public sphere, from 'private forms of communication and translating them into contexts where they can potentially contribute to public culture' (Burgess and Klaebe, 2009, p. 155). Using DS in this way adds additional value to the material for researchers in public history; it opens up new channels for them to collect data about ordinary people's opinions and personal narratives of their local history, and importantly allows 'the recording of oral histories' (Earley-Spadoni, 2017, p. 97). Sharing methodologies with oral history and public history, we capture voices of the common people so that the history and culture of Shanghai is democratized in the modern postcolonial era, through the reorganization and critical re-reading and re-writing of the past. Collecting these vernacular stories fills the gaps over time and gives voice to those usually unheard. This brings together what Dunn (2019, p.
39) describes as the 'clear spectrum between the observation of place, the documentation of place, the transmission of that documentation, and the effect that that transmission has', with regard to the story of Wukang Road (Fig. 7). In addition, it foregrounds the underrepresented art forms housed in the library collections (old photos, audio recordings, maps, books and manuscripts, and images of buildings in different historical periods), the places and people that constitute the history of Wukang Road. By using the memory and voices of the people, it is particularly hoped that this would help efforts to collect and preserve the vernacular Shanghai dialect, Shanghainese, and link in with the wider 'ongoing movement to "save Shanghai dialect" across academia and the general public' (Shen, 2016, p. 714). In addition, this vernacular expression can further enrich but also democratize the wider project and help to challenge the established historiography in the modern postcolonial era. In doing so, it unlocks the diverse possibilities for reconstructing its history and the expression of existing narrative materials to meet the needs of different aims, contexts, and communities. Discussion DH projects often involve research objects from multiple fields and disciplines such as humanities, history, philosophy, art, archaeology, and so on. In the process of sorting, organizing, describing, and processing these multi-sourced and heterogeneous resource objects, it is important to extract, relate, reconstruct, and present the elements of people, places, times, and events. These are also the crucial elements for DS. The process of presenting the results of DH projects is often the process of developing stories through digital means based on the elements and their related relationships. The focus of our DS is to reorganize and present scattered materials with a considerable degree of reliability and efficiency using digital technology to reconstruct history within a certain time period. In the process of this reconstruction, a large amount of historical material is pulled together so that small and missing clues may be found to complete the picture. The application of DS here and in the wider DH context, unlike in other fields, is not to focus on creation or free expression, but primarily on reconstruction. A primary objective of the research described here is to incorporate the voices, including the local vernacular Shanghai dialect (Shanghainese), of the people whose lives have been touched or in some way affected by their experience of the Wukang Road. In our project, '[ … ] digital methods help us to access and share marginalized or silenced voices and to incorporate them into our work in ways not possible in print or the space of an exhibition gallery' (Brennan, 2019). The official records held in the archives and libraries have undoubtedly been mediated and represent the official record, regardless of the sources, with all the inherent and inescapable biases. In Shanghai, these have been particularly influenced by the long-term colonial occupation and the shorter but to an extent more divisive ravages of the Japanese occupation. The prevailing historical record has been unavoidably shaped by these events and factors external to Shanghai and China. As Jennifer Guilliano argues, 'the embrace of capitalism, and the consequences of colonialism have long affected and been central in the discipline of history' (Guilliano, 2022, p.
5). The voices of the ordinary people of Shanghai, and particularly in the traditional vernacular dialect, can help to redress these biases and the historical record. This is an important additional dimension to the wider Shanghai Memory project. This fills the gaps in the historical and cultural record so that we can 'ensure that the stories and voices which have been underrepresented in both print and digital knowledge production [ … ] can be heard' (Risam, 2018, p. 129). The first phase of the overall project, which is still ongoing, is mainly to reconstruct the historical materials about the Wukang Road and retell the story of people, places, buildings, and events that happened there from the perspective of the records held in the library. By linking different entities about the road, this next phase of the project that includes the DS aspect aims to present a more complete picture (as far as we are able) and give the audience a clearer storyline of the evolution of Wukang Road. In terms of the limitations, there was only one guided road tour before the pandemic when regulations prevented more and also prohibited the planned workshops, interviews, and focus groups. Thus, so far, the data collected from the users are limited but with a planned expansion of activities once the current restrictions allow. In addition, it has so far been problematic to locate people previously connected with the road to encourage them to upload their own or family memories and records about the road; an advertising campaign has been started to use the library membership resources and mailing lists to find more of them. Nevertheless, despite the limited number of photos and comments that were collected, looking at the feedback, it seems that ordinary people (both local citizens and tourists) wish to learn more about the stories and history of the road during the colonial era. Although they may not have experienced those times themselves, they are willing to share their opinions about the cultural memory and their shared heritage that is to be found in the buildings, roads, and districts of the city. Although a story may have a beginning, a middle, and an end (as indicated by Aristotle, Poetics 1450b-1451a), progressing in a linear fashion, 'non-linearity has been common to narrative discourse from the earliest recorded instances of story-telling' (Abbott, 2020, p.
33). Ancient epic poems such as the Iliad and the Odyssey, themselves part of an oral (storytelling) tradition, start in the middle of things in an episodic approach rather than a linear one that takes the listener from the beginning to the end. The Iliad begins in the final year of the ten-year conflict outside the walls of Troy and would have been delivered in episodic chunks by the rhapsode rather than sequentially. Indeed, the Homeric characters themselves employ narrative storytelling as inset or meta-narratives within the plot itself. For example, in Iliad Book 9:524-99, the story of Meleager is told by Phoenix, accompanied by Odysseus and Ajax, in an embassy from Agamemnon to persuade Achilles to return to the fight as the Trojans were gaining control (Burgess, 2017). Similarly, the other Homeric epic has the blind bard Demodocus singing the story about the fate of Troy and reducing Odysseus to tears when he recalls the painful memories and the tragedy of his long journey home (Odyssey Book 8:62-67). Storytelling predates the written word and, just like the examples above, can be persuasive and/or provoke strong emotions. Memory does not work in a linear way, and what is remembered and the way in which it is recalled is dependent on the individual and their experiences; hence, people remember things in different ways and recall what is important to them. This is true for groups as well as individuals, where their memories are shaped by events in their collective lives (Halbwachs, 1980). Conclusions This study contributes to our understanding of the history of Shanghai within a postcolonial setting as well as how we in the DH might effectively develop a methodology for using DS to supplement and even redress the (often unconscious but sometimes conscious) biases inherent in formal historiography. We can develop the tools to add the human voice, turning memory into narrative, to form the missing parts of the history and help us to incorporate and 'share marginalized or silenced voices' (Brennan, 2019, p. 2). The sources at our disposal need to be combined to achieve balance, recognizing and acknowledging the biases within our records that have affected the selection process, along with their ideological and other consequences, in order to rectify the historical record (Guilliano, 2022). Capturing the voices of the people, collecting, and sharing them with digital tools give us the opportunity to take a step towards this. In addition, as well as filling in some of the gaps in the story of Shanghai, DS gives us the opportunity to understand the place of the individual within the wider history. The Shanghai Memory platform provides a conduit through which these stories can pass and be pulled together. This newest initiative to incorporate DS into Shanghai Memory is the latest phase to further democratize the practice and represent the unrepresented by encouraging, presenting, creating, and sharing stories in relation to the past, present, and even the future of Shanghai city. This extension to an already established DH project adds significant value to the reconstruction of cultural memory and acts as a model for other memory projects in East Asia and beyond.
Like all recently conducted qualitative research, this project has been severely impacted by the current pandemic. Because of the Shanghai pandemic control regulations, it has been impossible to facilitate the planned workshops and focus groups, which has forced us to significantly alter our approach and move everything online. Nevertheless, it has allowed us to appreciate the many affordances of DH and our practice, putting an emphasis on digitally mediated communication and online platforms to enable data collection and upload. This phase is now operational, and both the library and the online platform are used to draw attention to it and encourage the upload of content. Further work is planned to build on this research project in the form of a longitudinal study to look at changes over the different historic periods in Shanghai, principally from its opening-up as a Treaty Port, following the Opium Wars, to the Japanese invasion and the Long Civil War, particularly from the perspective of the many road name changes not considered here. In addition, more attention will be given to applying oral history techniques to collect voices, stories, and life experiences connected to the history of the city and Wukang Road. It will examine the relationship between personally constructed stories (personal memory) and stories generated by official records (cultural memory). Moreover, this project will continue the exploration of how DS can help to develop a methodology for telling historical stories; how it can help us to understand how the former French Concession, which is now often officially referred to as the 'contemporary French Concession' (Chen et al., 2021, p. 35), has become what it is today, actively promoted as an attraction for foreigners and domestic tourists alike; and how this is reconciled with the colonial past and how that impacts on the wider understanding of Shanghai, its identity, and its people. Figure 1. The ontology model used for the A Journey from Wukang Road project. Figure 7. The online interface, with an audio introduction in Shanghainese as well as Mandarin. Source: http://wkl.library.sh.cn/.
9,867.2
2023-06-26T00:00:00.000
[ "History", "Computer Science" ]
Polymerase II–Associated Factor 1 Complex-Regulated FLOWERING LOCUS C-Clade Genes Repress Flowering in Response to Chilling RNA polymerase II–associated factor 1 complex (PAF1C) regulates the transition from the vegetative to the reproductive phase primarily by modulating the expression of FLOWERING LOCUS C (FLC) and FLOWERING LOCUS M [FLM, also known as MADS AFFECTING FLOWERING1 (MAF1)] at standard growth temperatures. However, the role of PAF1C in the regulation of flowering time at chilling temperatures (i.e., cold temperatures that are above freezing) and whether PAF1C affects other FLC-clade genes (MAF2–MAF5) remains unknown. Here, we showed that Arabidopsis thaliana mutants of any of the six known genes that encode components of PAF1C [CELL DIVISION CYCLE73/PLANT HOMOLOGOUS TO PARAFIBROMIN, VERNALIZATION INDEPENDENCE2 (VIP2)/EARLY FLOWERING7 (ELF7), VIP3, VIP4, VIP5, and VIP6/ELF8] showed temperature-insensitive early flowering across a broad temperature range (10°C–27°C). Flowering of PAF1C-deficient mutants at 10°C was even earlier than that in flc, flm, and flc flm mutants, suggesting that PAF1C regulates additional factors. Indeed, RNA sequencing (RNA-Seq) of PAF1C-deficient mutants revealed downregulation of MAF2–MAF5 in addition to FLC and FLM at both 10 and 23°C. Consistent with the reduced expression of FLC and the FLC-clade members FLM/MAF1 and MAF2–MAF5, chromatin immunoprecipitation (ChIP)-quantitative PCR assays showed reduced levels of the permissive epigenetic modification H3K4me3/H3K36me3 and increased levels of the repressive modification H3K27me3 at their chromatin. Knocking down MAF2–MAF5 using artificial microRNAs (amiRNAs) in the flc flm background (35S::amiR-MAF2–5 flc flm) resulted in significantly earlier flowering than flc flm mutants and even earlier than short vegetative phase (svp) mutants at 10°C. Wild-type seedlings showed higher accumulation of FLC and FLC-clade gene transcripts at 10°C compared to 23°C. Our yeast two-hybrid assays and in vivo co-immunoprecipitation (Co-IP) analyses revealed that MAF2–MAF5 directly interact with the prominent floral repressor SVP. Late flowering caused by SVP overexpression was almost completely suppressed by the elf7 and vip4 mutations, suggesting that SVP-mediated floral repression required a functional PAF1C. Taken together, our results showed that PAF1C regulates the transcription of FLC and FLC-clade genes to modulate temperature-responsive flowering at a broad range of temperatures and that the interaction between SVP and these FLC-clade proteins is important for floral repression. INTRODUCTION Plant survival and fitness depends on timely seed production through precise control of flowering time. Flowering time is modulated by a number of endogenous and environmental cues, such as daylength, age, and prolonged exposure to cold and ambient temperatures (Amasino, 2010;Srikanth and Schmid, 2011). To successfully survive a range of varying environmental conditions, plants have evolved a complex regulatory network that integrates these cues to control flowering time (Srikanth and Schmid, 2011). In Arabidopsis (Arabidopsis thaliana), nearly 400 flowering genes are known to regulate flowering time in genetically distinct pathways, e.g., the photoperiod, ambient temperature, aging, vernalization, hormonal, and sugar pathways (Bernier and Périlleux, 2005;Bouché et al., 2016). These pathways modulate flowering in response to different endogenous and environmental signals to optimize reproductive success. 
In addition to FLC, the Arabidopsis genome contains five more members of the FLC clade, namely MADS AFFECTING FLOWERING1 [MAF1, also known as FLOWERING LOCUS M (FLM)] and MAF2-MAF5 (Ratcliffe et al., 2001, 2003; Scortecci et al., 2001). The MAF2-MAF5 genes occur in a tandem repeat within a 22-kb region. The role of FLM as a floral repressor has been well studied (Scortecci et al., 2003; Balasubramanian et al., 2006), and its loss of function causes temperature-insensitive flowering (Lee et al., 2013). Like FLM, MAF2 acts as a floral repressor, and the loss of MAF2 function results in strong acceleration of flowering upon vernalization, whereas plants overexpressing MAF2 flowered significantly later than wild-type plants (Ratcliffe et al., 2003). Similarly, MAF3 also represses flowering by directly binding to the promoter sequences of FT and SOC1 and repressing their transcription. Interestingly, the effect of the loss of MAF3 function was more evident at lower temperatures than at normal growth temperatures (Gu et al., 2013). Overexpression of MAF3 produced a stronger floral delay in the Landsberg erecta (Ler) accession compared to Columbia (Col-0; Ratcliffe et al., 2003). T-DNA mutants of MAF4 exhibited accelerated flowering (Gu et al., 2009), and MAF4 overexpression in Ler resulted in a strong delay of flowering (Scortecci et al., 2003). Unlike MAF1-MAF4, the floral repressive effect of MAF5 is not strong, and its overexpression only delayed flowering under non-inductive short-day conditions (Kim and Sung, 2010). These FLC-clade transcription factors physically interact with each other, and some of them interact with SHORT VEGETATIVE PHASE (SVP) to make repressor complexes for efficient floral repression (Gu et al., 2013). Genetic studies revealed that lesions in these PAF1C components resulted in nearly identical early flowering phenotypes under standard growth conditions (He et al., 2004; Oh et al., 2004). PAF1C is required for the enrichment of active epigenetic marks, primarily H3K4me3, at the chromatin of FLC and its homologs to maintain their expression; PAF1C deficiency results in reduced expression of these floral repressors, which eventually accelerates flowering (Zhang et al., 2003; He et al., 2004; Yu and Michaels, 2010). For instance, loss of function of ELF7 and VIP6 results in the reduced expression of FLC, FLM/MAF1, and MAF2 (He et al., 2004). Mutations in VIP3 strongly reduced FLC expression and strongly accelerated flowering. In particular, vip3 mutants flowered significantly earlier than flc mutants, suggesting that additional genes are involved in the early flowering of vip3 mutants (Zhang et al., 2003). Loss of function of VIP4 and VIP5 also resulted in strong flowering acceleration that was comparable to that seen in vip3 single mutants. However, after vernalization, the H3K27 methyltransferase complex Polycomb repressive complex 2 (PRC2) is involved in silencing FLC expression (and probably MAF expression) by depositing repressive H3K27me3 marks on the FLC chromatin (De Lucia et al., 2008). Although the regulation of FLC (and some MAF genes) is well studied under standard growth temperature conditions, whether PAF1C-mediated regulation involves the entire FLC clade and the functional importance of this clade in regulating flowering at chilling temperatures remain unclear. Here, we showed that PAF1C epigenetically regulates the entire set of FLC-clade genes.
PAF1C-defective mutants showed ambient temperature-insensitive early flowering due to the downregulation of FLC-clade genes. The epigenetic status of the chromatin of FLC and FLC-clade genes was altered in PAF1C-defective mutants. Expression of FLC and FLC-clade genes was upregulated in response to low temperature in wild-type plants, and these genes play an important role in floral repression at chilling temperatures. Furthermore, MAF2-MAF5 interact with the floral repressor SVP, and this interaction contributes to floral repression at chilling temperatures. RNA Sequencing RNA sequencing (RNA-Seq) was performed using 8-day-old seedlings grown at 23°C and 23-day-old seedlings grown at 10°C under standard LD conditions in two biological replicates for each sample. About 60-80 seedlings were harvested at Zeitgeber Time 16 (ZT16) and pooled for RNA extraction using Invitrogen's Plant RNA Purification Reagent. For RNA sequencing, library preparation was performed with an Illumina TruSeq Stranded Total RNA Sample Prep kit (Illumina), according to the manufacturer's protocols, and paired-end reads were produced using an Illumina HiSeq2000 sequencer. The raw RNA-seq data of PAF1C-deficient mutants generated in this study were deposited at the Gene Expression Omnibus (GEO) NCBI database and are available under the accession number GSE171778. Transcriptome data for sdg8 mutants (GEO accession number GSE8528) were previously published (Pajoro et al., 2017). RNA-Seq Data Analyses The raw sequence reads were processed by adapter trimming, followed by quality assessment of the raw reads using FastQC (http://www.bioinformatics.babraham.ac.uk/projects/fastqc). The resulting good-quality reads were aligned to the TAIR10 reference genome using CLC Genomics Workbench v.11. Differentially expressed genes (DEGs) were defined as genes with at least a 1.5-fold change, unless mentioned otherwise. Heatmaps were generated using the built-in function of CLC Genomics Workbench. Gene Ontology (GO) analysis was performed with DAVID (Ashburner et al., 2000), and GO enrichment data plotting was performed using the R package ggplot2 (Wickham, 2011). For the identification of common targets, the intersection of the gene lists was identified using the R package UpSetR (Lex et al., 2014) and the Java-based program VennDis (Ignatchenko et al., 2015). Reverse Transcription Quantitative PCR Analyses Reverse transcription quantitative PCR (RT-qPCR) was used to validate the RNA-seq data obtained from the PAF1C-deficient mutants. Total RNA was extracted from seedlings at the identical developmental stage at ZT16 at different temperatures. Plant RNA purification reagent (Invitrogen) was used for RNA extraction. The DNase I-treated RNA (~2 μg) was reverse transcribed into cDNA using MMLV enzyme (ELPIS Biotech). qPCR analyses of cDNA or immunoprecipitated DNA (see below) were performed using 2× A-Star Real-Time PCR Master Mix (BioFACT) in a Thermo Fisher QuantStudio 5 real-time PCR machine. All qPCR experiments were conducted in three biological replicates, each with three technical replicates. For RT-qPCR, data analyses were performed according to the previously published ΔΔCT method (Livak and Schmittgen, 2001), with the modification of using two reference genes, PP2AA3 (AT1G13320) and a SAND family gene (AT2G28390), to normalize the data (Hong et al., 2010). Data normalization was performed using the geometric mean of the two reference genes. Sequences of primers used in RT-qPCR analyses are shown in Supplementary Table 1.
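As a point of reference, a minimal sketch of the 2^-ΔΔCT calculation with two reference genes is given below. The Ct values are invented for illustration, and averaging the two reference Ct values is used as the arithmetic equivalent of normalizing to the geometric mean of their expression levels; the study's actual spreadsheets and primer efficiencies are not reproduced here.

```python
# Sketch of the 2^-ddCt method (Livak and Schmittgen, 2001) with two reference
# genes. All Ct values below are made up; real data would come from the qPCR runs.
from statistics import mean

def fold_change(ct_target, ct_refs, ct_target_cal, ct_refs_cal):
    # Averaging reference Ct values corresponds to normalizing against the
    # geometric mean of reference-gene expression, since expression ~ 2**(-Ct).
    d_ct_sample = ct_target - mean(ct_refs)              # delta-Ct, sample of interest
    d_ct_calibrator = ct_target_cal - mean(ct_refs_cal)  # delta-Ct, calibrator sample
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical example: a target gene in a mutant sample relative to a wild-type
# calibrator, normalized to PP2AA3 and the SAND family gene.
print(fold_change(ct_target=24.1, ct_refs=[21.0, 22.4],
                  ct_target_cal=26.0, ct_refs_cal=[21.2, 22.3]))
```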
The statistical significance of differences in gene expression levels among samples was assessed using one-way ANOVA with a 0.05 level of significance (95% CI). Chromatin Immunoprecipitation Assays Chromatin immunoprecipitation (ChIP) assays were performed using wild-type and vip3 mutant seedlings, as described previously. Briefly, seedlings of each genotype were harvested at the 1.02 stage (Boyes et al., 2001) and crosslinked using fixation buffer (1% formaldehyde). Immunoprecipitation was performed with polyclonal anti-H3K4me3 (Millipore, 04-745) or anti-H3K27me3 (Millipore, 07-449) antibodies bound to Dynabeads Protein A (Thermo Scientific). The ChIPed DNA was extracted using the ChIP DNA Clean & Concentrator kit (Zymo Research). Relative enrichment of histone modifications was analyzed using qPCR as described earlier. The primers used in ChIP-qPCR are shown in Supplementary Table 1. All ChIP experiments were performed with three biological replicates and three technical replicates for each genotype. Designing and Cloning amiRNAs That Target MAF Genes To posttranscriptionally knock down the MAF2-MAF5 genes, artificial microRNAs (amiRNAs) were designed using the WMD3 webtool (Schwab et al., 2006). Predominantly expressed splice variants of MAF2-MAF5 (MAF2.3, MAF3.1, MAF4.3, and MAF5.2) were selected from the Araport11 cDNA collection and used as target genes for subsequent studies. Two independent amiRNAs were selected and amplified using pRS300 as a template with four amiRNA-specific primers (Supplementary Table 1), as previously described (Schwab et al., 2006). The amplified amiRNAs were cloned into the pENTR2B vector and subsequently into the pEG100 vector containing the 35S promoter. Protein-Protein Interaction Analyses Using Deep Learning Algorithms To test whether MAF2-MAF5 interact with SVP, we used two recently developed artificial intelligence (AI)-based deep learning programs, D-SCRIPT (Sledzieski et al., 2021) and PPI-Detect (Romero-Molina et al., 2019). Protein sequences were provided in FASTA format as an input for both programs. Protein phosphatase 2A A3 (PP2AA3) was used as a negative interactor control. Co-immunoprecipitation Assays Co-immunoprecipitation (Co-IP) experiments were performed in Arabidopsis mesophyll protoplasts as described earlier (Wu et al., 2009). The coding sequences of MAF2-MAF5 were fused with GFP (35S::MAF-GFP) in the 326-GFP vector (Lee et al., 2001) and co-transfected with 35S::SVP-2HA in the protoplasts isolated from wild-type plants. The transfected protoplasts were incubated at 23°C for 3 h to allow production of these proteins in sufficient quantities before overnight incubation at 10°C. After incubation, the protoplasts were pelleted and lysed with lysis buffer (10 mM Tris-HCl, pH 7.5, with 0.1% Triton X-100 and 1× Roche Protease Inhibitor Cocktail). The lysate was then incubated overnight with GFP-Trap magnetic beads (Chromotek). Anti-GFP monoclonal antibodies (Roche) and anti-HA high-affinity monoclonal antibody clone 3F10 (Sigma-Aldrich) were used as primary antibodies for the western blots. PAF1C-Deficient Mutants Flower Early at a Broad Range of Temperatures To test the effect of a lesion in PAF1C on flowering time, we measured flowering time of PAF1C-deficient mutants at different temperatures ranging from chilling (i.e., cold but not freezing, 10°C) to high temperature (27°C).
To show that any observed flowering time change was not specific to a single allele, we used two independent homozygous mutant lines for each PAF1C gene (CDC73, ELF7, VIP3, VIP4, VIP5, and VIP6; Figures 1A,B). For the previously uncharacterized T-DNA lines elf7-4 (SALK_070632), vip5-3 (SALK_055889), and vip6-5 (SALK_119910), we performed conventional RT-PCR to examine their transcript levels; indeed, all three lines were found to be RNA-null alleles (Supplementary Figure 1). Flowering time measurement showed that wild-type plants flowered with a mean total leaf number (TLN) of 36.3 ± 1.8, 32.0 ± 2.3, 15.6 ± 0.9, and 12.5 ± 0.5 at 10, 16, 23, and 27°C, respectively (Figures 1A,B; Supplementary Table 2). At all tested temperatures, the PAF1C-deficient mutants flowered significantly earlier than the wild-type plants, indicating that a lesion in PAF1C caused early flowering at a broad range of temperatures. In particular, PAF1C-deficient mutants showed very early flowering, compared with the wild type, as temperature decreased. We selected an allele that showed a strong early flowering phenotype from each gene (cdc73-2, elf7-2, vip3-2, vip4-1, vip5-2, and vip6-2) and used these mutants for further analyses. To assess the temperature sensitivity of PAF1C-deficient mutants, we calculated the leaf number ratio (LNR) of cdc73-2, elf7-2, vip3-2, vip4-1, vip5-2, and vip6-2 mutants, using the TLN values at different temperatures. An LNR close to 1 indicates that temperature has little effect on flowering. All PAF1C-deficient mutants showed significantly lower LNRs compared with wild-type plants (Figure 1C; Supplementary Table 3). These results indicated that a lesion in PAF1C caused ambient temperature-insensitive flowering, especially at lower temperatures. We then compared flowering time of PAF1C mutants with that of flc, flm, and flc flm mutants at 10°C. The flc, flm, and flc flm mutants flowered with 27.0 ± 1.3, 24.8 ± 1.1, and 20.2 ± 1.6 leaves, respectively, at 10°C (Figure 1D). Interestingly, their flowering times (measured as TLN) were later than those of elf7, vip3, vip4, vip5, and vip6 single mutants. This indicated that a lesion in both FLC and FLM was insufficient to phenocopy the early flowering time seen in PAF1C-deficient mutants. Therefore, considering that PAF1C regulates FLC and FLM transcription (Zhang and Van Nocker, 2002; Zhang et al., 2003; Oh et al., 2004), these results suggested the possibility that PAF1C regulates other factors, in addition to FLC and FLM, in modulating flowering time at 10°C. Transcriptome Analyses of PAF1C-Deficient Mutants Mutants in all PAF1C components (except CDC73) flowered significantly earlier than flc flm double mutants at 10°C (Figure 1D), suggesting that additional factors are involved in this early flowering. To identify these factors, we performed RNA-seq using PAF1C-deficient mutants grown at 10 and 23°C. Euclidean distance clustering with complete linkage classified the transcriptome profiles into two major clades (Figure 2A): wild-type (Col-0) and cdc73 samples clustered together, whereas elf7, vip3, vip4, vip5, and vip6 mutants grouped together. This expression profile-based classification was consistent with the flowering time changes of PAF1C-deficient mutants at low temperature. As both cdc73 mutants flowered later than other PAF1C-deficient mutants (Figure 1) and cdc73 mutants were grouped in the same clade with Col-0 plants based on RNA-seq data (Figure 2A), we excluded CDC73 from further analyses. We selected DEGs that showed increased or decreased transcript levels (>1.5-fold).
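As a rough illustration of the 1.5-fold DEG selection and of the "common DEG" intersections reported below, the sketch assumes per-gene expression values have already been normalized; the gene identifiers, expression values, and pseudocount are hypothetical and do not reproduce the study's actual pipeline.

```python
# Sketch of fold-change DEG calling and of intersecting per-mutant DEG sets.
# Expression dictionaries map gene IDs to normalized values (e.g., TPM);
# everything here is illustrative toy data.
def call_degs(expr_mut, expr_wt, fold=1.5, pseudo=0.01):
    up, down = set(), set()
    for gene, wt in expr_wt.items():
        ratio = (expr_mut.get(gene, 0.0) + pseudo) / (wt + pseudo)
        if ratio >= fold:
            up.add(gene)
        elif ratio <= 1.0 / fold:
            down.add(gene)
    return up, down

wt = {"FLC": 10.0, "MAF3": 6.0, "FT": 1.0}
mutants = {
    "elf7": {"FLC": 2.0, "MAF3": 1.5, "FT": 3.5},
    "vip3": {"FLC": 1.8, "MAF3": 2.0, "FT": 3.0},
}

# Genes downregulated in every mutant = intersection of the per-mutant "down" sets
common_down = set.intersection(*(call_degs(m, wt)[1] for m in mutants.values()))
print(common_down)  # {'FLC', 'MAF3'} with these toy numbers
```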
Transcriptome analyses revealed large numbers of DEGs in PAF1C-deficient mutants at both 10 and 23°C (Figure 2B). We then analyzed DEGs that were commonly upregulated and downregulated in different mutants (Figures 2C,D). At 10°C, 1,519 genes were commonly downregulated in elf7, vip3, vip4, vip5, and vip6 mutants, whereas 1,229 genes were commonly upregulated in these mutants (Figure 2C). At 23°C, 2,021 genes were commonly downregulated in elf7, vip3, vip4, vip5, and vip6 mutants, and 1,352 genes were upregulated in these mutants (Figure 2D). These analyses indicated that the largest number of intersecting DEGs was in the set containing the elf7, vip3, vip4, vip5, and vip6 mutants, suggesting that a large number of genes were commonly altered in these mutants at both temperatures. This observation was also consistent with the similar early flowering phenotypes of elf7, vip3, vip4, vip5, and vip6 mutants at 10 and 23°C. To understand the biological significance of the common DEGs in PAF1C-deficient mutants, we performed Gene Ontology (GO) analyses using the webtool DAVID (Ashburner et al., 2000). At 10°C, the downregulated genes showed significant enrichment for GO terms related to transcription, microtubule-based movement, different metabolic processes, response to jasmonic acid, and stomatal complex development. The upregulated genes were enriched in GO terms related to different metabolic processes and response to different stimuli (Supplementary Figure 2A). At 23°C, the downregulated genes were enriched in GO terms related to response to karrikin, different metabolic processes, MAPK cascade, and photosynthesis, whereas the upregulated genes were enriched with GO terms related to different metabolic processes, response to oxidative stress, and cell wall organization (Supplementary Figure 2B). PAF1C Regulates the Expression of FLC and the Other FLC-Clade Genes The Flowering Interactive Database (FLOR-ID; Bouché et al., 2016) contains known genes involved in regulating flowering time. To check whether PAF1C deficiency affects the expression of known flowering time genes, we analyzed which genes in FLOR-ID were included among the common DEGs in PAF1C-deficient mutants at 10 and 23°C (Figures 2C,D). As PAF1C is involved in maintaining the active transcription of its target genes (He et al., 2004; Oh et al., 2004; Yu and Michaels, 2010), we expected that the direct targets of PAF1C would be downregulated in the PAF1C-deficient plants. Interestingly, our analyses showed that 26 and 27 flowering time genes were downregulated at 10 and 23°C, respectively, and both sets included FLC and MAF1, 2, 3, 4, and 5 (Figures 3A,B). We then measured the mRNA levels of FT, TSF, and SOC1 by RT-qPCR and found that they were significantly higher in the PAF1C-deficient mutants at 10 and 23°C compared with the wild type. FT mRNA levels were increased by 3.1- to 3.8-fold in the PAF1C-deficient mutants at 10°C (Figure 3D). At 23°C, FT was upregulated by 1.8- to 2.3-fold. TSF was also significantly upregulated in the PAF1C-deficient mutants at both temperatures. SOC1 transcript levels showed a similar pattern, with a fold increase of 3.2-4.0 and 3.1-4.1 in PAF1C-deficient mutants at 10 and 23°C, respectively, consistent with the downregulation of FLC and FLC-clade genes in PAF1C-deficient mutants (Figure 3C).
These results suggested that functional PAF1C is required for the expression of FLC and FLC-clade genes and that a lesion in one of its components results in the downregulation of FLC and FLC-clade genes, which leads to the derepression of FT, TSF, and SOC1, and early flowering. Furthermore, considering that the expression of all MAF genes was affected in PAF1C-deficient mutants and PAF1C-deficient mutants flowered significantly earlier than flc flm double mutants (Figure 1D), it is likely that the MAF2-MAF5 genes play a significant role in floral repression at chilling temperatures. Although FLC and FLC-clade genes are known to undergo alternative splicing and their splice variants might have differential effects on flowering time (Caicedo et al., 2004; Lee et al., 2013; Pose et al., 2013; Rosloski et al., 2013), our RNA-seq data showed no significant difference in the levels of the splice variants of FLC and FLC-clade genes (Supplementary Figure 3), except for their overall downregulation, suggesting that the flowering time change seen in PAF1C-deficient mutants was not associated with the differential splicing patterns of FLC and FLC-clade genes. PAF1C Deficiency Alters the Epigenetic Status of the FLC-Clade Genes Our expression analyses showed that in addition to FLC and FLM, MAF2-MAF5 were downregulated in PAF1C-deficient mutants (Figure 3), suggesting that this downregulation may be due to the altered epigenetic status at these loci in PAF1C-deficient mutants. To test this possibility, we analyzed the levels of the repressive H3K27me3 marks and permissive H3K4me3/H3K36me3 marks of FLC and the FLC-clade genes in vip3-2 mutants (as a representative PAF1C-deficient mutant line) and wild-type plants grown at 10°C. Four different qPCR primer sets (P1-P4), spanning the entire gene bodies of the target genes, were used to assess the enrichment of the repressive/permissive marks (Figure 4A). Consistent with their downregulation, the FLC and FLC-clade genes showed significantly increased enrichment of the repressive H3K27me3 marks throughout their gene bodies in vip3 mutants (Figure 4B). In FLC, the highest enrichment (5.8-fold higher enrichment compared to wild-type plants) of repressive H3K27me3 marks was observed in the P2 region, which contains the transcription start site (Figure 4B), consistent with a previous finding (He et al., 2004). Significantly higher enrichment was also observed in the P1 (2.9-fold) and P3 (2.8-fold) regions of FLC. In FLM chromatin, the H3K27me3 enrichment was highest in the P1 region (5.8-fold), followed by the P2 (3.1-fold) and P3 (2.0-fold) regions. MAF2 and MAF3 showed similar H3K27me3 patterns, with the highest enrichment in the P1 region (3.7- and 5.3-fold, respectively), followed by the P2 (3.6- and 4.4-fold, respectively) and P3 regions (2.4- and 2.7-fold, respectively). The enrichment of H3K27me3 in the MAF5 chromatin was highest in the P3 and P2 regions (3.5- and 3.4-fold, respectively), followed by P1 with 2.4-fold higher enrichment of H3K27me3 in vip3 mutants compared to wild-type plants at 10°C. H3K27me3 enrichment in the P4 regions of FLC, FLM, MAF2, and MAF3 of vip3 mutants was comparable with wild-type samples, whereas MAF4 and MAF5 showed slightly higher enrichment in the P4 regions (1.3- and 2.5-fold, respectively) in vip3 mutants (Figure 4B). By contrast, enrichment of the permissive H3K4me3 marks was significantly reduced in the chromatin of FLC and FLC-clade genes in vip3 mutants, primarily in the P1 and P2 regions (Figure 4C).
Enrichment of H3K4me3 in the FLC chromatin in vip3 mutants was reduced 3.3-fold in the P2 region, 2.4-fold in P1, and 2.1-fold in P3 compared to wild-type plants. FLM also showed reduced enrichment of H3K4me3 marks in the P2 (4.1-fold) and P1 (2.9-fold) regions. In the MAF2 chromatin, reduced H3K4me3 enrichment was seen in the P1 (1.4-fold) and P2 (1.3-fold) regions. In addition, in the MAF3-5 chromatin, H3K4me3 enrichment was significantly lower in the P2 region (2.8-, 3.0-, and 2.7-fold, respectively) and the P1 region (2.6-, 2.5-, and 3.9-fold, respectively) in vip3 mutants (Figure 4C). Similar reduction patterns were found for the H3K36me3 mark at the gene bodies of these genes, with significantly reduced enrichment at the P2 and P3 regions (Figure 4D). (Figure 4, panels B-D, shows the fold enrichment of the repressive H3K27me3 marks (B), permissive H3K4me3 marks (C), and permissive H3K36me3 marks (D) at the genomic regions of FLC and FLC-clade genes in wild-type (Col-0) and PAF1C-deficient vip3 mutant seedlings; for each primer pair, the enrichment was normalized to the wild-type control (Col-0), and one-way ANOVA followed by Dunnett's multiple comparison tests was used to test statistical significance: *p < 0.05; **p < 0.01; ***p ≤ 0.001; ns, not significant. Note that our P2 region in FLC partially overlaps with the region that was used in a previous study (He et al., 2004), and in the case of FLM, our P2 region is included in a region that was previously shown to be affected (He et al., 2004).) Taken together, these results suggest that PAF1C is required to maintain permissive epigenetic marks and prevent deposition of repressive marks at the FLC and FLC-clade genes, thereby maintaining their active transcription. FLC-Clade Genes Are Upregulated in Wild-Type Plants at 10°C The early flowering of PAF1C-deficient mutants at 10°C is likely due to the combinatorial effect of FLC, FLM, and the other MAFs, suggesting the functional importance of MAF2-MAF5 at low temperature (10°C). To test whether expression of these genes is upregulated in wild-type plants at 10°C, we compared the transcript levels of these genes in wild-type plants at 10 and 23°C using our RNA-seq data. This analysis revealed upregulation of FLC, FLM, and MAF2-MAF5 in wild-type plants at 10°C compared to 23°C by at least 1.5-fold (Figure 5A). RT-qPCR analyses also showed statistically significant induction of FLC, FLM, and MAF2-MAF5 in wild-type plants at 10°C compared to 23°C (Figure 5B). Consistent with the upregulation of FLC and FLC-clade genes, transcript levels of FT and TSF, their downstream targets, were significantly reduced (>3-fold) in wild-type plants at 10°C in comparison to 23°C (Figure 5C). This suggests that these MAFs might play important roles in modulating flowering time at chilling temperatures by regulating FT and TSF. FLC-Clade Genes Are Important for Floral Repression at Chilling Temperatures Polymerase II-associated factor 1 complex modulates the expression of downstream genes by recruiting a number of histone modifiers, including SET DOMAIN GROUP8 (SDG8; Wood et al., 2003; Ng et al., 2003a; Kim et al., 2005). SDG8 recruited by PAF1C then regulates the expression of FLC and FLM; therefore, a mutation in SDG8 causes early flowering at normal temperatures (Kim et al., 2005). Because the flowering response of sdg8 mutants under chilling-stress temperatures is not known, we analyzed the flowering time of sdg8 mutants at 10 and 23°C.
We found that the sdg8 mutants flowered significantly earlier than flc, flm, and flc flm mutants at 10°C (Figures 6A,B). The sdg8 mutants flowered with 18.1 ± 1.0 leaves, which was significantly earlier than flc mutants (32.6 ± 1.6 leaves), flm mutants (26.5 ± 1.7 leaves), and flc flm double mutants (23.0 ± 1.1 leaves) at 10°C. However, at 23°C, the flowering time of sdg8 mutants (8.5 ± 0.6 leaves) was only slightly earlier than that of flc and flm mutants (10.0 ± 0.7 and 9.7 ± 0.7 leaves, respectively) and was comparable to flc flm double mutants (8.4 ± 0.5 leaves; Figure 6B). LNR analyses revealed significantly decreased LNR values of sdg8 mutants at low temperature (Figure 6C), indicating that the temperature responsiveness of sdg8 mutants was reduced. To determine whether MAF genes play a role in the regulation of flowering time in sdg8 mutants, we first analyzed publicly available RNA-seq data for sdg8 mutants grown at 16°C with or without shifting to 25°C (GSE85282; Pajoro et al., 2017) and then found the intersection of (1) the set of DEGs in sdg8 mutants, (2) the genes that were commonly downregulated in PAF1C-deficient mutants at 10°C (Figure 2C) and 23°C (Figure 2D), and (3) the list of flowering time genes from FLOR-ID (Bouché et al., 2016). From this comparison, we found that FLC, FLM, and MAF3-MAF5 were commonly downregulated in sdg8 mutants (Figure 6D; Supplementary Figure 4), as in PAF1C-deficient mutants (Figure 3). MAF2, which showed 1.8-fold downregulation in sdg8 mutants, was not identified here because of the criteria used for selecting DEGs (2-fold change). We then performed RT-qPCR to confirm the downregulation of these genes at 10°C. The RT-qPCR results showed statistically significant downregulation of FLC, FLM-β, and MAF2-MAF5 mRNA levels in sdg8 mutants at 10°C (Figure 6E). FLC mRNA levels were decreased by 13.2-fold, whereas FLM-β showed 15.9-fold downregulation in sdg8 mutants at 10°C. In addition, expression of MAF2-MAF5 was downregulated by 2.0-, 1.7-, 7.3-, and 1.9-fold, respectively, in sdg8 mutants at 10°C (Figure 6E). Taken together, these results suggested that the early flowering phenotype of sdg8 mutants at chilling temperature is mediated by the downregulation of FLC and MAF genes. However, it should be noted that sdg8 mutants flowered slightly later than PAF1C-deficient mutants at 10°C (Supplementary Figure 5), suggesting the possibility that PAF1C recruits an additional histone modifier(s), besides SDG8, to regulate the expression of the FLC-clade genes. To confirm that the early flowering of 35S::amiR-MAF2-5 flc flm #1 (1-1 and 13-3) and #2 (1-4 and 2-2) plants was indeed due to the downregulation of MAF2-MAF5, we performed RT-qPCR analyses. These analyses confirmed that the transgenic seedlings showed significantly lower MAF mRNA levels compared with the wild-type plants and flc flm double mutants (Figure 7D). Taken together, these data suggest that ablation of the function of FLC and all FLC-clade members resulted in earlier flowering than in flc flm mutants at chilling temperatures (Figure 7C); therefore, MAF2-MAF5 also play a role in repressing flowering at chilling temperatures. MAFs Physically Interact With SVP to Form Repressor Complexes Loss of SVP function results in early flowering across a broad range of temperatures (10°C-27°C; Lee et al., 2013).
SVP interacts with FLC (Fujiwara et al., 2008;Li et al., 2008) and with FLM (Lee et al., 2013;Pose et al., 2013). In vivo and yeast two-hybrid (Y2H) analyses showed that SVP interacts with MAF2 and MAF4 (Gu et al., 2013). Since the 35S::amiR-MAF2-5 flc flm (#1 and #2) plants showed significantly earlier flowering at 10°C compared with svp mutants (Figure 7C), one possible scenario is that SVP alone is insufficient to repress flowering at 10°C and may require FLC-clade proteins to repress flowering. To test whether FLC-clade proteins physically interact with SVP, we first used two recently developed artificial intelligence (AI)-based deep learning programs that were designed to predict protein-protein interactions: D-SCRIPT (Sledzieski et al., 2021) and PPI-Detect (Romero-Molina et al., 2019). These programs produce an interaction score between 0 (no interaction predicted) and 1 (interaction strongly predicted). In these analyses, FLC was used as a known interacting partner of SVP (Li et al., 2008), and PP2AA3 was used a negative control. The FLC-SVP interaction scores were 0.977 (D-SCRIPT) and 0.705 (PPI-Detect), whereas the PP2AA3-SVP interaction scores were 0.004 (D-SCRIPT) and 0.278 (PPI-Detect; Figure 8A). From D-SCRIPT analyses, the MAF2-SVP, MAF3-SVP, MAF4-SVP, and MAF5-SVP interaction scores were 0.789, 0.740, 0.586, and 0.779, respectively. From PPI-Detect analyses, the MAF2-SVP, MAF3-SVP, MAF4-SVP, and MAF5-SVP interaction scores were 0.584, 0.512, 0.915, and 0.822, respectively. All of these interaction scores were above the cut-off value of 0.5, suggesting that SVP interacts with MAF2-MAF5 in vivo. We then performed Y2H analyses to experimentally validate the predicted interactions. Indeed, Y2H analyses showed that SVP interacts with MAF2-MAF5 in yeast cells ( Figure 8B). To further test these interactions in vivo, we performed Co-IP experiments using Arabidopsis mesophyll protoplasts. To this end, 35S::2 × HA:SVP and 35S::GFP:MAF vectors were transiently co-expressed in protoplasts to produce HA-tagged SVP and GFP-tagged MAF2-MAF5 proteins and then the transfected protoplasts were shifted to 10°C to test the protein-protein interaction at 10°C. We precipitated protein extracts using GFP-Trap and probed the resulting precipitates with anti-HA antibodies. SVP-2 × HA successfully co-immunoprecipitated with each MAF transcription factor (Figure 8C; asterisk), confirming the interactions between SVP and MAFs. Taken together, these results suggest that MAF2-MAF5, like FLC and FLM, physically interact with SVP, further implying that SVP forms a repressor complex including FLC and MAFs. It is therefore likely that the formation of the complex leads to efficient floral repression, thus allowing the plant to acclimate to chilling temperatures. SVP-Mediated Floral Repression Likely Requires FLC and FLC-Clade Genes Since the PAF1C-deficient mutants showed strong early flowering at 10°C (Figure 1), we tested whether SVP transcript levels were affected in PAF1C-deficient mutants, as svp mutants flower early across a range of temperatures (10°C-27°C; Lee et al., 2013). RT-qPCR analyses showed that SVP mRNA levels in PAF1C-deficient mutants were similar to those of Col-0 plants at both 10 and 23°C (Figure 9A; He et al., 2004). This suggested that the flowering time change seen in PAF1C-deficient mutants at both 10 and 23°C was independent of SVP transcript levels. 
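As a small illustration of how the interaction-score screening described above can be tabulated, the snippet below applies the 0.5 cutoff to the reported D-SCRIPT and PPI-Detect scores. The score values are copied from the text, but the rule that both predictors must exceed the cutoff is our own simplification for illustration, not part of the published analysis.

```python
# Scores taken from the text (D-SCRIPT, PPI-Detect); the >0.5 call and the
# "both predictors must agree" rule are simplifying assumptions.
scores = {
    "FLC-SVP":    (0.977, 0.705),   # known positive control
    "PP2AA3-SVP": (0.004, 0.278),   # negative control
    "MAF2-SVP":   (0.789, 0.584),
    "MAF3-SVP":   (0.740, 0.512),
    "MAF4-SVP":   (0.586, 0.915),
    "MAF5-SVP":   (0.779, 0.822),
}

CUTOFF = 0.5
for pair, (dscript, ppi_detect) in scores.items():
    predicted = dscript > CUTOFF and ppi_detect > CUTOFF
    print(f"{pair}: D-SCRIPT={dscript:.3f}, PPI-Detect={ppi_detect:.3f}, "
          f"predicted interaction: {predicted}")
```

Under this rule all four MAF-SVP pairs (and the FLC positive control) are called as putative interactors while the PP2AA3 negative control is not, consistent with the Y2H and Co-IP validation described above.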
Considering that elf7-2 mutants flowered with 9.6 ± 0.9 leaves at 23°C, this genetic interaction study showed that the late flowering caused by SVP overexpression was almost completely suppressed by elf7-2 mutation. Similarly, the late flowering of 35S::SVP:HA plants was strongly suppressed by vip4-1 mutation (Supplementary Figure 6). These results suggested that SVP overexpression was unable to delay flowering in the absence of a functional PAF1C. Short vegetative phase was unable to repress flowering in elf7-2 mutants, which have dramatically decreased mRNA levels of FLC and FLC-clade genes (Figure 3B), suggesting the possibility that SVP binding to its targets requires functional FLC and FLC-clade transcription factors. To test this hypothesis, we took advantage of a publicly available ChIP-seq dataset (GSE54881) for SVP-GFP in the presence/absence of FLC (FRI FLC and FRI flc;Mateos et al., 2015). Consistent with a previous study (Mateos et al., 2015), the number of targets bound by SVP-GFP was substantially reduced in plants without functional FLC (FRI flc), compared to the plants with functional FLC (FRI FLC; Supplementary Figure 7). In terms of the number of bound targets, SVP-GFP was only able to bind to 39.2% of its target genes in the absence of functional FLC (the number of targets bound by SVP-GFP in FRI FLC was set to 100%; Supplementary Figure 7A). Furthermore, SVP-GFP was able to bind to 553 additional target genes in the presence of FLC (Supplementary Figure 7B; Mateos et al., 2015), indicating the importance of FLC for SVP binding ability. Since other FLC-clade transcription factors also interact with SVP (Figure 8), these results suggest that the FLC-clade transcription factors play a similar role, especially at low temperatures. DISCUSSION In Arabidopsis, PAF1C regulates flowering primarily through epigenetic modulation of FLC and FLM expression under standard growth conditions (Kim et al., 2005;Yu and Michaels, 2010). However, the role of PAF1C in regulating flowering time at chilling temperatures remains unknown. In this study, we show that PAF1C not only regulates FLC and FLM, but also regulates the entire FLC clade of genes (FLM/MAF1 and MAF2-MAF5), which play important roles in repressing flowering at low temperatures. Several environmental factors, including temperature, affect flowering. At lower temperatures, Arabidopsis plants flower late compared to plants at elevated temperatures (Lee et al., 2013). Several MADS-box transcription factors, including FLC, FLM, and SVP, play an important role in delaying flowering (Lee et al., 2007(Lee et al., , 2013. FLC and FLM are well-known to be epigenetically regulated by a number of histone modifiers, including SET domain-containing histone methyltransferases (He et al., 2003;Zhang et al., 2003;Oh et al., 2004;Nasim et al., 2021). PAF1C recruits these histone modifiers to modulate expression of its target genes, including FLC, FLM, and MAF2 (Zhang and Van Nocker, 2002;He et al., 2003;Zhang et al., 2003;Oh et al., 2004). Polymerase II-associated factor 1 complex function is critical for proper plant growth and development, as lesions in PAF1C components result in strong defects in the vegetative and reproductive stages, such as severely stunted growth, stem cell maintenance defects, floral structure defects, and male sterility (Zhang et al., 2003;He et al., 2004;Oh et al., 2004;Fal et al., 2019). 
Flowering time analyses across a range of ambient temperatures showed that all PAF1C mutants flowered early in a temperature-independent manner with reduced expression of FLC and FLC-clade genes (Figures 1, 3). This observation validates the previous findings that mutations in VIP3, VIP5, and VIP6 result in reduced FLC and FLM expression and, hence, accelerated flowering (Zhang et al., 2003;Oh et al., 2004). However, flowering of flc, flm, and flc flm mutants was delayed at a chilling temperature of 10°C, indicating that these mutants showed temperature-sensitive flowering at 10°C. This further indicated that FLC and FLM function primarily from 16 to 27°C (Lee et al., 2013), suggesting that other genes may play important roles at lower temperatures. In this study, we observed that PAF1C-deficient mutants had lower transcript levels of FLC and the other FLC-clade genes at 10°C (Figure 3), indicating that FLC-clade genes are involved in repressing flowering at 10°C. Our ChIP-qPCR assays showed that downregulation of FLC and FLC-clade genes in PAF1C-deficient mutants is associated with higher enrichment of the repressive mark H3K27me3 and reduced levels of the permissive marks H3K4me3 and H3K36me3 in the chromatin of these genes (Figure 4). This is consistent with previous findings that PAF1C-deficient mutants had reduced FLC and FLM expression due to the reduced H3K4me3 levels at these loci (He et al., 2004;Oh et al., 2004;Xu et al., 2008). Moreover, we showed that PAF1C-mediated epigenetic regulation is not limited to FLC and FLM, but also affects the entire FLC clade. A previous study showed that MAF3 function is more important at lower temperatures than higher temperatures, as the flc flm maf3 triple mutants flowered earlier than flc flm double mutants at 16°C, compared to 23°C (Gu et al., 2013). This supports our hypothesis that FLC-clade genes are important to repress flowering at low temperatures, as the 35S::amiR-MAF2-5 flc flm plants flowered significantly earlier than the flc flm double mutants. Furthermore, we observed induction of FLC and FLC-clade genes in wild-type plants at 10°C (Figure 5), supporting previous findings that the mRNA levels of FLC and FLM-ß increased at low temperatures (16°C compared to 23°C; Lee et al., 2007;Pose et al., 2013). This might mean that plants increase transcription of these FLC family genes in response to low temperatures to ensure efficient floral repression, and this increase in transcription likely requires PAF1C. Since PAF1C components and functions are conserved from unicellular yeast to complex eukaryotic organisms (Tomson and Arndt, 2013), this regulatory mechanism might be conserved and have important functions in other plant species. One important question raised by our observations is how these MAFs play such a critical role in floral repression at 10°C. One possible answer is their effect on SVP function. MADS-box transcription factors physically interact to form larger complexes that synergistically enhance their abilities to regulate transcription. Consistent with this, the MADS-box transcription factors FLC (Fujiwara et al., 2008;Li et al., 2008) and FLM (Lee et al., 2013) form floral repressor complexes with SVP and enhance their repression of flowering. Our data revealed that SVP interacts with all five MAF transcription factors in vitro and in vivo (Figure 8). 
This finding is consistent with our observation that in plants that lack (or have downregulated expression of) FLC and FLC-clade transcription factor genes, such as the PAF1C-deficient mutants and 35S::amiR-MAF2-5 flc flm plants, SVP alone is not sufficient to repress flowering, especially at lower temperatures. Consistent with this notion, SVP overexpression was unable to delay flowering in PAF1C-deficient elf7 and vip4 mutants (Figure 9; Supplementary Figure 6), which have significantly reduced levels of FLC and FLC-clade transcripts, suggesting that SVP function depends on FLC and FLC-clade transcription factors and thus providing novel insight into SVP protein function at low temperatures. However, it should be noted that further genetic interaction analyses, such as analyses of plants overexpressing SVP in the 35S::amiR-MAF2-5 flc flm background, will provide more direct genetic evidence. Furthermore, our analysis of a previously published ChIP-seq dataset suggested that the presence/absence of FLC influences SVP binding to its targets, as the number of SVP-bound targets nearly doubled in the presence of functional FLC, implying that SVP function depends on FLC, as previously reported (Mateos et al., 2015). It is likely that FLC-clade transcription factors play similar roles, enhancing SVP binding to its targets and/or enhancing its ability to repress transcription; however, further experiments are required to confirm this hypothesis. It would be interesting to perform a genome-wide analysis of whether SVP can bind and repress its target genes in plants with reduced expression of FLC and FLC-clade genes, such as PAF1C-deficient mutants or the 35S::amiR-MAF2-5 flc flm plants. In conclusion, our findings showed that PAF1C epigenetically regulates all the FLC-clade genes and that these genes play an important role in repressing flowering at chilling temperatures by forming floral repressor complexes with SVP. Wild-type plants accumulate higher levels of FLC-clade transcripts in response to chilling temperature to prevent precocious flowering. Our work uncovers the functional importance of MAF transcription factors in repressing flowering at chilling temperatures and increases the current understanding of how flowering is regulated in response to temperature. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: https://www.ncbi.nlm.nih.gov/geo/, under the accession number GSE171778. AUTHOR CONTRIBUTIONS ZN and JA designed the research. ZN performed the bioinformatic analyses and conducted experimental work. HS helped with ChIP-qPCR experiments. SJ and GY provided technical assistance. JA supervised the study. All authors contributed to the article and approved the submitted version. FUNDING This work was supported by a National Research Foundation (NRF) of Korea grant funded by the Korean government (NRF-2017R1A2B3009624 to JA) and Samsung Science and Technology Foundation under Project Number SSTF-BA1602-12.
9,329.6
2022-02-09T00:00:00.000
[ "Biology", "Environmental Science" ]
First-principles calculations of structural, electronic, magnetic and elastic properties of Mo2FeB2 under high pressure The structural, electronic, magnetic and elastic properties of Mo2FeB2 under high pressure have been investigated with first-principles calculations. Furthermore, the thermal dynamic properties of Mo2FeB2 were also studied with the quasi-harmonic Debye model. The volume of Mo2FeB2 decreases with the increase in pressure. Using the analysis of the density of the states, atom population and Mulliken overlap population, it is observed that as the pressure increases, the B–B bonds are strengthened and the B–Mo covalency decreases. Moreover, for all pressures, Mo2FeB2 is detected in the anti-ferromagnetic phase and the magnetic moments decrease with the increase in pressure. The calculated bulk modulus, shear modulus, Young's modulus, Poisson's ratio and universal anisotropy index all increase with the increase in pressure. From thermal expansion coefficient analysis, it is found that Mo2FeB2 shows good volume invariance under high pressure and temperature. The examination of the dependence of heat capacity on the temperature and pressure shows that heat capacity is more sensitive to temperature than to pressure. Introduction conductivity [1]. Current studies to improve the mechanical properties of Mo 2 FeB 2 mainly introduce Mn, Nb, V, Cr, Ni and C [2][3][4][5][6][7]. Mn addition can improve the wettability of the Fe binder phase on the Mo 2 FeB 2 hard phase. This enhancement is observed because Mn can refine the grains, decrease the porosity and increase the phase uniformity of Mo 2 FeB 2 [2,6]. Addition of V and Nb can also refine the grains [3,7]. Furthermore, with the increase in the Nb/V content, the hardness and transverse rupture strength are both first enhanced and then decreased [7]. Additions of Cr and Ni enhance the hardness and transverse rupture strength [4]. Addition of carbon can improve the hardness but it decreases the transverse rupture and fracture toughness [5]. Theoretical studies of Mo 2 FeB 2 are rare. Using empirical electron theory of solids and molecules, Pang et al. predicted that the brittleness of Mo 2 FeB 2 arises from weak bonds [8]. He et al. found that Mo 2 FeB 2 exhibits the largest shear and Young's moduli (E) due to its strong chemical bonding among the Mo 2 XB 2 and MoX 2 B 4 (X = Fe, Co, Ni) ternary borides with first-principles methods [9]. By the firstprinciples method, it is also found that addition of Cr can improve the volume deformation resistance of Mo 2 FeB 2 . Addition of Mn can improve the shear deformation resistance of Mo 2 FeB 2 [10]. It is worth pointing out that the structure, electronic density of states (DOS), and magnetic and elastic properties of Mo 2 FeB 2 under normal pressure have been studied by us before [11]. It was found that magnetism has a great impact on the crystal structure and mechanical properties. The anti-ferromagnetic (AF) case is the ground state. The B-B and B-Mo bonds play an important role in the shear modulus. The Fe atom contributes the most to the magnetism. To date, there have been no reports on Mo 2 FeB 2 behaviour under high pressure. On the one hand, high temperature and high pressure can help increase the density of the hard phase [12,13]. On the other hand, Mo 2 FeB 2 -based cermets are typically used in extreme conditions (high pressure and high temperature). The variation of the magnetic properties and structure of Mo 2 FeB 2 under high pressure is still unknown. 
Magnetism will affect the accuracy of the calculation of the crystal structure. Thus, it is necessary to study the electronic structure, elastic properties, magnetic properties and thermodynamic properties of Mo 2 FeB 2 under high pressure. Calculation method and crystal structure The calculation method in this paper was similar to that of previous work [11]. The work was conducted based on density functional theory [14,15] with the calculations performed using the Cambridge Serial Total Energy Package (CASTEP) plane wave code [16]. The interaction of the ionic core and valence electrons was modelled with ultrasoft pseudopotentials. The valence states considered here correspond to B 2s 2 2p 1 , Fe 3d 6 4s 2 and Mo 4d 5 5s 1 . The generalized gradient approximation in the Perdew-Burke-Ernzerhof form is used to describe the exchange and correlation terms [17,18]. The integration over the Brillouin zone was performed with the Monkhorst and Pack k-point mesh integrations [19]. The cutoff energy was set to 330 eV, and the 5 × 5 × 8 k-point grid was used. The convergence conditions were set as the maximum force on the atom below 0.01 eVÅ −1 , the maximum stress below 0.02 GPa and the maximum displacement between the cycles below 0.0005 Å. Hydrostatic pressure was applied in the x, y and z directions simultaneously with an increase of 10 GPa each time. Furthermore, the AF ground state was first set before the geometric optimization. The setting method can be found in [11]. Then, the lattice optimization with spin polarization was performed. In this work, the unit cell that contains 4 Mo atoms, 2 Fe atoms and 4 B atoms with periodic boundary conditions was used. Mo 2 FeB 2 with tetragonal symmetry belongs to the P4/mbm space group. Furthermore, the ground state of Mo 2 FeB 2 is AF [11]. The calculated data listed in table 1 agree well with the previous theoretical and experimental results [11,20,21]. The largest error is less than 1.6% between the volume V 0 obtained by our calculations and the experimental data of Gladyshevskii [20]. The electronic population can also be used to analyse the electronic structure and the covalent or ionic nature of a bond [22]. A high value of the bond population indicates the strong covalency of a bond. Otherwise, the bond is ionic. The results are listed in table 2, which can be separated into three categories of chemical interactions, namely, B-B, B-Mo(Fe) and Mo(Fe)-Mo(Fe) bonds. All bond lengths decrease as the pressure increases, which is due to the shrinking of the volume. Magnetic properties As the magnetic properties have significant impact on the crystal structures [11], the magnetic properties of the ground state should be decided. The calculated magnetic properties of Mo 2 FeB 2 under different pressures are listed in table 3. In all cases, Mo 2 FeB 2 shows AF behaviour. From the data in table 3, it can be found that the magnetic moments decrease with increasing pressure. The strong intra-band exchange interactions of the Fe d orbitals play a critical part in the magnetic moment of Mo 2 FeB 2 [11]. There is no magnetic moment for Mo. Figure 3 illustrates the calculated Fe 3d DOS of Mo 2 FeB 2 under the pressures of 0, 50 and 100 GPa. The examination of figure 3 also shows that the majority of spin channels are analogous to each other. Two obvious main peaks are present at 0 and −2.5 eV. 
With the pressure increasing, the heights of the peaks are reduced and the peaks move to lower energy, which is in good agreement with the decrease of the magnetic moment. Elastic properties Elastic constants relate the stress and the strain tensors according to Hooke's law. The elastic constants C ijkl can be written as follows [23][24][25]:
$$C_{ijkl} = \left(\frac{\partial \sigma_{ij}(x)}{\partial e_{kl}}\right)_{X},$$
where e kl, σ ij, X and x are, respectively, the Eulerian strain tensor, the applied stress tensor and the coordinates before and after the deformation. For the tetragonal crystal studied here, six independent elastic constants, C 11, C 33, C 44, C 66, C 12 and C 13, can be obtained. The bulk modulus (B) and the shear modulus (G) are deduced from the elastic constants. Based on the Voigt and Reuss method [26], for tetragonal crystals, the bulk modulus and the shear modulus are defined as
$$B_V = \frac{1}{9}\left[2(C_{11}+C_{12}) + C_{33} + 4C_{13}\right], \qquad B_R = \frac{C^2}{M}$$
and
$$G_V = \frac{1}{30}\left(M + 3C_{11} - 3C_{12} + 12C_{44} + 6C_{66}\right), \qquad G_R = 15\left[\frac{18B_V}{C^2} + \frac{6}{C_{11}-C_{12}} + \frac{6}{C_{44}} + \frac{3}{C_{66}}\right]^{-1},$$
with $M = C_{11} + C_{12} + 2C_{33} - 4C_{13}$ and $C^2 = (C_{11}+C_{12})C_{33} - 2C_{13}^2$. The arithmetic average of the Voigt and the Reuss bounds, which is the Voigt-Reuss-Hill (VRH) average, is considered to provide the best estimation of the isotropic elastic moduli [27]. Using the VRH average, the bulk and the shear modulus can be written as B = (B V + B R )/2 and G = (G V + G R )/2, respectively. The average Young's modulus E and Poisson's ratio (ν) can be expressed with B and G as follows [28]:
$$E = \frac{9BG}{3B+G}, \qquad \nu = \frac{3B-2G}{2(3B+G)}.$$
If the elastic constants satisfy the Born stability criterion, the crystal structure is usually considered to be mechanically stable [29,30]. A positive determinant of the crystal's symmetric elastic matrix is required by the criterion for a stable crystal. For tetragonal crystals under pressure P, the mechanical stability restrictions can be described as (C 11 − P) > 0, (C 33 − P) > 0, (C 44 − P) > 0, (C 66 − P) > 0, (C 11 − C 12 − 2P) > 0, (C 11 + C 33 − 2C 13 − 4P) > 0, (2C 11 + 2C 12 + C 33 + 4C 13 + 3P) > 0. (3.8) The elastic constants of Mo 2 FeB 2 under different pressures are shown in figure 4. It was found that the elastic constants increase almost linearly with increasing pressure up to 100 GPa. This is caused by the enhancement of the covalent bonds (B-B, B-Fe) mentioned above. All elastic constants meet the Born stability criterion, indicating that Mo 2 FeB 2 is mechanically stable from 0 to 100 GPa. Figure 5 shows a monotonic increase of B with the pressure. This means that the material's resistance to uniform compression increases. B can also reflect the average atomic bond strength. Hence, the atomic bond strength of Mo 2 FeB 2 increases with the pressure. Furthermore, the values of G and C 44 of Mo 2 FeB 2 also increase monotonically with the increase in pressure, thus indicating that it becomes harder to achieve a shear deformation with increasing pressure. A higher shear modulus implies more pronounced directional interatomic bonding [31]. Thus, the bonding behaviour of Mo 2 FeB 2 becomes more directional with the increase in pressure. Moreover, E also increases monotonically with the pressure, which means that it becomes harder to stretch the material uniformly with increasing pressure. As B, G and E increase monotonically with the pressure, the hardness is expected to follow a similar trend. Poisson's ratio, B/G and the universal anisotropy index (A U ) are also calculated here, as shown in figure 6. Poisson's ratio is inversely related to the volume change during uniaxial deformation; that is, the lower the ν value, the larger the volume change. The values of ν increase with the increase in pressure, thus indicating that there is a smaller volume change during uniaxial deformation.
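As a worked illustration of the Voigt-Reuss-Hill averaging described above, the short script below evaluates B, G, E and ν from a set of tetragonal elastic constants. It is only a sketch: the numerical C ij values are placeholders rather than the calculated constants of Mo 2 FeB 2, and the expressions simply restate the standard tetragonal Voigt-Reuss formulas rather than reproducing the authors' own implementation.

```python
def vrh_tetragonal(c11, c12, c13, c33, c44, c66):
    """Voigt-Reuss-Hill bulk/shear moduli (GPa) for a tetragonal crystal."""
    m = c11 + c12 + 2.0 * c33 - 4.0 * c13
    c2 = (c11 + c12) * c33 - 2.0 * c13 ** 2
    b_v = (2.0 * (c11 + c12) + c33 + 4.0 * c13) / 9.0
    b_r = c2 / m
    g_v = (m + 3.0 * c11 - 3.0 * c12 + 12.0 * c44 + 6.0 * c66) / 30.0
    g_r = 15.0 / (18.0 * b_v / c2 + 6.0 / (c11 - c12) + 6.0 / c44 + 3.0 / c66)
    b = 0.5 * (b_v + b_r)                              # VRH bulk modulus
    g = 0.5 * (g_v + g_r)                              # VRH shear modulus
    e = 9.0 * b * g / (3.0 * b + g)                    # Young's modulus
    nu = (3.0 * b - 2.0 * g) / (2.0 * (3.0 * b + g))   # Poisson's ratio
    return b, g, e, nu

# Placeholder elastic constants (GPa), purely for illustration.
b, g, e, nu = vrh_tetragonal(c11=450, c12=180, c13=170, c33=430, c44=160, c66=190)
print(f"B = {b:.1f} GPa, G = {g:.1f} GPa, E = {e:.1f} GPa, "
      f"nu = {nu:.3f}, B/G = {b / g:.2f}")
```

Printing B/G alongside the moduli also makes the Pugh criterion discussed next straightforward to check.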
To analyse the ductile (brittle) behaviour of materials, a simple relationship has been proposed by Pugh: a high value of B/G corresponds to malleability, while a low value corresponds to brittleness [32]. It was suggested that 1.75 is the critical value that separates ductile and brittle materials; that is, if B/G > 1.75, the material behaves in a ductile manner. As shown in figure 6, Mo 2 FeB 2 becomes more ductile as the pressure increases. When the pressure increases to 20 GPa, Mo 2 FeB 2 changes from brittle to ductile. The universal anisotropy index A U can be defined as
$$A^{U} = 5\frac{G_V}{G_R} + \frac{B_V}{B_R} - 6. \qquad (3.9)$$
A U indicates the degree of anisotropy of a crystal; a value of zero means the crystal is isotropic. It is noted that A U increases with the pressure, which means that Mo 2 FeB 2 becomes increasingly anisotropic under pressure. In addition, the direction-dependent Young's modulus can also be used to assess the elastic anisotropy of a crystal. For a tetragonal crystal it is expressed as follows [33]:
$$\frac{1}{E} = S_{11}\left(l_1^4 + l_2^4\right) + \left(2S_{13} + S_{44}\right)\left(l_1^2 l_3^2 + l_2^2 l_3^2\right) + S_{33}\, l_3^4 + \left(2S_{12} + S_{66}\right) l_1^2 l_2^2, \qquad (3.10)$$
where l 1, l 2 and l 3 represent the direction cosines with respect to the x-, y- and z-axes, respectively. Using the compliance constants S ij, the directional Young's moduli for Mo 2 FeB 2 were obtained, as shown in figure 7. For direct comparison, the direction-dependent Young's moduli are plotted in figure 7 for the pressures of 0, 50 and 100 GPa. A spherical curved surface represents an isotropic system, while deviation from the spherical shape indicates the extent of elastic anisotropy. For Mo 2 FeB 2 at 0 GPa, the curved surface deviates slightly from the spherical shape, which means that there is a slight elastic anisotropy for Mo 2 FeB 2, in agreement with the discussion above. When the pressure increased to 50 GPa, the curved surface became oval, indicating that the elastic anisotropy increased. Furthermore, when the pressure reaches 100 GPa, the curved surface has a larger distortion, thus indicating larger anisotropy. Thermal properties To study the thermodynamic properties of Mo 2 FeB 2 under high pressures, a quasi-harmonic Debye model [34] was used, in which the non-equilibrium Gibbs function G*(V; p, T) can be expressed as follows [35]:
$$G^{*}(V; p, T) = E(V) + pV + A_{\mathrm{vib}}\!\left(\Theta(V); T\right),$$
where E(V) is the total energy per unit cell, pV is the constant hydrostatic pressure condition, Θ(V) is the Debye temperature and A vib is the vibrational term. A vib can be expressed with the Debye model of the phonon DOS as follows [36]:
$$A_{\mathrm{vib}}(\Theta; T) = nkT\left[\frac{9\Theta}{8T} + 3\ln\!\left(1 - \mathrm{e}^{-\Theta/T}\right) - D(\Theta/T)\right].$$
Here, n is the number of atoms per formula unit, and D(Θ/T) represents the Debye integral. For an isotropic solid, Θ is defined as follows [34]:
$$\Theta = \frac{\hbar}{k}\left[6\pi^{2} V^{1/2} n\right]^{1/3} f(\sigma)\sqrt{\frac{B_s}{M}},$$
where M is the molecular mass per unit cell and B s is the adiabatic bulk modulus, which is approximately given by the static compressibility [34]
$$B_s \cong B(V) = V\,\frac{\mathrm{d}^{2}E(V)}{\mathrm{d}V^{2}}. \qquad (3.14)$$
f(σ) is written as [37]
$$f(\sigma) = \left\{3\left[2\left(\frac{2}{3}\,\frac{1+\sigma}{1-2\sigma}\right)^{3/2} + \left(\frac{1}{3}\,\frac{1+\sigma}{1-\sigma}\right)^{3/2}\right]^{-1}\right\}^{1/3},$$
where σ is the Poisson's ratio. Thus, the thermal equation of state (EOS) V(p, T) can be obtained by minimizing the non-equilibrium Gibbs function with respect to the volume V:
$$\left(\frac{\partial G^{*}(V; p, T)}{\partial V}\right)_{p,T} = 0.$$
The heat capacity C V and the thermal expansion coefficient α are defined as
$$C_V = 3nk\left[4D(\Theta/T) - \frac{3\Theta/T}{\mathrm{e}^{\Theta/T}-1}\right], \qquad \alpha = \frac{\gamma C_V}{B_T V},$$
where B T is the isothermal bulk modulus and γ is the Grüneisen parameter, which is expressed as
$$\gamma = -\frac{\mathrm{d}\ln\Theta(V)}{\mathrm{d}\ln V}.$$
The pressure dependence of the thermodynamic properties was calculated in the 0-100 GPa pressure range. First, a series of lattice constants were selected. Then, the corresponding unit cell volumes and total energies were calculated, and the third-order Birch-Murnaghan equation of state was used for curve fitting to obtain the E-V curve (figure 8).
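A minimal sketch of such an E-V fit is given below, assuming scipy is available; the energy-volume pairs are made up for illustration, so the fitted V0, B0 and B0' do not correspond to the values obtained for Mo 2 FeB 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan_energy(v, e0, v0, b0, b0_prime):
    """Third-order Birch-Murnaghan E(V); b0 carries the units of e0/v0."""
    eta = (v0 / v) ** (2.0 / 3.0)
    return e0 + 9.0 * v0 * b0 / 16.0 * (
        (eta - 1.0) ** 3 * b0_prime + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

# Made-up E-V points (eV, Angstrom^3) standing in for the calculated data.
volumes = np.array([88.0, 92.0, 96.0, 100.0, 104.0, 108.0, 112.0])
energies = np.array([-99.10, -99.48, -99.70, -99.78, -99.74, -99.60, -99.38])

popt, _ = curve_fit(birch_murnaghan_energy, volumes, energies,
                    p0=(energies.min(), volumes[np.argmin(energies)], 1.5, 4.0))
e0, v0, b0, b0_prime = popt
# 1 eV/Angstrom^3 = 160.2177 GPa
print(f"V0 = {v0:.2f} A^3, B0 = {b0 * 160.2177:.1f} GPa, B0' = {b0_prime:.2f}")
```

A fit of this form is what underlies the E-V curve shown in figure 8.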
As seen in the figure, the calculated values agreed well with the fitted values. The dependence of the calculated normalized volume V/V 0 on pressure P and temperature T is illustrated in figure 9, where V 0 is the zero-pressure equilibrium volume. It is found that V/V 0 decreases due to the increase in pressure and the slope of the curves also decreases, thus indicating that Mo 2 FeB 2 is increasingly difficult to compress as the pressure increases. It is also found that the curves changed little with the increase in pressure, which means that Mo 2 FeB 2 is stable under different temperatures. The thermal expansion coefficient (α) can intuitively reflect a material's structural stability. Figure 10 shows the dependence of α on pressure and temperature. For a given pressure (figure 10a), α increases rapidly especially at zero pressure below a temperature of 400 K, and it increases slowly at higher temperatures. This is an expression of the excellent volume invariance under high temperature. However, α decreases strongly below 40 GPa with pressure at a constant temperature (figure 10b). Moreover, it decreases slowly above 40 GPa with the increase in pressure. This indicates that Mo 2 FeB 2 possesses good volume invariance under high pressure. Figure 11 shows the Debye temperature (Θ) as a function of the temperature and pressure. At room temperature (T = 300 K), Θ of 1000.19 K is obtained. Unfortunately, we did not find the corresponding experimental data. Under the application of pressure, Θ decreases very slowly with increase in temperature ( figure 11a). Furthermore, at a given temperature (figure 11b), Θ tends to increase linearly with the increase in pressure. This indicated that the influence of the pressure on Θ is strong and that Θ is less affected by the temperature. The Debye temperature can also reflect the bonding between atoms. Thus, with the increase in pressure, the strength of atoms' bonds increases, which is consistent with the above analysis. Heat capacity C V is one of the most important parameters in thermodynamics. Figure 12 shows the relationship between the heat capacity and temperature under different pressures. For the same pressure, C V increases with the temperature. For the same temperature, C V decreases with the increase in pressure, thus implying that increasing the pressure is equivalent to reducing the temperature. The relationships of C V with the temperature and pressure show that C V is more sensitive to temperature than to the pressure. Owing to the anharmonic effect, when T < 500 K, the variation of C V with the changes in the temperature and pressure is more obvious. Under high temperature and high pressure, the heat capacity approaches the Dulong-Petit limit. Conclusion First-principles calculations were performed to investigate the structure, electronic DOS, and magnetic and elastic properties of Mo 2 FeB 2 under high pressure. The volume of Mo 2 FeB 2 decreases almost linearly with the increase in pressure. Examination of the DOS showed that with the pressure increasing, the B-B bonds were strengthened and the B-Mo covalency decreased. The atom population analysis and Mulliken overlap population analysis also found these results. With the pressure increasing, the sp hybridization of the B atoms increases, resulting in the increase of strong covalent bonding between the B atoms (forming a B-B bond). Moreover, the B-B and B-Fe bond populations increase with the increase in pressure, thus implying that the covalence of the B-B and B-Fe bonds increases. 
The analysis of the magnetic properties shows that, for all pressures, Mo 2 FeB 2 shows AF behaviour and the magnetic moments decrease with the increase in pressure. The calculated B, G, E, B/G, v and A U all increase with the increase in pressure, which means that the hardness and ductility of Mo 2 FeB 2 increase with the increase in pressure. Furthermore, from the directional-dependent Young's modulus of Mo 2 FeB 2 under different pressures, it is found that elastic anisotropy increases with the increase in pressure. A quasiharmonic Debye model was used to investigate the thermodynamic properties of Mo 2 FeB 2 under high pressures. As the pressure increases, Mo 2 FeB 2 is increasingly hard to compress. Furthermore, Mo 2 FeB 2 is stable under different temperatures. From α analysis, it is found that Mo 2 FeB 2 possesses good volume invariance under high pressure and temperature. The value of Θ is more influenced by pressure than by temperature. The examination of the relationships of C V with the temperature and pressure shows that C V is more sensitive to temperature than to pressure.
4,446.4
2018-07-01T00:00:00.000
[ "Physics", "Materials Science" ]
Dye-Sensitized Solar Cells and Solar Module Using Polymer Electrolytes : Stability and Performance Investigations We present recent results on solid-state dye-sensitized solar cell research using a polymer electrolyte based on a poly(ethylene oxide) derivative. The stability and performance of the devices have been improved by a modification in the method of assembly of the cells and by the addition of plasticizers in the electrolyte. After 30 days of solar irradiation (100 mW cm−2) no changes in the cell’s efficiency were observed using this new method. The effect of the active area size on cell performance and the first results obtained for the first solar module composed of 4.5 cm2 solid-state solar cells are also presented. INTRODUCTION Since Grätzel's announcement of the first dye-sensitized nanocrystalline solar cell (DSSC) as a promising, low cost, clean and highly efficient device for solar energy conversion, many groups have focused their efforts on improving and comprehending this technology in its different aspects.The liquid electrolyte usually employed in this cell is still a drawback for long-term practical operation and causes substantial problems to bring DSSC onto the market.To overcome these problems, many research groups have been searching for alternatives to replace the liquid electrolytes, such as inorganic or organic hole conductors [1,2], ionic liquids [3,4] and polymers [5,6], and gel electrolytes [7,8]. Since 1996, our group has been working on DSSC using polymer electrolytes based on the copolymer poly(epichlorohydrin-co-ethylene oxide), P(EPI-EO), and the first results were published in 1999 [9].The best solar energy conversion efficiency device obtained for a solid-state DSSC (1 cm 2 of active area) was 2.6% under 10 mW cm −2 and 1.6% under 100 mW cm −2 [10].However, our results indicate that the overall efficiency has already reached the limit for a system based solely on polymers and salt/iodine mixtures.Other components must be added to this system in order to develop cells with enhanced efficiency, since the cell's performance is directly related to the polymer electrolyte ionic conductivity, which is a consequence of the I 3 − /I − mobility inside the polymer matrix. In this report we summarize our recent experimental efforts to improve the ionic conductivity of the polymer electrolyte, looking towards solar cells with improved performance.We investigated the addition of γ-butyrolactone as plasticizer to electrolytes based on P(EO-EPI) copolymers.The optimization of polymer electrolyte composition and cell assembly leads to devices with better efficiency and enhanced stability.We also discuss the performance of solar cells with large active areas and present the first results of a solar module assembled with DSSC using a polymer electrolyte. 
Thermogravimetric analysis (TGA) Thermogravimetric curves were obtained for the pure copolymer P(EO-EPI)84 : 16 and two polymer electrolyte samples containing P(EO-EPI)84 : 16 + 11 wt% of NaI/I 2 .The polymer electrolyte samples were prepared by casting the polymer electrolyte solution under a saturated solvent atmosphere, reproducing the conditions usually employed in DSSC assembly.The samples were prepared on the same day; however thermogravimetric analysis (TGA) were carried out on different days for the two samples in order to understand the role of the solvent.The TGA for the first sample was carried out immediately after sample preparation (1st day).For the other sample, the TGA was carried out on the 5th day.TGA curves were measured using a thermogravimetric analyzer model 2950 from TA Instruments.All measurements were done under continuous argon flow (100 mL min −1 ), heating from room temperature to 600 • C at 10 • C min −1 . Ionic conductivity measurements Ionic conductivity measurements were evaluated as a function of salt concentration for polymer electrolytes prepared with and without plasticizers.All samples consisted of polymer electrolyte films (thickness of circa 100 μm), obtained by casting electrolyte solutions onto a Teflon disk, under saturated atmosphere conditions.Afterwards, the films were detached from the Teflon by dipping into liquid nitrogen, and further dried under vacuum for 144 hours.Conductivity measurements were carried out in a MBraun dry box (humidity < 10 −4 %, under an argon atmosphere).The films were fixed between two mirror-polished stainless steel disc shaped electrodes (diameter = 12 mm) and the conductivity values were calculated from the data obtained by electrochemical impedance spectroscopy (EIS), using an Eco-Chemie Autolab PGSTAT 12 with FRA module coupled to a computer in the frequency of 10 6 to 10 Hz and amplitude of 10 mV applied to 0 V. Dye-sensitized TiO 2 solar cell assembly DSSC were assembled using glass-FTO (Hartford glass, Rs ≤ 10 Ωcm −2 ) and glass-ITO (Delta Technologies, Rs ≤ 30 Ωcm −2 ) electrodes as substrates for the photo and counter-electrodes, respectively.Counter-electrodes (CE) were prepared by Pt sputtering (400 Å) onto the glass-ITO electrodes.For preparation of photoelectrodes, a small aliquot of TiO 2 suspension was spread using a glass rod onto glass-FTO electrodes with an adhesive tape as spacer.The TiO 2 electrodes (thickness ∼ 6 μm) were heated at 450 • C for 30 minutes, cooled to ∼ 80 • C, and then immersed in a 1.5 × 10 −4 mol L −1 solution of the sensitizer dye cis-bis(isothiocyanato)bis(2,2 -bipyridyl-4,4 -dicarboxylic acid)-ruthenium(II), Ruthenium-535, Solaronix) in absolute ethanol for 16 hours.Afterwards, the electrodes were rinsed with ethanol and dried.A film of the polymer electrolyte was cast onto the sensitized electrodes under a saturated solvent atmosphere.An alternative procedure for polymer electrolyte deposition consisted in casting the solution onto the electrodes placed on a hot plate at 60 • C. The final assembly of the DSSC was done by pressing the CE against the sensitized electrode coated with the polymer electrolyte.An adhesive tape was placed between the two electrodes, in order to control electrolyte film thickness and to avoid short-circuiting of the cell.All devices were placed in a desiccator with P 2 O 5 for 2 hours to remove moisture. 
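For reference, ionic conductivity is usually extracted from impedance spectra of this kind by reading the bulk electrolyte resistance from the high-frequency intercept and combining it with the cell geometry, σ = L/(R_b·A). The sketch below only illustrates this arithmetic: the electrode diameter and film thickness echo the setup described above, while the resistance value is an assumed, hypothetical number rather than a measured one.

```python
import math

# Assumed example values; only the geometry mirrors the cell described above.
thickness_cm = 100e-4            # ~100 um polymer electrolyte film, in cm
electrode_diameter_cm = 1.2      # 12 mm stainless steel disc electrodes
r_bulk_ohm = 500.0               # hypothetical high-frequency intercept from EIS

area_cm2 = math.pi * (electrode_diameter_cm / 2.0) ** 2
sigma = thickness_cm / (r_bulk_ohm * area_cm2)   # S cm^-1
print(f"ionic conductivity ~ {sigma:.2e} S cm^-1")
```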
The DSSC devices were characterized on an optical bench using an Oriel Xe (Hg) 250 W lamp, lenses, water, and cutoff filters to avoid IR and UV radiation, respectively.The light intensity was measured with a Newport Optical Power Meter.Current-potential curves (J-V curves) were obtained using linear sweep voltammetry at 1 mV s −1 using an Eco-Chimie-Autolab PGSTAT 12 potentiostat.The stability tests were performed daily, irradiating each cell for 1 hour at 100 mW cm −2 in short-circuit conditions, for approximately 35 days. A solar module was built by connecting in series 13 cells with 4.5 cm 2 of active area.These cells were assembled using a glass-FTO from Saint-Gobain (EKO-EM1, 10-14 Ω) and the counter electrodes were prepared by thermal decomposition of a 5 The electric contacts between adjacent cells were done by placing copper wires on the FTOglasses and further covering them with colloidal conductive carbon glue.The solar module was placed on the rooftop of the Chemistry Building at the Universidade Estadual de Campinas (lat S522 • 49.08 ; long W47 • 04.11 ) for outdoor performance tests.Electric parameters were monitored during the day.No corrections were made for the 30% reflection and transmission losses of the FTO-glasses. Solar cell stability Figure 1 shows the variation of energy conversion efficiency (η) as a function of time for a device assembled with polymer electrolyte deposited by casting under a saturated solvent atmosphere and for a device assembled with polymer electrolyte deposited by alternative casting at 60 • C. All the measurements were carried out at a light intensity of 100 mW cm −2 .The data revealed that for the cell assembled with the polymer electrolyte deposited by casting under a saturated solvent atmosphere, there was a decay of 72% in the cell's efficiency (η from 1.0 to 0.3%) during the first 30 days.For the cell assembled with the alternative procedure, η remained constant over the same period.After the 30th day, both cells exhibit similar performance, indicating that the apparent better initial performance of the "moisturized cell" might be a consequence of residual solvent present in the electrolyte.This residual solvent, in this case, acetone, increases the ionic conductivity of the electrolyte (or the mobility of charge carriers), giving rise to higher values of photocurrent.As the solvent evaporates, the ionic conductivity decreases followed by a decrease in the cell's efficiency.A similar behavior was observed in an earlier work involving a DSSC assembled with flexible and rigid substrates (ITO-PET and ITO-glass) [11].The loss of solvent in the "moisturized cell" also results in the formation of empty holes (spaces) which contribute to increasing the overall cell resistance and affect the contact between the electrodes, lowering the fill factor.In order to fully characterize the role of the residual solvent on the cell's efficiency, thermogravimetric analyses were carried out on two polymer electrolyte samples prepared by casting the polymer electrolyte under a saturated solvent atmosphere, reproducing the conditions usually employed during DSSC assembly.Both samples were prepared at the same time; however TGA were carried out in different days.TGA for the first sample was carried out immediately after sample preparation (1st day).For the other sample, TGA was carried out on the 5th day.The TGA curve (Figure 2) for the pristine P(EPI-EO)84 : 16 sample showed only one mass loss step at 336 • C (maximum mass loss temperature, T dec ) associated to 
the thermal degradation of the copolymer, due to the loss of chlorine radicals and HCl release, in analogy to the thermal degradation of poly(vinyl chloride) [12].Both polymer electrolyte samples presented two significant mass loss steps in the temperature region below 400 • C. The first step (T < 200 • C) is associated to the loss of residual solvent and water and the second is associated to the thermal degradation of the copolymer.For the sample analyzed immediately after preparation, 50% of mass loss is attributed to residual solvent and water.For the sample kept in ambient conditions for 5 days and then analyzed, the solvent/water loss was reduced to 10%.These results indicate that the casting procedure usually employed for gel and polymer electrolytes leaves a high amount of residual solvent and water in the device that is released gradually during postassembly. The removal of solvent and water prior to device closing, however, reduces the polymer ionic conductivity, since these small molecules also serve as a second path for charge carrier motions. It was also observed that, even after the removal of the residual acetone by heating the sample at 60 • C, the performance of the devices presented a decay with time when operating under outdoor conditions [13].Investigations on the stability of the polymer electrolyte have indicated that the polymer is stable at the temperature conditions in which solar cells normally operate [14].Therefore, the devices might be degrading in a similar way to that observed for liquid solar cells.Several studies have reported that the origin of cell degradation might be related to dye desorption or degradation, accelerated by temperature [15].Another factor that might have a strong influence on cell stability is the low mobility or concentration of iodide species inside the TiO 2 film, which increases the average time in which the dye remains in the oxidized state [16].Is important to emphasize that for future application of DSSC modules, a good sealing is necessary.This requirement is important in order to minimize all the factors described above, which affect drastically the long term stability of the devices. Polymer electrolyte with enhanced ionic conductivity Although stability is an important issue in the development of solar cells and modules capable of being commercialized, the market also demands cells with performance similar to that exhibited by silicon technology.According to these observations, the processing of devices with better stability and higher performance was pursued.Thus, polymer electrolytes were prepared with the addition of γ-butyrolactone, GBL, as a plasticizer.GBL is a low molar mass organic liquid, with low vapor pressure, low viscosity and stable in the temperature range of operating cells. Figure 3 exhibits the ionic conductivity as a function of NaI concentration for different polymer electrolytes with and without γ-butyrolactone.The amount of salt that was added considered solely the polymer content. 
For polymer electrolytes prepared without the plasticizer, the higher ionic conductivity was achieved for the copolymer with higher content of EO units, P(EO-EPI)87 : 13.This behavior is in agreement with previous reports on P(EO-EPI) copolymers, where only the EO units interact effectively with the ions, and thus contribute effectively for ionic conductivity.The low interaction of the epichlorohydrin unit is due to the electronegativity of the chlorine atom which withdraws electron density from the oxygen [17].Therefore, one could expect that a large amount of EO units would give origin to a higher ionic conductivity.However, it also leads to an increase in the crystallinity degree of the copolymer (the copolymer P(EO-EPI)50 : 50 is fully amorphous while pure poly(ethylene oxide) has 80% crystallinity).The increase in crystallinity degree reduces the ionic conductivity of polymer electrolytes, since ionic transport is expected to occur predominantly in the amorphous phase of the polymer matrix.On the other hand, the high content of EO units in the copolymers P(EO-EPI)87 : 13 and P(EO-EPI)84 : 16 allows a larger amount of salt to be dissolved, due to more oxygen sites available to coordinate the Na + ions.The results presented in Figure 3 show that this effect is the predominant one, because even with the high degree of crystallinity exhibited by the copolymer with higher EO content, this polymer showed the highest values of ionic conductivity.The same trend in ionic conductivity was observed when analyzing the polymer electrolytes containing GBL as plasticizer.Again, the higher ionic conductivity values are obtained for the copolymer P(EO-EPI)87 : 13 after the addition of plasticizer.This result is in agreement with the high content of ethylene oxide in this copolymer, indicating that the plasticizer is acting directly between the polymer chains disrupting the crystalline phase. Besides, for the plasticized polymer electrolytes the ionic conductivity remains almost constant even after addition of a large amount of salt (30 wt%).This effect is not observed when no plasticizer is added.In this case, the maximum in the ionic conductivity curve is reached between 10 and 15% of salt.After this point, an abrupt decrease in the ionic conductivity is observed when the amount of salt is increased [18].Thus, the addition of plasticizer has important effects on cell performance by increasing the ionic conductivity of the electrolyte since it promotes salt dissociation and transport of charges, and also favors the cell stability since it allows a higher amount of salt to be added.Thus, as a consequence more dye cations are regenerated.The ability of GBL to coordinate ions and contribute to ionic conductivity has already been shown for gel electrolytes based on PEO and lithium salts [19].The ability to dissociate more salt was also demonstrated in an earlier work using poly(ethylene glycol methyl ether) (PEGME) as plasticizer [14].In the same work the addition of PEGME does produce devices with enhanced photo-current and efficiency responses, which are probably a consequence of the enhancement in ionic conductivity of the electrolyte [14]. Solar cell performance Figure 4 shows the J-V curves obtained for several dyesensitized solar cells with different active areas, all assembled with P(EO-EPI)87 : 13 + 15 wt% of NaI/I 2 + GBL.The parameters calculated from these curves were summarized in Table 1. 
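As a reminder of how parameters such as those in Table 1 are obtained from J-V curves, the sketch below extracts Jsc, Voc, the fill factor and the efficiency from a voltage sweep at a given incident power. The data arrays, sign convention and simple interpolation are placeholders of our own, not the authors' analysis scripts.

```python
import numpy as np

def jv_parameters(voltage_v, current_density_ma_cm2, p_in_mw_cm2=100.0):
    """Jsc, Voc, fill factor and efficiency (%) from a photovoltaic J-V sweep."""
    v = np.asarray(voltage_v, dtype=float)
    j = np.asarray(current_density_ma_cm2, dtype=float)  # photocurrent positive
    jsc = float(np.interp(0.0, v, j))        # current density at V = 0
    voc = float(np.interp(0.0, -j, v))       # voltage where J crosses zero
    p_max = float((v * j).max())             # maximum power point, mW cm^-2
    ff = p_max / (jsc * voc)
    efficiency = 100.0 * p_max / p_in_mw_cm2
    return jsc, voc, ff, efficiency

# Placeholder diode-like sweep from 0 to 0.8 V under ~100 mW cm^-2.
v = np.linspace(0.0, 0.8, 81)
j = 4.0 * (1.0 - (np.exp(v / 0.06) - 1.0) / (np.exp(0.68 / 0.06) - 1.0))
print(jv_parameters(v, j))
```

For the synthetic curve shown, the routine returns roughly Jsc of 4 mA cm−2, Voc of 0.68 V, FF near 0.7 and η near 2%, i.e., values of the same order as those discussed for these devices.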
As expected, the results show that higher photocurrent values are obtained for devices with larger active areas (see Table 1). However, the increase in current is not proportional to the area enlargement, as can be seen when the curves are evaluated in terms of the current density (Jsc), defined as the photocurrent per square centimeter (Figure 4). One could expect similar values of Jsc if the materials did not present a large increase in overall resistance with area enlargement. Figure 4 shows that the diode profile of the J-V curves is replaced by an ohmic profile when the area of the device is increased. The series resistance estimated from the J-V curves corresponds to circa 40, 85, and 200 Ω for cells with active areas of 0.25, 1.0, and 4.5 cm², respectively. This increase in internal resistance is a drawback when scaling up these devices. The same effect is also observed for liquid-electrolyte solar cells and has its origin in the increase in resistance of the FTO layer, which limits current collection at the back contact; that is, there is a loss in the maximum current that can flow through the device. Table 1 shows that the smaller the active area, the higher the Jsc, efficiency (η), and fill factor (FF). These results indicate that for small areas the substrate is not a limiting factor, as it is for large-area devices. We observed that increasing the active area by a factor of ∼4 roughly halves the efficiency. The fill factor is also reduced by 20-30% when the area is enlarged, because of the poor electrical contact in the cell, which reflects the interface with the FTO-glass. Such limitations in device performance were previously reported by Okada et al. [20]. These authors showed that the lack of grid collectors on the substrates significantly reduces the performance of dye-sensitized solar cells assembled with liquid electrolytes. Therefore, although the ionic conductivity of the polymer electrolyte investigated here is still one order of magnitude lower than that of the liquid electrolyte, the internal resistance of the solid electrolyte is not as crucial in determining the FF and η values as the FTO layer resistance. However, the electrical resistivity of the FTO-glass is not the only factor determining the overall efficiency. When the active area was increased, the TiO2 film deposited by the doctor-blade technique became less homogeneous, with poor adhesion to the FTO layer. This effect can be minimized by modifying the properties of the TiO2 colloidal suspension, or by employing deposition techniques more suitable for large-scale production, such as spin-coating or screen-printing.
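To make the figures of merit used above concrete, the short sketch below shows how Jsc, Voc, the fill factor, the efficiency, and a rough series-resistance estimate can be extracted from a measured J-V curve. The synthetic curve, the 100 mW cm−2 illumination value, and the slope-based resistance estimate are illustrative assumptions for the sketch; they are not the data behind Figure 4 or Table 1.

```python
import numpy as np

def jv_parameters(v, j, p_in=100.0):
    """Figures of merit from a J-V curve.

    v    : voltages (V), increasing from 0 to beyond Voc
    j    : current densities (mA/cm^2), decreasing, positive under illumination
    p_in : incident power density (mW/cm^2); 100 corresponds to 1 sun
    """
    jsc = np.interp(0.0, v, j)              # current density at V = 0
    voc = np.interp(0.0, -j, v)             # voltage at J = 0 (uses -j, which is increasing)
    p = v * j                               # power density (mW/cm^2)
    p_max = p.max()
    ff = p_max / (voc * jsc)                # fill factor
    eta = 100.0 * p_max / p_in              # efficiency in %
    near_voc = v > 0.9 * voc                # crude series-resistance estimate from dV/dJ near Voc
    slope = np.polyfit(j[near_voc], v[near_voc], 1)[0]
    r_series = -slope * 1000.0              # ohm cm^2; divide by the active area (cm^2) for ohms
    return jsc, voc, ff, eta, r_series

# Illustrative single-diode-like curve, not measured data
v = np.linspace(0.0, 0.75, 300)
j = 8.0 * (1.0 - (np.exp(v / 0.07) - 1.0) / (np.exp(0.70 / 0.07) - 1.0))
print(jv_parameters(v, j))
```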
Solar module performance After the characterization of individual cells, 13 DSSCs with 4.5 cm² of active area were connected in series to build a solar module made with a polymer electrolyte. Figure 5 shows the plots of the maximum power (Pmax) and open circuit potential (Voc) generated by the module during one day of irradiation (January 2005), from 6 am to 7 pm, in Campinas, Brazil. The total short circuit current of the module corresponds to the average of the current generated by each cell, and the open circuit potential corresponds to the sum of the potentials generated by each cell. The module composed of 13 solar cells connected in series presented an overall Voc of ∼8 V under irradiation from 10 am to 2 pm. The maximum power generated was 28 mW, at 1 pm, and the power integrated over the whole period of irradiation was 183 mW. The profiles of the open circuit voltage and maximum power resemble the variation of solar irradiation during the day. The complete characterization of this solar module will be published elsewhere [13]. To our knowledge, the present module was the first prototype ever built with a polymer electrolyte. Although its performance is low in comparison with other solar modules assembled with liquid electrolytes, some points must be considered. For instance, the module had an active area (58.5 cm²) which is small compared with other modules reported in the literature and, therefore, lower values of power generation are expected. Also, the lack of grid collectors lowered the expected current values, because of the low conductivity of the substrate, as discussed above. Nevertheless, the results obtained are considered very promising, and further improvements in module assembly are under investigation. Solar cells with 4.5 cm² of active area were prepared with substrates containing a metallic grid, and we observed that this modification indeed improved the performance of the devices. CONCLUSIONS For DSSC, stability and performance are two critical issues that must be considered when scaling up these devices. In this work we showed that the stability of dye-sensitized solar cells assembled with polymer electrolytes can be improved by the removal of residual solvent. Also, the performance of these devices can be further improved by the addition of a plasticizer to the polymer electrolyte, which acts directly between the polymer chains, disrupting the crystalline phase and enhancing the ionic conductivity. The performance of dye-sensitized solar cells assembled with the plasticized polymer electrolyte was evaluated as a function of the active area size. With area enlargement, the performance of the cell decreases as a consequence of the loss of current flowing through the device, mainly caused by an increase in the internal resistance of the device. Although the performance obtained for solar cells assembled with 4.5 cm² of active area was low, 13 cells were connected in series to compose the first solar module built with solid-state dye-sensitized solar cells. The module showed a very promising performance and generated 8 V under outdoor conditions. Figure 1: The variation of energy conversion efficiency under 100 mW cm−2 as a function of time for a solid-state DSSC assembled with a P(EPI-EO) + 11 wt% NaI/I2 polymer electrolyte deposited by casting under a saturated solvent atmosphere and deposited by casting at 60 °C.
Figure 5: Variation of the maximum power (Pmax = Δ) and open circuit potential (Voc = •) produced by the solar module assembled with 13 series-connected solar cells during one day of outdoor exposure.
4,893
2006-09-19T00:00:00.000
[ "Materials Science" ]
Direct Yaw-Moment Control of All-Wheel-Independent-Drive Electric Vehicles with Network-Induced Delays through Parameter-Dependent Fuzzy SMC Approach Introduction In recent years, all-wheel-independent-drive electric vehicles (AWID-EVs) have attracted increasing research effort from both academia and industry [1][2][3]. Equipped with advanced electric motors that generate torque more accurately and quickly than internal combustion engines (ICEs) and hydraulic braking systems, AWID-EVs have obvious advantages over traditional centralized-drive vehicles in terms of direct yaw-moment control (DYC) through flexible differential driving/braking [1,[3][4][5][6]. Many existing studies have focused on more flexible DYC and its integrated control with active steering for AWID-EVs [2,[7][8][9][10][11][12][13]. However, considering the presence of model uncertainties, system parameter variations, and external disturbances such as road roughness or wind gusts, ensuring the robustness of DYC is one of the principal issues for AWID-EVs. To improve and ensure the robustness of DYC, sliding mode control (SMC), which is robust and suitable for nonlinear systems such as vehicles, has been widely adopted in existing lateral dynamics control strategies for AWID-EVs. Goodarzi and Esmailzadeh used a sliding mode controller in the low-level control to improve the robustness of the vehicle dynamics control system for AWID-EVs [1]. Li et al. employed a sliding mode controller in the main loop to provide sufficient robustness for an integrated vehicle chassis control system based on DYC, active steering, and an active stabilizer [14]. Wang and Longoria designed three sliding mode controllers to ensure control-system robustness in a new coordinated-reconfigurable vehicle dynamics control strategy for AWID-EVs [15], and [16,17] presented a terminal sliding mode control method to improve the robustness of yaw-rate tracking control and torque distribution control. However, in all of these SMC-based control strategies, interference from the electronic control systems, such as network-induced delays, is not considered. In a modern AWID-EV, the control signals from controllers and the measurements from sensors are usually exchanged through an in-vehicle communication network, for example, a controller area network (CAN) or FlexRay [2]. In other words, a modern AWID-EV is a networked control system (NCS) rather than a conventional centralized control system [2,7,9,10]. Thus, the network-induced delays cannot be ignored. According to the research results in [2,7,9,10], the network-induced delays caused by CAN can reduce the control performance of DYC and even degrade the EV system. Some researchers have proposed H∞-based linear quadratic regulator (LQR) control methods against CAN network delays, as in [2,7,10]. However, research on SMC-based DYC of AWID-EVs under such delays is rare. Furthermore, as a variable-structure control technology, SMC is more vulnerable to network-induced delays than continuous control technologies such as LQR or PID [18,19]. There are numerous approaches to improving SMC. Among them, the state-dependent boundary layer tuning method [20] has been widely used to improve the robustness of SMC. However, the conventional state-dependent boundary layer tuning method is not sufficiently effective in dealing with network-induced delays.
The main work is as follows: firstly, the network-induced delays are explicitly considered in the DYC through SMC control method.Considering that the network-induced delays lead to a challenging problem for the DYC based on SMC, the chattering problem of SMC caused by the delays is analyzed in detail and the delays are determined with a command-first scheme, which is a more accurate method than the existing approaches.Furthermore, a parameterdependent fuzzy sliding mode control (FSMC) method based on the real-time information of vehicle states and system delays is proposed to ensure the robustness of DYC for AWID-EVs against network-induced delays. The remaining sections of this paper are organized as follows: In Section 2, problem formulation is described containing overall structure for DYC of networked AWID-EVs, control-oriented vehicle lateral dynamics model, and reference state model.The negative impact that resulted from NCS on the lateral dynamics model of AWID-EVs is also analyzed in detail in this section.In Section 3, an integrated statedependent and delay-dependent fuzzy SMC is proposed to improve the robustness of DYC for networked AWID-EVs.In Section 4, the results of cosimulations with Matlab5/Simulink and CarSim are demonstrated.Conclusions are summarized in Section 5. Overall Structure for DYC of Networked AWID-EVs. According to vehicle dynamics, the main working principle of the DYC of AWID-EVs is to keep the vehicle lateral motion state variables such as the yaw rate and the slip angle tracking the reference states by using the external yaw moment [21], which is directly generated by active longitude tire forces distribution of all wheels.As shown in Figure 1, a typical overall structure for DYC of networked 4-wheel-independent drive vehicles mainly consists of AWID-EV controller, controller area network (CAN), 4 motor controllers, motion state sensors, and BMS.The overall control system is integrated by CAN. The DYC function is implemented by AWID-EV controller, which is usually designed as a hierarchical controller including reference state model, motion controller unit, torque distribution unit, and estimation and processing unit as in Figure 1.The reference state model is used to solve the reference states such as the reference sideslip angle res and the reference yaw rate res according to vehicle speed and the front wheel steering angle from the driver.The reference states indicate the desired motion state by the driver.The motion controller unit is used to calculate the external yawmoment Δ to keep the sideslip angle and the yaw rate tracking the reference states.The torque distribution unit is used to solve the longitude forces , , , and for 4 motors according to the Δ .The estimation and processing unit is used to measure or estimate states such as , , . In this study, the motion controller unit, which influences the system robustness against network-induced delays, will be studied, whereas the reference state model, torque distribution unit, and estimation and processing unit are simplified. Control-Oriented Vehicle Lateral Dynamics Model. 
As shown in Figure 3, a two-degree-of-freedom (2-DOF) vehicle model, which has been widely studied as the control-oriented vehicle lateral dynamics model in various researches on DYC of vehicles [2,21], is used in the paper, where CG is the center of gravity; is the vehicle mass; is the vehicle yaw inertia; is the yaw moment; and are the longitude tire forces of front and rear wheels, respectively; and are the lateral tire forces of front and rear wheels, respectively; and are the slip angle of front and rear wheels, respectively. With the 2-DOF vehicle model, the state-space formulation of control-oriented vehicle lateral dynamics model for DYC of AWID-EVs is expressed as follows [10]: where and are the cornering stiffness of the front and rear tires, respectively. Reference State Model. In vehicle lateral motion control, the desired sideslip angle is generally selected to be zero to ensure vehicle stability, while the desired yaw rate is usually defined to ensure good handling performance [10].A widespread expression of the desired yaw rate is described in [2,10].Therefore, the reference state model can be written as follows: where 2.4.The Impact Resulted from NCS on Vehicle Lateral Dynamics Model.Firstly, without loss of generality as in [22], it is possible to make following assumption.In the NCS of DYC for AWID-EVs shown in Figure 2, the sensor node Vehicle sensors Vehicle controller Vehicle actuators (motors) periodically samples the vehicle states with fixed period , the controller node and the actuator node operate in event-driven mode which means a task will be immediately implemented once a message arrives via CAN, and the task implementation time in each node is ignored.With such assumption and without considering network-induced delays, the NCS of DYC for AWID-EVs runs like an ideal centralized control system with fixed sampling period .The control-oriented discrete-time model of the vehicle lateral dynamics along with reference state model can be written as [2] where with , , , and indicating the state vector, steering angle vector, control input vector, and the reference state vector at time , respectively.Secondly, considering the network-induced delays and the same assumption mentioned above, the control input will be delayed by CAN as shown in Figure 4. Thus, the control input of the vehicle model at time can be expressed as follows [22]: If the delay is expressed as where Υ ∈ + and V ∈ [0,1) . Then, the control-oriented discrete-time model of the vehicle lateral dynamics with network-induced delays can be expressed as follows [2]: where the coefficient of each disturbance element induced by network-induced delays is expressed as follows [2]: For analyzing, expression ( 9) is rewritten as follows: where the disturbance item () is the function of the network-induced delay and the input . Thus, the control-oriented model of vehicle lateral dynamics for networked AWID-EVs is described as a discrete-time model with the disturbance elements caused by network-induced delays. Sliding Mode Controller Design for DYC of 4WID-EV. Firstly, without considering network-induce delays, a general SMC for the discrete-time model ( 5) is designed.According to the typical design methodology of an general SMC [18], the sliding mode surface can be defined as denotes the tracing error of motion states and is the weight coefficient of elements of . 
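Because the inline symbols of the state-space model and of the delayed-input expressions above were lost in extraction, the following sketch spells out one common textbook form of the 2-DOF (bicycle) model, its zero-order-hold discretization, the steady-state reference yaw rate, and a sampling step in which the new command only takes effect partway through the interval. All parameter values, and the particular understeer-gradient form of the reference model, are illustrative assumptions rather than the exact expressions of the paper.

```python
import numpy as np
from scipy.linalg import expm

# Textbook 2-DOF (bicycle) model -- parameter values are illustrative, not the paper's
m, Iz = 1500.0, 2500.0        # mass (kg), yaw inertia (kg m^2)
lf, lr = 1.2, 1.4             # CG-to-front/rear-axle distances (m)
Cf, Cr = 60000.0, 60000.0     # cornering stiffnesses (N/rad)
vx = 40.0 / 3.6               # longitudinal speed (m/s)
Ts = 0.01                     # sampling period (s)

# States x = [sideslip angle, yaw rate]; inputs u = [front steering angle, external yaw moment]
A = np.array([[-(Cf + Cr) / (m * vx), (Cr * lr - Cf * lf) / (m * vx ** 2) - 1.0],
              [(Cr * lr - Cf * lf) / Iz, -(Cf * lf ** 2 + Cr * lr ** 2) / (Iz * vx)]])
B = np.array([[Cf / (m * vx), 0.0],
              [Cf * lf / Iz, 1.0 / Iz]])

def zoh(h):
    """Zero-order-hold discretization of (A, B) over an interval of length h."""
    M = expm(np.block([[A, B], [np.zeros((2, 4))]]) * h)
    return M[:2, :2], M[:2, 2:]

Ad, Bd = zoh(Ts)

def ref_yaw_rate(delta, vx):
    """Steady-state desired yaw rate (understeer-gradient form); the desired sideslip is zero."""
    L = lf + lr
    K = m * (lr * Cr - lf * Cf) / (L ** 2 * Cf * Cr)   # stability factor
    return vx * delta / (L * (1.0 + K * vx ** 2))

def step_with_delayed_input(x, u_prev, u_new, tau):
    """One sampling step in which the new command arrives tau seconds into the interval,
    so the previous command keeps acting for the first tau seconds (0 <= tau < Ts)."""
    _, B_tail = zoh(Ts - tau)      # integral of e^{A s} B over the last (Ts - tau) seconds
    G_new = B_tail
    G_prev = Bd - B_tail
    return Ad @ x + G_prev @ u_prev + G_new @ u_new
```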
A reach law has been widely used [15,18], which is written as where denotes ( ) at the time .With the reach law ( 14), the control law can be solved as follows: However, according to the research in [18], for a discretetime system, the state trajectory hardly occurs on the sliding mode surface (13) but zigzags around the sliding mode surface cause a quasi-sliding mode with a quasi-sliding mode band (QSMB) as in Figure 5.The QSMB is expressed as follows [18]: In order to avoid the chattering phenomenon in the QSMB, a boundary layer technique [1,14] is usually adopted by defining a saturation function sat( ) instead of sgn( ) in ( 14) and ( 15) where bl is the boundary layer width.Thus, the reach law (14) and control law (15) can be rewritten as According to the expression (16), it is necessary to tune the boundary layer width bl based on the dynamics of () in the controller design stage.When the control law ( 19) is used for the discrete-time system (5), the () dynamics can be expressed as [18]: The () dynamics can be solved by ( 5) and (12).However, when the control law ( 19) is used for the DYC of networked AWID-EVs, according to formulas (11), ( 13), (14), and (19), the () dynamics will be changed as follows: Comparing ( 21) with (20), the disturbance item (), which is caused by network-induced delays, will impose a new uncertainty item on the () dynamics and will result in adverse impact on the robustness of DYC. Integrated State-Dependent and Delay-Dependent Sliding Mode Controller Design. A state-dependent boundary layer tuning method [20], which can tune the boundary layer width actively according to the dynamic state such as () instead of a fixed boundary layer, has been widely used to improve the robustness of SMC in the real-time system applications [20]. However, according to (21), the network-induced delays cause the uncertainty of the () dynamics.Consequently, it is reasonable to create an integrated state-dependent and delaydependent method to tune the boundary layer width for SMC dynamically (see Figure 6).However, according to expressions (10), (12), and ( 21), the mathematic function between the () and the networkinduced delay is complicated nonlinear.It is too difficult to be solved online in general in-vehicle ECUs.Thus, a fuzzy logic unit is designed to deal with the complicated nonlinear mathematic problem as in Figure 6. Fuzzy Logic Unit Design for the Integrated State- Dependent and Delay-Dependent Method.As shown in Figure 6, the fuzzy logic unit is used to solve the boundary layer width bl according to the state norm || and the network-induced delay .Thus the fuzzy logic unit has two input values and one unique output value, which are, respectively, defined as follows. Input 2 Output And the fuzzy linguistic variable values are defined in Table 1.As shown in Figure 7, the triangular membership functions are used for the fuzzification of the two input variables and the one output variable.The scaling factors SF 1 and SF 2 (see Figure 6), which are tuned at the design stage by trial and extensive simulations performed in this study, are used to map the actual values of the input and output variables to their fuzzified values [23].The rule base for the proposed fuzzy logic unit is described in Table 2. 
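The reaching-law and boundary-layer ideas above can be written down compactly. The sketch below uses one widely used discrete reaching law (often attributed to Gao et al.) together with a saturation function, and computes the yaw moment by inverting the discrete model from the previous sketch. The surface weights, the gains, and the specific reaching law are assumptions for illustration, since the corresponding symbols of the paper were lost; the width phi is the quantity that the fuzzy unit of the next subsection is meant to tune online.

```python
import numpy as np

def sat(s, phi):
    """Boundary-layer saturation: linear inside |s| <= phi, hard-limited outside."""
    return np.clip(s / phi, -1.0, 1.0)

def smc_yaw_moment(x, x_ref, x_ref_next, delta, Ad, Bd, c, q, eps, Ts, phi):
    """One step of a discrete sliding mode control law of the reaching-law type.

    The sliding variable is s = c . (x - x_ref). Given the discrete model
    x[k+1] = Ad x[k] + Bd [delta, dM]^T, the yaw moment dM is chosen so that
    s[k+1] follows the reaching law s[k+1] = (1 - q Ts) s[k] - eps Ts sat(s[k]/phi).
    """
    s = float(c @ (x - x_ref))
    s_target = (1.0 - q * Ts) * s - eps * Ts * sat(s, phi)
    # split the input matrix into its steering and yaw-moment columns
    b_delta, b_dM = Bd[:, 0], Bd[:, 1]
    # solve  c . (Ad x + b_delta*delta + b_dM*dM - x_ref_next) = s_target  for dM
    dM = (s_target - float(c @ (Ad @ x + b_delta * delta - x_ref_next))) / float(c @ b_dM)
    return dM, s
```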
The variable domains such as the inputs || ∈ [0, 0.5] and ∈ [0, 20] and the output bl ∈ [0.6, 1.4] are selected based on the simulation results with a high-fidelity full-vehicle model in CarSim and Matlab in this study.The fuzzy unit employs the Mamdani Fuzzy Inference System (FIS), which is described by the following schema: where , , , , , and are fuzzy values defined as the input and output variables, respectively.The centre of area method is used in the defuzzification to solve bl . The rule base in detail between the inputs and the outputs is shown in Figure 8.Thus, once the state norm || and the network-induced delay are known, the boundary layer width can be calculated by the fuzzy unit, and these rules could be easily implemented into the microprocessor of vehicle controller with a look-up mode.According to definition (13), the state norm || is known, and the unknown network-induced delay will be discussed in the following section. Determination of the Network-Induced Time Delay. To implement the proposed method, the network-induced delay should be determined.According to the assumption above (see Figures 2 and 4), the network-induced delay , which consists of the delays in both the forward and feedback links in the th cycle, can be expressed as follows: Generally, the feedback link delay in a NCS is known, which can be measured by using a "time sampling technology" with the time tag within each received message sent by sensor node.However, the current delay in the forward link cannot be measured.In this paper, a "delay estimating technology," which is based on Network Calculus Theory [24], is introduced to estimate the forward link delay with an explicit expression as follows [24]: where large, is the upper bound of delay of a message with the jth priority sent by CAN; indicates the maximum data frame length; is the baud rate of the CAN; is the cycle length of the message with the th priority. According to the research result in [24], the networkedinduced delay of the message with the highest priority sent by CAN network can be accurately estimated through the expression (27).Therefore, this paper uses a "command-first scheme," in which the command message in the forward link is sent with the highest priority to ensure the accuracy of as follows: , setting = 0. (28) Thus, the network-induced delay can be precisely determined by the following formula: Simulation Results To study the effectiveness of the proposed controller, the cosimulations are carried out in Matlab/Simulink with a fullvehicle model constructed by CarSim (see Figure 9).The vehicle parameters used in the simulations are based on a prototyped 4WID-EV, and main parameters are listed in Table 3. The proposed controller is used in motion controller unit (see Figure 1) and a simple torque distribution strategy, which distributes the direct yaw-moment equally to the driving or braking torques of 4 wheels, is used in torque distribution unit (see Figure 1).For comparison, a conventional statedependent fuzzy SMC controller (the conventional FSMC) without considering network-induced delays is also designed.And the designed rule base of the conventional FSMC is shown in Figure 10. 
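To illustrate the Mamdani unit described above (triangular membership functions, min activation, max aggregation, and centre-of-area defuzzification), the sketch below implements a self-contained version in plain NumPy. The three linguistic terms per variable, the membership-function breakpoints, the rule table, and the assumption that the delay is expressed in milliseconds are all illustrative stand-ins for the paper's Table 1, Table 2, and Figure 7.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Illustrative linguistic terms (Small, Medium, Big); feet extend past the domain edges
STATE_MF = {"S": (-0.25, 0.0, 0.25), "M": (0.0, 0.25, 0.5), "B": (0.25, 0.5, 0.75)}
DELAY_MF = {"S": (-10.0, 0.0, 10.0), "M": (0.0, 10.0, 20.0), "B": (10.0, 20.0, 30.0)}
PHI_MF   = {"S": (0.2, 0.6, 1.0),    "M": (0.6, 1.0, 1.4),   "B": (1.0, 1.4, 1.8)}

# Illustrative rule base: larger state norm or larger delay -> wider boundary layer
RULES = {("S", "S"): "S", ("S", "M"): "M", ("S", "B"): "B",
         ("M", "S"): "M", ("M", "M"): "M", ("M", "B"): "B",
         ("B", "S"): "M", ("B", "M"): "B", ("B", "B"): "B"}

def boundary_layer_width(state_norm, delay_ms):
    """Mamdani inference: min activation, max aggregation, centroid defuzzification."""
    phi_grid = np.linspace(0.6, 1.4, 201)
    aggregated = np.zeros_like(phi_grid)
    for (ls, ld), lphi in RULES.items():
        w = min(tri(state_norm, *STATE_MF[ls]), tri(delay_ms, *DELAY_MF[ld]))
        aggregated = np.maximum(aggregated, np.minimum(w, tri(phi_grid, *PHI_MF[lphi])))
    if aggregated.sum() == 0.0:
        return 1.0                          # fall back to the nominal width
    return float((phi_grid * aggregated).sum() / aggregated.sum())

print(boundary_layer_width(0.05, 2.0), boundary_layer_width(0.4, 17.0))
```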
The reaching law parameters of the SMC are chosen as 27.5 and 0. According to the results in [22], the upper bound of the network-induced delays in a practical vehicle control system is about as high as 1.7 sampling periods. The sampling period of the closed-loop system is taken as 10 ms. Thus, the CAN-induced delays in the simulations are assumed to change randomly within [0, 1.7] sampling periods (see Figure 11). Two different steering maneuvers, which are commonly used in vehicle tests, are considered: a ramp steering maneuver and a double lane-changing maneuver. The ramp steering maneuver is often adopted in the J-turn test, and the double lane-changing maneuver is usually used in extreme cases, for example, high-speed overtaking or obstacle avoidance. In each driving maneuver, simulations are carried out in two stages. The first stage is under the ideal network condition, to evaluate the effectiveness of the controllers without network-induced delays. The second stage is to verify the robustness of the proposed controller with network-induced delays. J-Turn Steering Maneuver. In this case, the vehicle runs at a low speed of 40 km/h on a slippery road with a low friction coefficient (0.4). During the J-turn maneuver, the steering wheel angle first increases from 0 deg to 18 deg in 0.5 s, which is used to simulate a sharp turn. Then it decreases to 0 deg again in 4 s. Figure 12 shows the results of the cosimulations in the J-turn steering maneuver under the ideal network condition. It is obvious in Figures 12(a), 12(b), and 12(c) that the vehicle yaw rate can precisely track the desired reference set by the driver, the vehicle sideslip angle can be restrained within a narrow range around 0.014 rad, and the lateral acceleration can be restrained within a narrow range around 0.013 g. For the two controllers, the control performance is satisfactory. The results demonstrate the effectiveness of both the conventional FSMC and the proposed controller. Figure 13 shows the results of the cosimulations in the J-turn steering maneuver with the network-induced delays caused by CAN. It is obvious in Figures 13(a), 13(b), and 13(c) that, with the conventional FSMC, the vehicle yaw rate cannot precisely track the desired yaw rate in the transient phase but can keep tracking the desired yaw rate in the steady phase, and the yaw rate overshoot is about 10.3% in the transient phase, while, for the proposed controller, the adverse impact of the delays can be eliminated and the control performance is still satisfactory. The yaw rate overshoot is about 3.4%. Therefore, the results of the comparison explicitly illustrate the strength of the proposed controller in dealing with network-induced delays. Figure 14 shows the torque response of the 4 motors in the J-turn steering maneuver with the network-induced delays caused by CAN. It is obvious that, with the conventional FSMC, the chattering of each motor is severe in the transient phase, which would reduce the control performance of DYC and even degrade the EV system, while, for the proposed controller, the torque response of each motor is satisfactory. Figures 15 and 16 show the dynamic boundary layers of the two controllers in the J-turn steering maneuver with the network-induced delays caused by CAN. The results show that the conventional FSMC can tune the boundary layer width according to the vehicle states but not the network-induced delays. The proposed controller can tune the boundary layer width according to both the vehicle states and the network-induced delays. Double Lane-Changing Steering Maneuver. In this case, the vehicle runs at a high speed of 100 km/h on a road with a high friction coefficient (0.85).
The following cosimulation process is quite similar to that for the J-turn steering maneuver. Under different network conditions, the results are shown in Figures 17, 18, 19, 20, and 21. Under the ideal network condition, the actual vehicle yaw rate can track the desired reference very well, and the vehicle sideslip angle and lateral acceleration can also be kept regulated for both controllers. With the network-induced delays, the proposed controller can still keep the vehicle yaw rate tracking the desired reference very well, whereas the conventional FSMC results in significant oscillations due to the effect of the network-induced delays. Furthermore, a similar observation can be made for the vehicle sideslip angle and the vehicle lateral acceleration. The torque response of each motor is also shown in Figure 19. The results show that the network-induced delays have a significant impact on the stability of the closed-loop control system and can obviously reduce the robustness of the conventional FSMC. The boundary layer width tuning processes of the two controllers in the double lane-changing maneuver are shown in Figures 20 and 21. The results show that the conventional FSMC can tune the boundary layer width according to the vehicle states but not the network-induced delays, while the proposed controller can tune the boundary layer width actively according to both the vehicle states and the system delays, which makes the control system more robust. Conclusions This paper proposed an integrated state-dependent and delay-dependent fuzzy sliding mode control method to improve the robustness of the DYC of AWID-EVs subject to network-induced delays. SMC, which can effectively deal with model uncertainties, system parameter variations, and external disturbances, has been widely used to improve the robustness of the DYC of AWID-EVs instead of common continuous control technologies such as LQR and PID. On the other hand, SMC, as a variable-structure control, is more vulnerable to the system delays introduced by electronic control systems. Meanwhile, in modern AWID-EVs, the networked control system based on in-vehicle networks such as CAN inevitably imposes network-induced delays on the vehicle control system. In order to improve the robustness of the DYC of AWID-EVs, this paper first analyzed in detail the adverse impact resulting from the NCS on SMC-based DYC. Then an integrated state-dependent and delay-dependent fuzzy SMC method was proposed to improve the robustness of DYC for AWID-EVs. The results of the comparison in both typical steering maneuver cases show that the proposed controller can effectively improve the robustness of DYC for AWID-EVs subject to network-induced delays. Moreover, the proposed controller also inherits the robustness of SMC in dealing with model uncertainties, system parameter variations, and external disturbances. Figure 2: General structure of the NCS for DYC of AWID-EVs. Figure 13: Control performance in the J-turn maneuver with the network-induced delays caused by CAN. (a) Vehicle yaw rate. (b) Vehicle sideslip angle. (c) Vehicle lateral acceleration.
Figure 14: Motor torques of the 4 wheels with the two different controllers for DYC of the 4WID-EV in the J-turn maneuver with the network-induced delays caused by CAN.
Figure 15: Tuning of the boundary layer width by the conventional FSMC in the J-turn steering maneuver with the network-induced delays caused by CAN.
Figure 16: Tuning of the boundary layer width by the proposed controller in the J-turn steering maneuver with the network-induced delays caused by CAN.
Figure 18: Control performance in the double lane-changing maneuver with the network-induced delays caused by CAN. (a) Vehicle yaw rate. (b) Vehicle sideslip angle. (c) Vehicle lateral acceleration.
Figure 19: Motor torques of the 4 wheels with the two different controllers for DYC of the 4WID-EV in the double lane-changing maneuver with the network-induced delays caused by CAN.
Figure 20: Tuning of the boundary layer width by the conventional FSMC in the double lane-changing steering maneuver with the network-induced delays caused by CAN.
Figure 21: Tuning of the boundary layer width by the proposed controller in the double lane-changing steering maneuver with the network-induced delays caused by CAN.
Table 1: Fuzzy linguistic variable value terms.
Table 2: Rule base of the fuzzy logic unit.
5,229
2017-01-22T00:00:00.000
[ "Engineering", "Computer Science" ]
Non-perturbative construction of the Luttinger-Ward functional For a system of correlated electrons, the Luttinger-Ward functional provides a link between static thermodynamic quantities on the one hand and single-particle excitations on the other. The functional is useful to derive several general properties of the system as well as for the formulation of thermodynamically consistent approximations. Its original construction, however, is perturbative as it is based on the weak-coupling skeleton-diagram expansion. Here, it is shown that the Luttinger-Ward functional can be derived within a general functional-integral approach. This alternative and non-perturbative approach stresses the fact that the Luttinger-Ward functional is universal for a large class of models. I. INTRODUCTION For a system of correlated electrons in equilibrium, there are several relations 1,2,3 between static quantities which describe the thermodynamics of the system and dynamic quantities which describe its one-particle excitations. Static quantities are given by the grand potential Ω and its derivatives with respect to temperature T , chemical potential µ etc. The one-electron Green's function G = G(iω n ) or the self-energy Σ = Σ(iω n ), on the other hand, are dynamic quantities which yield (equivalent) information on an idealized (photoemission or inverse photoemission) excitation process. The Luttinger-Ward functional Φ[G] provides a special relation between static and dynamic quantities with several important properties: 4 First, the grand potential is obtained from the Luttinger-Ward functional evaluated at the exact Green's function, Φ = Φ[G], via Ω = Φ + Tr ln G − Tr ΣG . (1) Second, the functional derivative of the Luttinger-Ward functional, defines a functional Σ[G] which gives the exact selfenergy of the system if evaluated at the exact Green's function. The relation Σ = Σ[G] is independent from the Dyson equation G −1 = G −1 0 − Σ. Third, in the noninteracting limit: Finally, the functional dependence Φ[G] is completely determined by the interaction part of the Hamiltonian and independent from the one-particle part: This universality property can also be expressed as follows: Two systems with the same interaction U but different one-particle parameters t (on-site energies and hopping integrals) in the respective Hamiltonian are described by the same Luttinger-Ward functional. Using Eq. (2), this implies that the functional Σ[G] is universal, too. If Ref. 4 it is shown by Luttinger and Ward that Φ[G] can be constructed order by order in diagrammatic weak-coupling perturbation theory. Φ is obtained as the limit of the infinite series of closed diagrams without any self-energy insertions and with all free propagators in a diagram replaced by fully interacting ones (see Fig. 1). Generally, this skeleton-diagram expansion cannot be summed up to get a closed form for Φ [G]. So, unfortunately, the explicit functional dependence Φ[G] is actually unknown -even for the most simple Hamiltonians such as the Hubbard model. 5 The defining properties, Eqs. (1-4), however, are easily verified. 4 The Luttinger-Ward functional is useful for several general considerations: With the help of Φ[G] and the Dyson equation, the grand potential can be considered as a functional of the Green's function Ω = Ω[G] or as functional of the self-energy Ω = Ω[Σ], such that Ω is stationary at the physical G or Σ. 4,6 This represents a remarkable variational principle which connects static with dynamic physical quantities. 
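The inline forms of the defining relations (2)-(4) quoted above were lost in extraction. With the trace convention Tr A = T Σ_n Σ_α A_αα(iω_n) that is implied by the µ-derivative argument given later in the text, they presumably read (up to the ordering of orbital indices):

```latex
\[
  \frac{1}{T}\,\frac{\delta \Phi_U[G]}{\delta G_{\alpha\beta}(i\omega_n)}
      = \big(\Sigma_U[G]\big)_{\beta\alpha}(i\omega_n),
  \qquad
  \Phi_{U=0}[G] \equiv 0,
  \qquad
  \Phi_U[G]\ \text{depends on the interaction $U$ only, not on $t$.}
\]
```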
The Luttinger-Ward functional is also used in the microscopic derivation of some zero-or low-temperature properties of Fermi liquids as discussed in Refs. 4,7. The derivative of the functional, Eq. (2), shows the self-energy to be gradient field when considered as a functional of the Green's function, Σ[G]. This fact is related to certain symmetry properties of two-particle Green's functions as orig- inally noted by Baym and Kadanoff. 8 Furthermore, the Luttinger-Ward functional is of great importance in the construction of thermodynamically consistent approximations. So-called conserving approximations virtually start from the Luttinger-Ward functional. 6,8 This is essential to prove these approximations to respect a number of macroscopic conservation laws. The Hartree-Fock and the random-phase approximations are well-known examples. These "classical" conserving approximations are essentially limited to the weak-coupling regime. However, the Luttinger-Ward functional can also be used to construct non-perturbative approximations. This was first realized in the context of the dynamical meanfield theory (DMFT) for lattice models of correlated electrons. 9,10,11,12 Here, one exploits the universality of the functional, Eq. (4), to achieve an (approximate) mapping of the original lattice model onto a simpler impurity model with the same interaction part. The fact that Φ[G] is the same for a large class of systems, has recently been shown 13,14 to be the key feature that allows to construct several non-perturbative and thermodynamically consistent approximations. 15,16 This idea has been termed "self-energy-functional approach" (SFA). Such general considerations remain valid as long as the Luttinger-Ward functional is well defined. This presupposes that the skeleton-diagram expansion is convergent or at least that formal manipulations of diagrammatic quantities are consistent in themselves and eventually lead to physically meaningful results. Provided that one can assure that no singular point is passed when starting from the non-interacting Fermi gas and increasing the interaction strength, this seems to be plausible. A strict proof that the skeleton-diagram expansion is wellbehaved, however, will hardly be possible in most concrete situations. On the contrary, it is well known that the expansion is questionable in a number of cases, e.g. in case of a symmetry-broken state or a state that is not "adiabatically connected" to the non-interacting limit, such as a Mott insulator. The skeleton-diagram expansion may break down even in the absence of any spontaneous symmetry breaking in a (strongly correlated) state that gradually evolves from a metallic Fermi liquid. This has explicitly been shown by Hofstetter and Kehrein 17 for the narrow-band limit of the single-impurity Anderson model (see Refs. 18,19 for a discussion of possible physical consequences). Generally speaking there is no strict argument available that ensures the convergence of the skeleton-diagram expansion in the strong-coupling regime. The purpose of the present paper is to show that a construction of the Luttinger-Ward functional is possible that does not make use of the skeleton-diagram expansion. The proposed construction is based on a standard functional-integral approach and avoids the formal complications mentioned above. 
Thereby, one achieves an alternative and in particular non-perturbative route to the general properties of correlated electron systems derived from the functional, to the dynamical mean-field theory as well as to the self-energy-functional approach. It should be stressed that the intended construction of the Luttinger-Ward functional requires more than a simple definition of the quantity Φ (which could trivially be achieved by using Eq. (1): Φ ≡ Ω − Tr ln G + Tr ΣG). The task is rather to provide a functional Φ[G] with the properties Eqs. (1-4). Previous approaches are either perturbative or cannot prove Eqs. (1-4): A construction of the Luttinger-Ward functional different from the original one 4 has been given by Baym: 6 The existence of Φ[G] is deduced from a "vanishing curl condition", δΣ(1, 1 ′ )/δG(2 ′ , 2) = δΣ(2, 2 ′ )/δG(1 ′ , 1), which is derived from an analysis of the functional dependence of G on an arbitrary (timedependent) external perturbation J. However, an independent functional relation Σ = Σ[G] is required in addition. In Ref. 6 the latter is assumed to be given by the (full or by a truncated) skeleton-diagram expansion, and consequently this approach is perturbative again. As also shown in Ref. 6, the Green's function in the presence of an external field J can be derived from the grand potential This idea is in the spirit of the effective action approach. 20,21,22 Here, however, the problem is that the universality of Φ[G], Eq. (4), cannot be proven. The Luttinger-Ward functional constructed in this way explicitly depends on G 0 and thus on the one-particle parameters t. The paper is organized as follows: The next section briefly introduces the notations and the quantities of interest. The construction of the Luttinger-Ward functional is described in Sec. III. Sec. IV gives a brief discussion of the properties of the functional and its use within the dynamical mean-field theory and the selfenergy-functional approach. The results are summed up in Sec. V. II. STATIC AND DYNAMIC QUANTITIES Consider a system of electrons at temperature T and chemical potential µ in thermal equilibrium and let H = An index α refers to an arbitrary set of quantum numbers characterizing a one-particle basis state. If N is the total particle-number operator, the grand potential of the system is given by is the partition function. Here and in the following the dependence of all quantities on the one-particle parameters t and the interaction parameters U is made explicit through the subscripts. Using a matrix notation, the free one-particle Green's function is denoted by G t,0 . Its elements (for fixed µ) are given by: Here iω n = i(2n + 1)πT is the n-th Matsubara frequency. The fully interacting Green's function is denoted by G t,U . Using Grassmann variables , its elements can be written as 3 is the action. Finally, the self-energy is defined as The goal is to construct a functional Φ U [G] (where G is considered as a free variable) which vanishes in the non-interacting case, (1)], and the derivative of which is a functional are indicated by a hat and should be distinguished clearly from physical quantities A.) For the classical construction of Φ U [G] via the skeleton-diagram expansion (Fig. 1), these properties are easily verified: The universality of the functional [Eq. (4)] is obvious as any diagram depends on U and on G only; there is no explicit dependence on the free Green's function G t,0 , i.e. no explicit dependence on t. 
Since there is no zeroth-order diagram, Φ U [G] trivially vanishes for U = 0 [Eq. (3)]. The functional derivative of Φ[G] with respect to G corresponds to the removal of a propagator from each of the Φ diagrams. Taking care of topological factors, 1,4 one ends up with the skeleton-diagram expansion for the selfenergy, i.e. one gets Eq. (2). Using Eq. (2), the Dyson equation (10), and Φ t,U ≡ Φ U [G t,U ], the µ derivative of the l.h.s and of the r.h.s of Eq. (1) are equal for any fixed interaction strength U and temperature T . Namely, (∂/∂µ)(Φ t,U + Tr ln G t,U − Tr Σ t,U G t,U ) = Tr G −1 t,U (∂G t,U /∂µ)−Tr G t,U (∂Σ t,U /∂µ) = −TrG t,U = − N t,U = ∂Ω t,U /∂µ. Integration over µ then yields Eq. III. LUTTINGER-WARD FUNCTIONAL The starting point is the standard functional-integral representation of the partition function as given in Ref. 3, for example: Define the functional with and Ω U [G −1 0 ] parametrically depends on U . G −1 0 is considered as a free variable. At the (matrix inverse of the) exact free Green's function, G −1 0 = G −1 t,0 , the functional yields the exact grand potential, of the system with Hamiltonian with the property which is easily verified using Eq. (8). The strategy to be pursued is the following: G U [G −1 0 ] is a universal (t independent) functional and can be used to construct a universal relation G = G U [Σ] between the one-particle Green's function and the self-energy independent from the Dyson equation. Using the universal functionals Ω U [G −1 0 ] and G U [Σ], a universal functional F U [Σ] is defined the derivative of which essentially yields G U [Σ]. The Luttinger-Ward functional can then be obtained by Legendre transformation and is universal by construction. To start with, consider the equation This is a relation between the variables G and Σ which, for a given Σ, may be solved for G. This defines a func- For a given self-energy Σ, the Green's function G = G U [Σ] is defined to be the solution of Eq. (17). From the Dyson equation (10) and Eq. (16) it is obvious that the relation (17) is satisfied for G and Σ being the exact Green's function and the exact self-energy, G = G t,U and Σ = Σ t,U , of a system with the interaction U and some set of one-particle parameters t (H = H 0 (t) + H 1 (U )). Hence, A brief discussion of the existence and the uniqueness of possible solutions of the relation (17) Using Eq. (15) one finds: and, using Eq. (18), Here Σ U [G] is the inverse of the functional G U [Σ]. The functional can be assumed to be invertible (locally) provided that the system is not at a critical point for a phase transition (see also Ref. 13). Eq. (23) defines the Luttinger-Ward functional. A. Properties of the Luttinger-Ward functional The properties of the Luttinger-Ward functional, Eqs. (1-4), can be verified easily: Eqs. (10), (14), (19) and (20) imply and with Σ U [G t,U ] = Σ t,U the evaluation of the Luttinger-Ward functional at G = G t,U yields i.e. Eq. (1). From Eqs. (22) and (23), one immediately has: The functional derivative is easily calculated: The equation is a (highly non-linear) conditional equation for the selfenergy of the system H = H 0 (t) + H 1 (U ): Eqs. (10) and (19) show that it is satisfied by the exact self-energy Σ = Σ t,U . Note that the l.h.s of (29) is independent of t but depends on U (universality of G[Σ]), while the r.h.s is independent of U but depends on t via G −1 t,0 . The obvious problem of finding a solution of Eq. (29) is that there is no closed form for the functional G U [Σ]. Solving Eq. 
(29) is equivalent to a search for the stationary point of the grand potential as a functional of the self-energy: Similarly, one can also construct a variational principle using the Green's function as the basic variable, δ Ω t,U [G]/δG = 0. C. Dynamical mean-field theory The dynamical mean-field theory 9,10,11,12 basically applies to lattice models of correlated electrons with on-site interactions such as the Hubbard model, 5 for example. The DMFT aims at an approximate solution of Eq. (29) and is based on two ingredients: (i) It is important to note that the Luttinger-Ward functional Φ U [G] is the same for the lattice (e.g. Hubbard) model and for an impurity model (single-impurity Anderson model). Actually a (decoupled) set of impurity models has to be considered -one impurity model with the according local interaction at each site of the original lattice. This ensures that the interaction (U ) term is the same as in the lattice model. (In case of translational symmetry the a priori different impurity models can be assumed to be equivalent). As U is the same in the lattice and in the impurity model, the Luttinger-Ward functional, as well as G U [Σ], is the same. (ii) Let the lattice model be characterized by oneparticle parameters t and the impurity model by parameters t ′ . The fundamental equation (29) for the lattice model would then be solved by the exact self-energy Σ t,U . As an ansatz for an approximate solution Σ of Eq. (29), the self-energy is assumed to be local within the DMFT and to be representable as the exact self-energy of the impurity model for some parameters t ′ : The universality of the Luttinger-Ward functional (i) and the local approximation for the self-energy (ii) are sufficient to derive the DMFT: Inserting the ansatz (31) into Eq. (29) yields a conditional equation for the oneparticle parameters of the impurity model t ′ . The l.h.s becomes G U [Σ t ′ ,U ] = G t ′ ,U , i.e. the exact Green's function of the impurity model, while the r.h.s reads (G −1 t,0 − Σ t ′ ,U ) −1 . The resulting equation for the parameters t ′ can be fulfilled only locally, i.e. by equating the local elements of the respective Green's functions at the impurity and the original site respectively: This is the so-called self-consistency equation of the DMFT. 12 This consideration can be seen as an independent and, in particular, non-perturbative re-derivation of the DMFT which supplements known approaches such as the cavity method. 12 D. Self-energy-functional approach The universality of the Luttinger-Ward functional or of its Legendre transform F U [Σ] is central to the recently developed self-energy-functional approach. 13,14 The SFA is a general variational scheme which includes the DMFT as a special limit. The idea is to take as an ansatz for the self-energy of a model H = H 0 (t) + H 1 (U ) the exact self-energy Σ t ′ ,U of a so-called reference system H ′ = H 0 (t ′ ) + H 1 (U ) that shares with the original model the same interaction part. The parameters t ′ of the oneparticle part are considered as variational parameters to search for the stationary point of the grand potential as a functional of the self-energy. This means to insert the ansatz Σ = Σ t ′ ,U into the general expression (27) and to solve the Euler equation ∂ Ω t,U [Σ t ′ ,U ]/∂t ′ = 0, i.e.: for t ′ . 
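Operationally, both the DMFT of the previous subsection and the SFA amount to iterating between an exactly solvable reference system and an update of its one-particle parameters. A schematic sketch of such a self-consistency loop is given below for the Bethe lattice with nearest-neighbor hopping, where the DMFT condition reduces to the hybridization update Δ = t²G; the impurity solver is left as a user-supplied stub, and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def dmft_loop(solve_impurity, U, t_hop=0.5, beta=20.0, n_mats=256,
              n_iter=50, mix=0.7, tol=1e-6):
    """Generic DMFT self-consistency loop for the Bethe lattice (illustrative sketch).

    solve_impurity(g0_inv, U) must return the impurity Green's function G_imp(iw_n)
    for the Weiss field g0_inv = iw_n + mu - Delta(iw_n); it plays the role of the
    exactly solvable reference system with the same interaction U.
    """
    iw = 1j * np.pi * (2 * np.arange(n_mats) + 1) / beta   # fermionic Matsubara frequencies
    mu = U / 2.0                                           # half filling assumed
    delta = t_hop ** 2 / (iw + mu)                         # initial hybridization guess
    g_imp = np.zeros_like(delta)
    for _ in range(n_iter):
        g0_inv = iw + mu - delta                           # Weiss field of the reference system
        g_imp = solve_impurity(g0_inv, U)                  # "exact" solution of the reference system
        delta_new = t_hop ** 2 * g_imp                     # Bethe-lattice self-consistency update
        if np.max(np.abs(delta_new - delta)) < tol:
            break
        delta = mix * delta_new + (1.0 - mix) * delta      # linear mixing for stability
    return g_imp, delta

# Trivial check: with U = 0 the "impurity solver" is just the free propagator, and the
# loop converges to the non-interacting Bethe-lattice Green's function G = 1/(iw - t^2 G).
free_solver = lambda g0_inv, U: 1.0 / g0_inv
g, d = dmft_loop(free_solver, U=0.0)
```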
If the search for the optimum set of one-particle parameters t ′ was unrestricted, the approach would be exact in principle as the Euler equation would then be equivalent with the Euler equation (29) of the general variational principle Eq. (30). A restriction of the space of variational parameters becomes necessary to evaluate the quantity Ω t,U [Σ t ′ ,U ] which, in general, is impossible as a closed form for the functional F U [Σ] is not known. With a proper restriction, however, the reference system H ′ can be made accessible to an exact (numerical) solution which allows to derive the exact grand potential and the exact Green's function of the system H ′ . Therewith, making use of the universality of F U [Σ] and using Eqs. (23) and (25) for the reference system, Note that this implies that an exact evaluation of F U [Σ] is possible for self-energies of a exactly solvable reference system with the same interaction part as the original one. Using this result in Eq. (33), one obtains: which can be evaluated to fix t ′ and therewith the optimal self-energy and grand potential (see Refs. 13,14,15,16 for details and concrete examples). E. Luttinger's theorem Finally, the role of the Luttinger-Ward functional in the derivation of general properties of correlated electron systems shall be discussed. As an important example, the Luttinger theorem 4 is considered. For a translationally invariant system, the theorem states that in the limit T → 0 the average particle number is equal to the volume enclosed by the Fermi surface in k space: The Fermi surface is defined by the set of k points in the first Brillouin zone that satisfies µ − η k = 0 where η k are the eigenvalues of the matrix t + Σ(ω) at vanishing excitation energy ω = 0. Hence, to formulate the Luttinger theorem, one obviously has to presuppose that there is a Fermi surface at all, i.e. that Σ(ω = 0) is Hermitian. 25 The original proof of the theorem 4 is perturbative as it makes use of the skeleton-diagram expansion. A nonperturbative proof, based on topological considerations, was proposed recently 23 and is based on the assumption that the system is a Fermi liquid. To discuss the Luttinger theorem in the present context, consider the following shift transformation of the Green's function with z = 2πkT and k integer (z is a bosonic Matsubara frequency). S (z) is a linear and unitary transformation. The shift transformation leaves the functional integral Eq. (11) unchanged: To verify this invariance, one has to note that the shift of the Matsubara frequencies in G −1 0 by z can be transformed into a shift ω n → ω n − z in the Grassmann numbers: In imaginary-time representation this shift is equivalent with the multiplication of a phase: This, however, leaves the functional integral unchanged as the transformation Eq. (39) or Eq. (40) is linear and the Jacobian is unity. Note that antiperiodic boundary conditions ξ α (τ = 1/T ) = −ξ α (τ = 0) are respected for a bosonic shift frequency z. Denoting Ω t,U (z) ≡ Ω U [S (z) G −1 t,0 ], Eq. (38) states that Ω t,U (z) = Ω t,U (0). Following the steps in the construction of the Luttinger-Ward functional in Sec. III, one easily verifies that this implies Φ t,U (z) = Φ t,U (0) where Φ t,U (z) ≡ Φ U [S (z) G t,U ]. For the Legendre transform, one has F t,U (z) = F t,U (0) where F t,U (z) ≡ F U [S (z) Σ t,U ]. Now, in the limit T → 0, z becomes a continuous variable. Hence, If the limit and the derivative can be interchanged, Eqs. 
(24) and (41) imply The z dependence of the grand potential is the same as its µ dependence, and thus −(d/dz)Ω t,U (z = 0) = −∂Ω t,U /∂µ = N . The evaluation of the r.h.s in Eq. (43) is straightforward and can be found in Ref. 4, for example. It turns out that at z = 0 the r.h.s is just the Fermi-surface volume V FS . Consequently, the non-perturbative construction of the Luttinger-Ward functional allows to reduce the proof of the Luttinger theorem to the proof of Eq. (42). This, however, requires certain assumptions on the regularity of the T → 0 limit which are non-trivial generally. V. SUMMARY To summarize, the present paper has shown that the Luttinger-Ward functional can be constructed within the framework of functional integrals under fairly general assumptions. In particular, there no need for an adiabatic connection to the non-interacting limit and no expansion in the interaction strength as was required in the original approach of Luttinger and Ward. 4 The construction merely assumes the very existence of the functional integral over Grassmann fields, i.e. the existence of the Trotter limit, for the representation of the partition function. It is well known that the Luttinger-Ward functional can be employed for different purposes, some of which have been discussed here: The functional is used to derive some general properties of correlated electron systems, such as the Luttinger theorem. It allows to formulate a variational principle involving a thermodynamical potential as a functional of the Green's function or the self-energy and thereby provides a unique and thermodynamically meaningful link between static and dynamic quantities which is helpful for interpretations and for the construction of approximations. An independent derivation of the dynamical mean-field is possible using the special properties of the Luttinger-Ward functional and the universality of the functional in particular. The latter is of central importance in the context of the self-energyfunctional approach which is a general framework to construct thermodynamically consistent approximations.
5,385.8
2004-06-28T00:00:00.000
[ "Physics" ]
Large-scale comparison of bibliographic data sources: Scopus, Web of Science, Dimensions, Crossref, and Microsoft Academic We present a large-scale comparison of five multidisciplinary bibliographic data sources: Scopus, Web of Science, Dimensions, Crossref, and Microsoft Academic. The comparison considers all scientific documents from the period 2008-2017 covered by these data sources. Scopus is compared in a pairwise manner with each of the other data sources. We first analyze differences between the data sources in the coverage of documents, focusing for instance on differences over time, differences per document type, and differences per discipline. We then study differences in the completeness and accuracy of citation links. Based on our analysis, we discuss strengths and weaknesses of the different data sources. We emphasize the importance of combining a comprehensive coverage of the scientific literature with a flexible set of filters for making selections of the literature. Introduction Over the past 15 years, Web of Science (WoS; Birkle, Pendlebury, Schnell, & Adams, 2020;Schnell, 2017), Scopus (Baas, Schotten, Plume, Côté, & Karimi, 2020;Schotten, el Aisati, Meester, Steiginga, & Ross, 2017), and Google Scholar have been the three most important multidisciplinary bibliographic data sources, providing metadata on scientific documents and on citation links between these documents.It is very challenging to perform large-scale analyses using Google Scholar.WoS and Scopus have therefore long been the only options for large-scale bibliometric studies.This has changed in recent years with the introduction of two new multidisciplinary on grants, data sets, clinical trials, patents, and policy documents.Likewise, the WoS platform provides data on data sets and patents.While this data can be of great value, it falls outside the scope of our analysis. A number of major bibliographic data sources are not covered by the comparison presented in this paper.Google Scholar is not included in the comparison because we do not have large-scale access to this data source.Studies of Google Scholar typically focus on relatively small numbers of documents (e.g., Harzing, 2019;Martín-Martín, Orduna-Malea, & López-Cózar, 2018).Large-scale studies of Google Scholar (Martín-Martín, Thelwall et al., 2018;Martín-Martín et al., 2020) require an extraordinary amount of effort (Else, 2018).OpenCitations is another important data source that is not included in the comparison.OpenCitations is not included because it currently provides more or less the same data as Crossref (Heibi, Peroni, & Shotton, 2019b).This is expected to change in the near future (Peroni & Shotton, 2020), so OpenCitations deserves careful attention in future work.Finally, the comparison does not cover PubMed.This data source is not included because it does not provide data on citation links between documents. To keep the analysis manageable, we use Scopus as a baseline and we perform pairwise comparisons of Scopus with each of the other data sources.Because Scopus and WoS are the most established bibliographic data sources, it seems natural to use either of these data sources as the baseline in our analysis.We use Scopus rather than WoS as the baseline because we do not have access to the full WoS database.Our use of Scopus as the baseline does not mean that we consider Scopus to be our preferred bibliographic data source. 
The rest of this paper is organized as follows.In Section 2, we discuss the data sources included in our analysis.The procedure developed for matching documents in different data sources is described in Section 3. We present the results of our analysis in Sections 4 and 5. Conclusions and limitations are discussed in Sections 6 and 7. Data sources In our analysis, we focus on scientific documents published in the period 2008-2017.Scientific documents can be articles in journals, but also preprints, papers in conference proceedings, books, book chapters, and so on.We consider the following five bibliographic data sources: • Scopus.Scopus is a data source produced by Elsevier.Our center has full access to Scopus for documents starting from 1996.We use Scopus data delivered to our center in April 2019. • CWTS WoS.WoS is a data source produced by Clarivate Analytics.Clarivate Analytics distinguishes between the WoS Core Collection and the broader WoS platform.Our focus is on the WoS Core Collection.The WoS Core Collection consists of a number of citation indices.We consider the Science Citation Index Expanded (SCIE), the Social Sciences Citation Index (SSCI), the Arts & Humanities Citation Index (AHCI), and the Conference Proceedings Citation Index (CPCI).Our center has full access to these citation indices for documents starting from 1980.The Emerging Sources Citation Index (ESCI) and the Book Citation Index (BKCI) are also part of the WoS Core Collection.However, we do not consider these citation indices, because our center does not have access to them.We use WoS data updated until the end of 2018.The data was delivered to our center in XML format.In the rest of this paper, we use the label CWTS WoS to refer to the WoS data to which our center has access and to distinguish this data from the full WoS database. • Dimensions.Dimensions is a data source produced by Digital Science.Our center has full access to Dimensions.We use Dimensions data delivered to our center in June 2019.In addition to scientific documents, Dimensions also covers grants, data sets, clinical trials, patents, and policy documents.We do not include this content in our analysis. • Crossref.Crossref provides an infrastructure through which scientific publishers make metadata available for the content they publish.We use Crossref data downloaded in August 2018 through the public REST API of Crossref.We downloaded the data in JSON format.The following content types are not included in our analysis: book-part, book-section, component, dataset, journal-issue, peer-review, posted-content, proceedings, proceedings-series, report-series, and standard. • Microsoft Academic.Microsoft Academic is a data source produced by Microsoft.We use a dump of Microsoft Academic data from March 2019 (Microsoft Academic, 2019).Content classified as data set or patent is not included in our analysis. The different data sources have different content selection policies.WoS has an internal Editorial Development team for content selection.WoS emphasizes the selectivity of its content selection policy for the WoS Core Collection, and in particular for SCIE, SSCI, and AHCI (Birkle et al., 2020;Schnell, 2017).Scopus works together with an international group of researchers, referred to as the Content Selection and Advisory Board, to perform content selection (Baas et al., 2020;Schotten et al., 2017). 
Scopus often emphasizes the size of its database.Compared with the WoS Core Collection, it therefore appears to focus more on comprehensiveness and less on selectivity.Dimensions has an even stronger focus on comprehensiveness: "The database should not be selective but rather should be open to encompassing all scholarly content that is available for inclusion … The community should then be able to choose the filter that they wish to apply to explore the data according to their use case."(Hook et al., 2018; see also Herzog et al., 2020).Microsoft Academic has the strongest focus on comprehensiveness.It claims to replicate "the success of Google Scholar, which utilizes the massive document index from a web search engine to achieve comprehensive coverage of contemporary scholarly materials, many of which are not published and distributed through traditional channels and not assigned DOIs" (Wang et al., 2020). Crossref (Hendricks et al., 2020) is a special case.It is a registration agency for DOIs.If a scientific publisher works with Crossref to register a DOI for a document, Crossref obtains basic metadata for this document.Crossref then makes this metadata openly available (with the possible exception of the reference list, for which the publisher determines whether it is made openly available or not).In this way, Crossref has become a bibliographic data source that is of significant interest for bibliometric analyses.The completeness and the quality of the data available in Crossref depend on what publishers provide to Crossref.Crossref itself does not actively collect and enrich data. Matching of data sources Because of the large amount of data, matching documents in Scopus with documents in the other data sources is a challenging task.We developed a matching procedure that aims to provide accurate results within an acceptable amount of computing time. Our matching procedure starts by preprocessing the data obtained from the different data sources.In the case of publication years and volume, issue, page, and article numbers, the preprocessing process retains only numerical characters.All other characters are discarded.The preprocessing process also splits author names in Microsoft Academic into first and last names.In the other data sources, author names have already been split into first and last names by the data provider.In the matching procedure, we treat the first character of the first name of an author as the author's first initial.One of the other steps taken in the preprocessing process is to simplify document titles, source titles, and author names by converting non-US-ASCII characters into US-ASCII characters, for instance by removing accents. After preprocessing the data, our matching procedure identifies matching documents in a number of consecutive steps: 1. Matching of documents with the same publication year and DOI. 2. Matching of documents with the same publication year, volume number, and either begin page or article number. 3. Matching of documents with the same publication year, last name of the first author, and either begin page or article number. 4. Matching of documents with the same publication year, last name of the first author, and volume number. 5. Matching of documents with the same publication year, source ID (i.e., ISBN or ISSN), and volume number. 6. 
Matching of documents with similar titles. Two documents are considered to have a similar title if the three longest words in the title of the document in Scopus also occur in the title of the document in the other data source. In each of the above steps, the matching procedure will identify a large number of pairs of documents as candidate matches. For each pair of documents, a matching score is calculated by comparing the following attributes: (1) DOI, (2) first author (i.e., last name and first initial), (3) document title, (4) source (i.e., ISBN, ISSN, or source title), (5) publication year, (6) volume and issue number, and (7) begin and end page and article number. Each attribute for which there is a match increases the matching score. In the case of the first author, document title, and source title, the matching procedure uses the Levenshtein distance to allow for partial matches. The smaller the Levenshtein distance, the larger the increase in the matching score. A match is established between a pair of documents if the matching score of the documents exceeds a certain threshold. This threshold is set in such a way that the matching procedure favors precision over recall. If a document has a match with multiple other documents, only the match with the highest matching score is considered. When a match between two documents has been established in a particular step of the matching procedure, the documents will be excluded from the remaining steps of the procedure. The first step of our matching procedure uses the most restrictive matching criterion. The subsequent steps use matching criteria that are increasingly less restrictive. Less restrictive matching criteria yield more candidate matches, making the matching process more demanding from a computational point of view. However, the number of documents that still need to be matched decreases after each step of the matching procedure, and in this way the computational burden remains acceptable. Data sources may index multiple versions of (basically) the same document. In some cases, this happens by mistake. In many cases, however, data sources deliberately choose to index multiple versions of basically the same document, for instance a version published in a journal, a version published in a conference proceedings, and a version published in a repository. Our matching procedure creates one-to-one links between documents in Scopus and documents in the other data sources. Suppose for instance that document X is indexed both in Scopus and in Microsoft Academic. Scopus indexes only the version of document X that was published in a journal, while Microsoft Academic indexes also the version that was published in a repository. Most likely, our matching procedure will then create a link between the journal version of document X in Scopus and the journal version of document X in Microsoft Academic. For the repository version of document X in Microsoft Academic, no link will be created. Hence, this version will be seen as part of the unique content of Microsoft Academic relative to Scopus. Comparison of coverage of documents As already mentioned, in our comparison of Scopus, CWTS WoS, Dimensions, Crossref, and Microsoft Academic, we use Scopus as the baseline. Figure 1 shows the overlap of documents between Scopus and the other data sources. As can be seen in Figure 1, CWTS WoS has the smallest overlap with Scopus. Almost 18 million documents were found both in CWTS WoS and in Scopus. Dimensions and Crossref each have an overlap of 21 million documents with Scopus.
With 22 million documents, Microsoft Academic has the largest overlap with Scopus. The most striking observation probably is that Microsoft Academic covers so many more documents than the other data sources. Some documents covered by Microsoft Academic are not of a scientific nature. We for instance found news articles and blog posts about someone's private life in Microsoft Academic. To determine the extent to which such non-scientific content artificially inflates the number of documents in Microsoft Academic, we manually examined a random sample of 30 documents that are covered by Microsoft Academic and that do not have a matching document in Scopus. Of these 30 documents, there are four that are clearly not of a scientific nature. The other 26 documents can all be regarded as scientific content. Hence, although Microsoft Academic includes non-scientific content, our manual analysis indicates that this is a small share of the total content of Microsoft Academic. This means that Microsoft Academic provides a much more comprehensive coverage of the scientific literature than the other data sources. We performed a similar manual examination for a random sample of 30 documents covered by Dimensions and not by Scopus. These documents can all or almost all be considered to be of a scientific nature. However, for about one-third of the documents, the scientific contribution does not seem very substantial. These documents include meeting abstracts and other very short items, often with a length of no more than one page. Some of these documents have appeared in journals covered by Scopus, but Scopus has apparently chosen not to index these documents. We made similar observations for a random sample of 30 documents covered by Crossref and not by Scopus. Some documents in Crossref are included in a scientific journal or book, but do not contain any scientific information themselves. In our sample, we for instance found two documents listing the members of the editorial board of a journal. We also found a document containing some of the front matter of a book. The high-level statistics presented in Figure 1 are of limited value because they hide many important differences between the various data sources. We analyze these differences in the next subsections. Differences in coverage by document type The top-left plot in Figure 3 provides a breakdown by document type for all documents in Scopus and for the overlap with the other data sources. The other plots in Figure 3 provide the opposite perspective, offering a breakdown by document type for all documents in CWTS WoS, Dimensions, Crossref, and Microsoft Academic and for the overlap with Scopus. In the disciplinary classification of Scopus, documents in multidisciplinary sources (e.g., Nature, PLOS ONE, PNAS, Science, and Scientific Reports) are assigned to the Health Sciences discipline. In CWTS WoS, these documents do not have an assignment to a discipline. Some documents belong to multiple disciplines in the classifications of Scopus and CWTS WoS. We use a fractional counting approach to handle these documents. We note that in an earlier study significant inaccuracies were identified in the disciplinary classification of Scopus (Wang & Waltman, 2016). The patterns observed for Dimensions are fairly similar to those observed for CWTS WoS. However, one-third of the documents in Dimensions do not have an assignment to a discipline, which limits the conclusions that can be drawn from the results for Dimensions.
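The fractional counting approach mentioned above can be made concrete with a short sketch: a document assigned to k disciplines contributes 1/k to each of them, so every document carries a total weight of one. The sketch below is illustrative only; the function name and the toy input are ours and do not come from the study.

```python
from collections import defaultdict

def fractional_discipline_counts(doc_disciplines):
    """Distribute each document fractionally over its disciplines.

    doc_disciplines maps a document ID to the list of disciplines assigned
    to it; a document with k disciplines contributes 1/k to each of them.
    Documents without any discipline are skipped and reported separately.
    """
    counts = defaultdict(float)
    for doc_id, disciplines in doc_disciplines.items():
        if not disciplines:
            continue
        weight = 1.0 / len(disciplines)
        for discipline in disciplines:
            counts[discipline] += weight
    return dict(counts)

# Hypothetical toy input: one document with two disciplines, one with a single discipline.
example = {
    "doc-1": ["Physical Sciences", "Health Sciences"],
    "doc-2": ["Social Sciences & Humanities"],
}
print(fractional_discipline_counts(example))
# {'Physical Sciences': 0.5, 'Health Sciences': 0.5, 'Social Sciences & Humanities': 1.0}
```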
Differences in coverage by number of references The number of references in the reference list of a document may be used as a rough proxy of the scientific contribution of the document.Although there are all kinds of exceptions, a document with many references (e.g., a full research article) may often be considered to make a more substantial scientific contribution than a document with only a few references or no references at all (e.g., an editorial, a letter, or a meeting abstract). For this reason, we look at a breakdown by number of references of the overlap between the different data sources. The left plot in Figure 5 The right plot in Figure 5 offers a breakdown by number of references for all documents in CWTS WoS and for the overlap with Scopus.As can be seen, there are only a very limited number of documents in CWTS WoS that have a large number of references and that do not have a matching document in Scopus. We do not show results from the viewpoint of Dimensions, Crossref, and Microsoft Academic.In Dimensions and Microsoft Academic, we do not know the total number of references of a document.We know only the number of references that have been matched with a cited document.In Crossref, there are quite a lot of documents for which the reference list is missing because the publisher did not deposit the reference list in Crossref.For these documents, we do not know how many references they have. Differences in coverage by number of citations Like the number of references in the reference list of a document, the number of citations received by a document offers a proxy of the scientific contribution of the document.We therefore look at a breakdown by number of citations of the overlap between the different data sources. The top-left plot in Figure 6 Differences in coverage by language Scopus and CWTS WoS are dominated by documents written in English (see also Mongeon & Paul-Hus, 2016).Although they cover a small share of documents written in languages such as Chinese, French, German, Portuguese, and Spanish, 90% of the documents in Scopus and 96% of the documents in CWTS WoS are in English.In Dimensions, documents in English are slightly less dominant.86% of the documents in Dimensions are in English.We do not have language information for Crossref. Likewise, language information is missing in the Microsoft Academic data dump that we use.For most of the documents in Scopus that are not in English, we did not find a matching document in the other data sources.Only about 40% of the non-English documents in Scopus have a matching document in Dimensions or Microsoft Academic, and only 21% have a matching document in CWTS WoS.The other way around, 57% of the non-English documents in CWTS WoS have a matching document in Scopus.In Dimensions, this is the case for only 19% of the non-English documents. These statistics show that Scopus, CWTS WoS, and Dimensions differ a lot in terms of the non-English documents they cover.Although language information is missing in our Microsoft Academic data, this conclusion also extends to Microsoft Academic.In a manual examination of a random sample of 30 documents in Microsoft Academic that do not have a matching document in Scopus (see above), we found that between onethird and half of these documents are not in English. 
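The scoring step of the matching procedure described in Section 3 can also be illustrated with a small sketch. The attribute weights, the threshold, and the field names below are hypothetical placeholders chosen by us, not the values used in the study; they only show how exact matches (DOI, year, volume, pages) and Levenshtein-based fuzzy matches (author, titles) jointly raise the score, and how a high threshold favors precision over recall.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein (edit) distance."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(previous[j] + 1,                 # deletion
                               current[j - 1] + 1,              # insertion
                               previous[j - 1] + (ca != cb)))   # substitution
        previous = current
    return previous[-1]

def string_similarity(a: str, b: str) -> float:
    """Map the Levenshtein distance to [0, 1]; 1 means identical strings."""
    if not a or not b:
        return 0.0
    return 1.0 - levenshtein(a.lower(), b.lower()) / max(len(a), len(b))

def matching_score(doc_a: dict, doc_b: dict) -> float:
    """Score a candidate pair of documents; higher means more likely the same document.

    The weights are illustrative placeholders, not the values used in the study.
    """
    score = 0.0
    if doc_a.get("doi") and doc_a.get("doi") == doc_b.get("doi"):
        score += 3.0
    if doc_a.get("year") == doc_b.get("year"):
        score += 1.0
    if doc_a.get("volume") and doc_a.get("volume") == doc_b.get("volume"):
        score += 1.0
    if doc_a.get("first_page") and doc_a.get("first_page") == doc_b.get("first_page"):
        score += 1.0
    score += 2.0 * string_similarity(doc_a.get("title", ""), doc_b.get("title", ""))
    score += 1.0 * string_similarity(doc_a.get("first_author", ""), doc_b.get("first_author", ""))
    score += 1.0 * string_similarity(doc_a.get("source_title", ""), doc_b.get("source_title", ""))
    return score

THRESHOLD = 5.0  # hypothetical value; set high so that precision is favored over recall

def is_match(doc_a: dict, doc_b: dict) -> bool:
    return matching_score(doc_a, doc_b) >= THRESHOLD
```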
Comparison of completeness and accuracy of citation links To compare the completeness and accuracy of citation links, we again use Scopus as the baseline.We present pairwise comparisons between Scopus on the one hand and CWTS WoS, Dimensions, Crossref, and Microsoft Academic on the other hand. Importantly, in these pairwise comparisons, we consider only citation links between citing and cited documents that are covered by both data sources.Hence, we compare the completeness and accuracy of citation links after correcting for differences in the coverage of documents.The comparisons consider the original citation links made available in the different data sources.They do not consider citation links that may be identified using alternative citation matching algorithms (e.g., Olensky, Schmidt, & Van Eck, 2016). Figure 7 shows the overlap of citation links between Scopus and the other data sources.Relatively speaking, Scopus and CWTS WoS have the largest overlap. Nevertheless, the discrepancies between the two data sources are quite significant.1.9% of the citation links in CWTS WoS cannot be found in Scopus.Conversely, 5.8% of the citation links in Scopus cannot be found in CWTS WoS.These discrepancies may be caused by citation links that have been incorrectly identified in Scopus or CWTS WoS. They may also be due to citation links that incorrectly have not been identified in either of these data sources.This will be analyzed below. The discrepancies between Scopus on the one hand and Dimensions and Microsoft Academic on the other hand are even larger.3.4% of the citation links in Dimensions cannot be found in Scopus.Moreover, for 10.6% of the citation links in Scopus, there is no corresponding citation link in Dimensions.Likewise, 5.1% of the citation links in Microsoft Academic cannot be found in Scopus, while 12.7% of the citation links in Scopus do not have a corresponding citation link in Microsoft Academic.Finally, comparing Scopus and Crossref, we find that 57.9% of the citation links in Scopus cannot be obtained from Crossref.There are three main reasons for this.First, some publishers deposit documents in Crossref without depositing their references. Second, there are publishers (in particular ACS, Elsevier, and IEEE) that deposit references in Crossref but choose not to make these references openly available.Third, Crossref has suffered from a technical problem due to which a large number of openly available references incorrectly have not been linked to cited documents (Bilder, 2019). Figure 7 makes clear that Dimensions has an important advantage over Crossref. Our earlier results indicate that Dimensions and Crossref have a fairly similar coverage of documents, but Figure 7 shows that Dimensions provides access to many more citation links than Crossref.Although Dimensions relies strongly on data from Crossref, it enriches this data in various ways, in particular by adding citation links, but also by adding abstracts, affiliation data, and so on. Analysis of incompleteness or inaccuracy of citation links The discrepancies shown in Figure 7 between the different data sources are quite significant.To better understand these discrepancies, we now analyze the incompleteness or inaccuracy of citation links in the various data sources. 
An important explanation for the discrepancies in the citation links covered by the various data sources is that for some documents no reference list is available in some of the data sources. Missing reference lists are an important explanation for citation links in Scopus for which there is no corresponding citation link in Dimensions, Crossref, or Microsoft Academic. For 15 million citation links in Scopus, the citing document does not have a reference list in Dimensions. Likewise, there are 18 million citation links in Scopus for which the citing document does not have a reference list in Microsoft Academic. In Crossref, missing reference lists are a major problem. Missing reference lists in Crossref are responsible for 107 million of the 116 million citation links in Scopus for which there is no corresponding citation link in Crossref. Of these 107 million citation links, 27 million are due to reference lists that have not been deposited in Crossref at all and 80 million are due to reference lists that have been deposited but that the publisher has chosen not to make openly available. In CWTS WoS, missing reference lists are highly exceptional. Of the 10 million citation links in Scopus for which there is no corresponding citation link in CWTS WoS, only 0.1 million are due to missing reference lists in CWTS WoS. Finally, in Scopus, the problem of missing reference lists is more significant than in CWTS WoS but less serious than in the other data sources. CWTS WoS, Dimensions, Crossref, and Microsoft Academic each cover between 1 and 2 million citation links for which the citing document does not have a reference list in Scopus. In earlier work (Van Eck & Waltman, 2017; see also Olensky et al., 2016), we studied inaccuracies of citation links in Scopus and WoS. For WoS, three problems were identified. First, some references are missing in the reference lists of documents in WoS. Second, sometimes there is an error in a reference in WoS, such as an incorrect publication year or volume number. Third, some references in WoS have been incorrectly matched with a cited document, leading to so-called phantom citations (García-Pérez, 2010). For Scopus, the opposite problem was identified. Some references incorrectly have not been matched with a cited document, even though all information needed to make a match seems to be available. We now look in more detail at the discrepancies in the citation links covered by Scopus on the one hand and by Dimensions and Microsoft Academic on the other hand. In total, we manually examined 60 citation links that can be found in one data source but not in another. In only two cases, we found that the discrepancy is due to a mistake made by our procedure for matching documents in Scopus with documents in the other data sources (see Section 3). Hence, in a sample of 60 citing documents and 60 cited documents, we found only two mistakes made by our matching procedure. This confirms that the matching procedure is sufficiently accurate. Conclusions The value of a bibliographic data source depends on many different elements. The coverage of a data source is very important, but the completeness and accuracy of the data provided by a data source are of course important as well. For some purposes, the speed of updating is also a key concern. Another crucial issue for determining the value of a bibliographic data source is the ways in which data is made available, for instance through web interfaces, APIs, and data dumps. Finally, the conditions under which a data source can be used are of major importance (Waltman & Larivière, 2020).
While we recognize the importance of all these elements, we have chosen a specific focus for the analysis presented in this paper. In our comparison of five multidisciplinary bibliographic data sources, that is, Scopus, CWTS WoS, Dimensions, Crossref, and Microsoft Academic, our focus has been on differences between the data sources in the coverage of documents and in the completeness and accuracy of citation links. In addition, we have chosen to consider only scientific documents in our analysis. Some data sources, in particular Dimensions, also provide data on other types of entities, but this data falls outside the scope of our analysis. The main findings of our analysis can be summarized as follows: • Scopus covers a large number of documents that are not covered by CWTS WoS. This includes journal articles and also proceedings papers and book chapters. • All data sources suffer from problems of incompleteness and inaccuracy of citation links. However, our overall conclusion is that, in terms of the quality of citation links, the more established data sources, Scopus and CWTS WoS, outperform two recent alternatives, Dimensions and Microsoft Academic. Missing citation links are a significant problem in Dimensions and Microsoft Academic. These data sources also have the limitation that they do not provide data for references that have not been matched with a cited document. In the case of CWTS WoS, we are especially concerned about the problem of phantom citations (García-Pérez, 2010; Van Eck & Waltman, 2017). In Crossref, incompleteness of citation links is a major problem. This is partly caused by publishers that do not deposit references in Crossref. To a significant extent, however, this is due to publishers that do deposit references in Crossref but choose not to make these references openly available. Citation links resulting from closed references are available within Crossref's internal infrastructure, but they are not accessible to the outside world. Crossref takes these closed citation links into account in the aggregate citation counts it provides for documents (Heibi, Peroni, & Shotton, 2019a). This for instance explains why Harzing (2019) concludes that Crossref has "a similar or better coverage" of citations than Scopus and WoS. However, while closed citation links are taken into account in aggregate citation counts provided by Crossref, the individual citation links cannot be accessed. How the differences between the data sources should be assessed depends on the purpose for which the data sources are used. For many purposes, a broad coverage of documents is valuable, for instance to make sure that locally relevant research is properly taken into account (e.g., Hicks, Wouters, Waltman, De Rijcke, & Rafols, 2015) and to provide a proper coverage of the literature in disciplines in which researchers prefer to publish proceedings papers or books rather than journal articles. However, for other purposes, it may be desirable to work within a more restricted universe of documents (e.g., López-Illescas, de Moya Anegón, & Moed, 2009). For instance, to enable meaningful international comparisons of universities, documents that have not been published in international scientific journals are deliberately excluded from the calculation of the bibliometric statistics reported in the CWTS Leiden Ranking (www.leidenranking.com). In our view, there is value both in the comprehensiveness offered by Dimensions and Microsoft Academic and in the selectivity offered by Scopus and Web of Science. However, comprehensiveness and selectivity should not be seen as mutually exclusive.
In line with the philosophy of the developers of Dimensions (Herzog et al., 2020; Hook et al., 2018), we believe that data sources should be as comprehensive as possible while filters for making relevant selections of the scientific literature should be provided on top of the data. Depending on the purpose for which a data source is used, one may or may not wish to apply certain filters to restrict an analysis to a particular selection of the scientific literature. In this approach, comprehensiveness and selectivity are no longer mutually exclusive. Limitations Our work has several limitations. First of all, our analysis is not entirely up-to-date, since it is based on data sets from 2018 and 2019. The data sources studied in this paper are regularly being improved and expanded. The most recent developments are not covered by our analysis, and some of our findings may therefore not be fully representative for the current state of the different data sources. Furthermore, we have performed pairwise comparisons between Scopus and the other data sources. CWTS WoS, Dimensions, Crossref, and Microsoft Academic have not been compared directly with each other. In addition, in the case of CWTS WoS, two citation indices that are part of the WoS Core Collection, ESCI and BKCI, are not included. Finally, our procedure for matching documents in Scopus with documents in the other data sources is somewhat conservative. Avoiding false positives (i.e., documents that have been incorrectly matched) is considered more important than avoiding false negatives (i.e., documents that incorrectly have not been matched). This means that our analysis is likely to underestimate the true overlap between Scopus and the other data sources. Figure 1 shows the differences in coverage of documents between Scopus on the one hand and CWTS WoS, Dimensions, Crossref, and Microsoft Academic on the other hand. Scopus covers 27 million documents. With 23 million documents, CWTS WoS is smaller than Scopus. Dimensions and Crossref are of similar size. They cover respectively 36 and 35 million documents, which is substantially more than Scopus and CWTS WoS. Since Dimensions relies strongly on data from Crossref (Hook et al., 2018), these two data sources largely cover the same documents. Documents covered by Dimensions and not by Crossref typically seem to originate from PubMed. With 73 million documents, Microsoft Academic covers by far the largest number of documents. Figure 1. Overlap of documents between Scopus and the other data sources. Figure 2 shows the time trend in the number of documents covered by the different data sources and the overlap of documents between Scopus and the other data sources. The yearly number of documents in Dimensions and Crossref is very similar. This illustrates the strong reliance of Dimensions on data from Crossref. The number of documents in Microsoft Academic is substantially smaller in 2017 than in the preceding years. We do not know why this is the case. Figure 2. Breakdown by publication year for all documents in each data source (solid line) and for the overlap with Scopus (dashed line).
The top-left plot in Figure 3 provides a breakdown by document type for all documents in Scopus and for the overlap with the other data sources. The document type classification of Scopus is used. The plot shows that there are a substantial number of articles and proceedings papers in Scopus for which there are no matching documents in the other data sources. Microsoft Academic has the largest overlap with Scopus, followed by Dimensions and Crossref. CWTS WoS has the smallest overlap with Scopus. It can also be seen that CWTS WoS covers hardly any of the book chapters covered by Scopus. This probably can be explained by the fact that the WoS BKCI is not included in CWTS WoS. In a more recent Microsoft Academic data set, we found a similar drop in the number of documents between 2016 and 2017. Figure 3. Top-left plot: Breakdown by document type for all documents in Scopus and for the overlap with the other data sources. Other plots: Breakdown by document type for all documents in CWTS WoS, Dimensions, Crossref, and Microsoft Academic and for the overlap with Scopus (in dark blue). Differences in coverage by discipline We now compare the coverage of documents by broad discipline. In Scopus, documents are assigned to four broad disciplines: Health Sciences, Life Sciences, Physical Sciences, and Social Sciences & Humanities. In CWTS WoS, we make use of an assignment of documents to five broad disciplines: Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences, Social Sciences, and Technology. In Dimensions, we rely on a classification of documents into 22 fields, which we further aggregate into four broad disciplines: Arts & Humanities, Biomedical Sciences, Physical Sciences, and Social Sciences. Crossref also provides a classification of documents into broad disciplines, but most documents are not included in this classification. We therefore do not use this classification. We do not use the disciplinary classification of Microsoft Academic either. This classification is missing in the Microsoft Academic data dump that we use. In the disciplinary classifications of Scopus and CWTS WoS, documents are assigned to disciplines based on the source in which they have appeared. In Scopus, documents in multidisciplinary sources (e.g., Nature, PLOS ONE, PNAS, Science, and Scientific Reports) are assigned to the Health Sciences discipline. Figure 4. Top-left plot: Breakdown by discipline for all documents in Scopus and for the overlap with the other data sources. Other plots: Breakdown by discipline for all documents in CWTS WoS and Dimensions and for the overlap with Scopus (in dark blue). Figure 5. Left plot: Breakdown by number of references for all documents in Scopus and for the overlap with the other data sources. Right plot: Breakdown by number of references for all documents in CWTS WoS and for the overlap with Scopus (in dark blue).
The top-left plot in Figure 6 provides a breakdown by number of citations in Scopus for all documents in Scopus and for the overlap with the other data sources. Documents with a larger number of citations are overrepresented in the overlap between Scopus and the other data sources. Almost all documents with more than 25 citations in Scopus have a matching document in the other data sources. A limited number of documents with more than five and no more than 25 citations in Scopus do not have a matching document in the other data sources. The other plots in Figure 6 provide the opposite perspective. These plots offer a breakdown by number of citations for all documents in CWTS WoS, Dimensions, Crossref, and Microsoft Academic and for the overlap with Scopus. Almost all documents with more than five citations in CWTS WoS have a matching document in Scopus. A limited number of documents with more than five citations in Dimensions, Crossref, and Microsoft Academic do not have a matching document in Scopus. Figure 6. Top-left plot: Breakdown by number of citations for all documents in Scopus and for the overlap with the other data sources. Other plots: Breakdown by number of citations for all documents in CWTS WoS, Dimensions, Crossref, and Microsoft Academic and for the overlap with Scopus (in dark blue). Figure 7. Overlap of citation links between Scopus and the other data sources. The ideal data source provides a comprehensive coverage of the scientific literature, like Dimensions and Microsoft Academic already aim to do. In addition, the ideal data source also offers a flexible set of filters for making all kinds of selections of the literature. Important examples of such filters are expert-curated journal lists, such as those provided by Scopus, Web of Science, Directory of Open Access Journals, and many others. The fine-grained document type classifications of Scopus and Web of Science offer another example. The plot for CWTS WoS shows that meeting abstracts and book reviews are missing in Scopus, which is indeed confirmed by the Scopus Content Coverage Guide (Scopus, 2020). Also, for a substantial number of proceedings papers in CWTS WoS, there are no matching documents in Scopus. On the other hand, almost all articles in CWTS WoS can also be found in Scopus. Unfortunately, the document type classifications of Dimensions, Crossref, and Microsoft Academic are less detailed. The plots for these data sources therefore offer less information. The plots for Dimensions and Crossref show that for many articles in these data sources there is no matching document in Scopus. Importantly, however, any document published in a journal is classified as an article in Dimensions and Crossref. The other plots in Figure 3 provide the opposite perspective. Using the document type classifications of CWTS WoS, Dimensions, Crossref, and Microsoft Academic, these plots offer a breakdown by document type for all documents in CWTS WoS, Dimensions, Crossref, and Microsoft Academic and for the overlap with Scopus. This may even include content such as the list of editorial board members of a journal or the cover of a journal issue. Dimensions and Crossref also cover many more book chapters than Scopus. Only a small share of the book chapters in Dimensions and Crossref have a matching document in Scopus. For Microsoft Academic, it is hard to draw clear conclusions, since about half of the documents in Microsoft Academic do not have a document type.
We focus on discrepancies that are not due to documents for which no reference list is available. We manually examined 15 randomly selected citation links in Scopus that are not in Dimensions and 15 randomly selected citation links in Scopus that are not in Microsoft Academic. It turns out that in about two-thirds of the cases Dimensions or Microsoft Academic incorrectly has not identified a citation link. Hence, these data sources both fail to identify a substantial number of citation links. We found just a few cases in which a citation link has been incorrectly identified in Scopus. The other way around, we also performed a manual examination of 15 randomly selected citation links in Dimensions that are not in Scopus and 15 randomly selected citation links in Microsoft Academic that are not in Scopus. Of the citation links in Dimensions that are not in Scopus, about half incorrectly have not been identified in Scopus. A few citation links have been incorrectly identified in Dimensions. Of the citation links in Microsoft Academic that are not in Scopus, only one incorrectly has not been identified in Scopus. About one-third of the citation links have been incorrectly identified in Microsoft Academic. We also found a substantial number of cases in which Scopus and Microsoft Academic seem to make different choices, causing a citation link to be created in Microsoft Academic but not in Scopus. Some cases involve in-print references (i.e., references to a document that has not yet formally been published), for which Microsoft Academic tries to create a citation link, while Scopus does not seem to do so. Other cases involve references to 'secondary' versions of a document (i.e., references to for instance a preprint or a proceedings paper instead of a journal article). For such references, it seems that Microsoft Academic chooses to create a citation link to the 'primary' version of the document (usually a journal article), while Scopus does not do so. • Comparing Scopus and CWTS WoS, it turns out that Scopus covers a large number of documents that are not covered by CWTS WoS, including documents with substantial numbers of references and citations. Documents covered by Scopus and not by CWTS WoS have appeared mostly in journals and conference proceedings. We have also identified a lot of book chapters covered by Scopus and not by CWTS WoS, but this is likely to be a consequence of the fact that the WoS BKCI is not included in CWTS WoS. Almost all journal articles covered by CWTS WoS are also covered by Scopus. However, CWTS WoS covers meeting abstracts and book reviews, which are not covered by Scopus. A substantial share of the proceedings papers covered by CWTS WoS are not covered by Scopus either. • The results of the comparison of Scopus with Dimensions and Crossref are somewhat more difficult to interpret. This is partly due to limitations of the document type classifications of Dimensions and Crossref. These classifications do not distinguish between different types of documents published in journals. Dimensions and Crossref turn out to have a similar coverage of documents. This illustrates the strong reliance of Dimensions on data from Crossref. Scopus covers a large number of journal articles that are not covered by Dimensions and Crossref. The other way around, Dimensions and Crossref cover an even larger number of documents that have been published in journals and that are not covered by Scopus. However, a significant share of these
documents are meeting abstracts and other short items that do not seem to make a very substantial scientific contribution.Dimensions and Crossref also cover many book chapters and quite some proceedings papers that are not covered by Scopus.On the other hand, Scopus also covers many proceedings papers that are not covered by Dimensions and Crossref.• Of the five data sources studied in this paper, Microsoft Academic offers by far the most comprehensive coverage of the scientific literature.It covers many more documents than the other data sources.Microsoft Academic provides only a very basic document type classification, which does not give much insight into the nature of the documents covered by Microsoft Academic.However, a manual examination of documents covered by Microsoft Academic and not by Scopus has confirmed that most of these documents are of a scientific nature.It has also shown that Microsoft Academic covers many documents that are not in English. Despite the large coverage of Microsoft Academic, there are still quite a lot of documents in Scopus without a matching document in Microsoft Academic.
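As described in Section 5, the citation-link comparisons consider only links whose citing and cited documents are covered by both data sources. A minimal sketch of such a comparison is given below; the variable names and the shape of the inputs are ours and are only meant to illustrate the restriction to commonly covered documents, not the implementation used in the study.

```python
def citation_link_overlap(links_a, links_b, a_to_b):
    """Compare citation links of data source A with those of data source B.

    links_a, links_b: sets of (citing_id, cited_id) pairs, each in the source's own IDs.
    a_to_b: dict mapping document IDs in A to the matched document IDs in B.
    Only links whose citing and cited documents are covered by both sources
    (i.e., appear in the matching) are taken into account.
    """
    # Links in A whose endpoints are both matched, translated into B's IDs.
    translatable = {(c, r) for (c, r) in links_a if c in a_to_b and r in a_to_b}
    translated = {(a_to_b[c], a_to_b[r]) for (c, r) in translatable}

    # Links in B whose endpoints are both covered by the matching.
    matched_b_docs = set(a_to_b.values())
    comparable_b = {(c, r) for (c, r) in links_b
                    if c in matched_b_docs and r in matched_b_docs}

    both = translated & comparable_b
    only_a = translated - comparable_b
    only_b = comparable_b - translated
    return {"in both": len(both), "only in A": len(only_a), "only in B": len(only_b)}
```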
9,636.2
2020-05-21T00:00:00.000
[ "Computer Science" ]
Remarks on the pion-nucleon sigma-term The pion-nucleon $\sigma$-term can be stringently constrained by the combination of analyticity, unitarity, and crossing symmetry with phenomenological information on the pion-nucleon scattering lengths. Recently, lattice calculations at the physical point have been reported that find lower values by about $3\sigma$ with respect to the phenomenological determination. We point out that a lattice measurement of the pion-nucleon scattering lengths could help resolve the situation by testing the values extracted from spectroscopy measurements in pionic atoms. Introduction The pion-nucleon (πN) σ-term $\sigma_{\pi N}$ is a fundamental parameter of low-energy QCD. It measures the amount of the nucleon mass that originates from the up- and down-quarks, in contrast to the predominant contribution from the gluon fields generated by means of the trace-anomaly of the energy-momentum tensor. A precise knowledge of the σ-term has become increasingly important over the last years since it determines the scalar matrix elements $\langle N|m_q\bar{q}q|N\rangle$ for $q = u, d$ [1], which, in turn, are crucial for the interpretation of dark-matter direct-detection experiments [2][3][4] and searches for lepton flavor violation in µ → e conversion in nuclei [5,6] in the scalar-current interaction channel. Despite its importance, the value of $\sigma_{\pi N}$ has been under debate for decades. Phenomenologically, the σ-term can be extracted from πN scattering by means of a low-energy theorem that relates the scalar form factor of the nucleon evaluated at momentum transfer $t = 2M_\pi^2$ to an isoscalar πN amplitude at the Cheng-Dashen point [7,8], which unfortunately lies outside the region directly accessible to experiment. The necessary analytic continuation, performed in [9][10][11] based on the partial-wave analysis from [12,13], led to the classical value of $\sigma_{\pi N} \sim 45$ MeV [10]. Within the same formalism, this result was later contested by a new partial-wave analysis [14] that implied a significantly larger value $\sigma_{\pi N} = (64 \pm 8)$ MeV. Recently, a new formalism for the extraction of $\sigma_{\pi N}$ has been suggested relying on the machinery of Roy-Steiner equations [15][16][17][18][19], a framework designed in such a way as to maintain analyticity, unitarity, and crossing symmetry of the scattering amplitude within a partial-wave expansion. One of the key results of this approach is a robust correlation between the σ-term and the S-wave scattering lengths, where the sum extends over the two s-channel isospin channels and $a^{I_s} - \bar{a}^{I_s}$ measures the deviation of the scattering lengths from their reference values extracted from pionic atoms. In this way, the main drawback of the formalism from [9,10], the need for very precise input for a particular P-wave scattering volume, could be eliminated. In combination with the experimental constraints on the scattering lengths from pionic atoms, the low-energy theorem thus leads to a very stringent constraint [17] $\sigma_{\pi N} = (59.1 \pm 3.5)$ MeV. Given that already 4.2 MeV of the increase originate from updated corrections to the low-energy theorem (thereof 3.0 MeV from the consideration of isospin-breaking effects), the net increase in the πN amplitude compared to [10] adds up to about 10 MeV, roughly half-way between [10] and [14].
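The correlation referred to above enters the text as equation (1), whose display is not reproduced in this record; only its schematic structure can be inferred from the surrounding discussion. Under that reading, it is linear in the deviations of the S-wave scattering lengths from their pionic-atom reference values,

\[
\sigma_{\pi N} \;\simeq\; \bar{\sigma}_{\pi N} \;+\; \sum_{I_s = 1/2,\,3/2} c_{I_s}\,\bigl(a^{I_s} - \bar{a}^{I_s}\bigr),
\]

where $\bar{a}^{I_s}$ are the reference values of equation (2), $\bar{\sigma}_{\pi N}$ is the σ-term at the reference point, and the coefficients $c_{I_s}$ are those of the Roy-Steiner analysis; their numerical values are not recoverable from this record and are therefore left symbolic.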
While for a long time lattice calculations were hampered by large systematic uncertainties due to the extrapolation to physical quark masses, recently three calculations near or at the physical point appeared [20][21][22], with results collected in Table 1 (errors are added in quadrature). As we will argue in this Letter, this discrepancy should be taken very seriously as it suggests that the lattice σ-terms are at odds with experimental data on pionic atoms. An analysis of flavor SU(3) breaking suggests a σ-term closer to the small values obtained on the lattice (cf. the discussion in [23] and references therein): assuming violation of the OZI rule to be small, it should be not too far from the matrix element $\sigma_0 = (m_u + m_d)/2 \times \langle N|\bar{u}u + \bar{d}d - 2\bar{s}s|N\rangle$, which can be related to the mass splitting in the baryon ground state octet and is usually found to be of the order of 35 MeV [24,25]. However, significantly larger values have been obtained in the literature when including effects of the baryon decuplet explicitly in the loops, both in covariant and heavy-baryon approaches [26,27], making it unclear how well the perturbation series in the breaking of flavor SU(3) behaves, and the uncertainties difficult to quantify. A similar puzzle emerged recently in a lattice calculation of K → ππ [28], which quotes a value of the isospin-0 S-wave ππ phase shift at the kaon mass $\delta_0^0(M_K) = 23.8(4.9)(1.2)^\circ$, about 3σ smaller than the phenomenological result from ππ Roy equations [29,30]. A potential origin of this discrepancy could be that the strong ππ rescattering, known to be particularly pronounced in the isospin-0 S-wave, is not fully captured by the lattice calculation, given that the result for the isospin-2 phase shift $\delta_0^2(M_K) = -11.6(2.5)(1.2)^\circ$ is much closer to phenomenology. This explanation could be tested by a fully dynamical calculation of the corresponding scattering length $a_0^0$, which is predicted very accurately from the combination of Roy equations and Chiral Perturbation Theory [31], a prediction in excellent agreement with the available experimental information (see [23] for a review of the present situation). In the same way as $a_0^0$ provides a common ground where lattice, experiment, and dispersion theory can meet to resolve the discrepancy in the ππ case, a lattice measurement of the πN scattering lengths could help clarify the σ-term puzzle. In this Letter we present our arguments why we believe this to be the case. πN scattering lengths from pionic atoms The linear relation (1) between $\sigma_{\pi N}$ and the πN scattering lengths proves to be a very stable prediction of πN Roy-Steiner equations: while derived as a linear expansion around the central values (2), we checked the potential influence of higher terms by additional calculations on a grid around $\bar{a}^{I_s}$ with maximal extension of twice the standard deviation quoted in (2) in each direction, with the result that also in this extended region quadratic terms are entirely negligible. The numbers for $c_{I_s}$ given in (1) refer to this extended fit and therefore differ slightly from the ones given in [17]. An additional check of the formalism is provided by the fact that if the scattering lengths from [13] are inserted, the lower σ-term from [10] is recovered. Irrespective of the details of uncertainty estimates, this behavior clearly demonstrates that the origin for the upward shift in the central value is to be attributed to the updated input for the scattering lengths.
The latter exercise also shows that the solution linearized around the pionic-atom reference point remains approximately valid in a much larger range of parameter space: for the scattering lengths from [13] it differs from the full solution by less than 2 MeV. In pionic atoms, electromagnetic bound states of a π − and a proton/deuteron core, strong interactions leave imprints in the level spectrum that are accessible to spectroscopy measurements [32]. In pionic hydrogen (πH) the ground state is shifted compared to its position in pure QED and acquires a finite width due to the decay to π 0 n (and nγ) final states. The corresponding observables are therefore sensitive to the π − p → π − p and π − p → π 0 n scattering channels. Although the width in pionic deuterium (πD) is dominated by π − d → nn, the level shift measures the isoscalar combination of π − p → π − p and π − n → π − n, once few-body corrections are applied, and thus provides a third constraint on threshold πN physics. Experimentally, the level shifts have been measured with high accuracy at PSI [33,34], and a preliminary value for the πH width is reported in [35]. At this level of accuracy a consistent treatment of isospinbreaking [36][37][38][39] and few-body [40][41][42][43][44][45][46] corrections becomes paramount if all three constraints are to be combined in a global analysis of πH and πD [47,48]. In the isospin limit, the πN amplitude can be decomposed into two independent structures where a and b refer to the isospin label of the incoming and outgoing pion, τ a to isospin Pauli matrices, and T ± to isoscalar/isovector amplitudes. Their threshold values are related to the S -wave scattering lengths by where M π and m N are the masses of pion and nucleon, and spinors have been normalized to 1. Conventionally, the combined analysis of pionic atom data is not performed in terms of a + , but using [49] a + = a + + 1 instead, where ∆ π = M 2 π − M 2 π 0 denotes the pion mass difference, F π the pion decay constant, e = √ 4πα, and c 1 and f 1 are low-energy constants that yield a universal shift in a + away from its isospin limit that cannot be resolved from pionic atoms alone. Moreover, we have defined particle masses in the isospin limit to coincide with the charged particle masses. The central values for the s-channel isospin scattering lengths (2) have been obtained from such a combined analysis as follows [19]: first, we subtracted the contributions from virtual photons to avoid the presence of photon cuts, and second, we identified the I s = 1/2, 3/2 channels from the physical π ± p amplitudes The main motivation for this convention is that a π − p→π − p can be extracted directly from the πH level shift without any further corrections, while a π + p→π + p can be reconstructed from a π − p→π − p andã + with minimal sensitivity to a − and thus the preliminary value for the πH width. Of course, this convention has to be reflected in the precise form of the low-energy theorem for σ πN [17,19], with uncertainties included in the error given in (1). To illustrate the tension between phenomenological and lattice determinations of σ πN it is most convenient to revert this change of basis by means of where The linear relation (1) can then be recast as where the right-hand side is given by The corresponding bands in theã + -a − plane are shown in Fig. 1. 
As expected due to the isoscalar nature of the σ-term, the constraint from the lattice results is largely orthogonal to a − , although non-linear effects in the Roy-Steiner solution generate some residual dependence on a − as well. The overall picture reflects the core of the discrepancy between lattice and phenomenology: while the three bands from the pionic-atom measurements nicely overlap, the lattice σ-terms favor a considerably smaller value ofã + . 1 The exact significance again depends on if and how the three lattice measurements are combined, but in any case the fact remains that there is a disagreement with pionic-atom phenomenology around the 3σ level. 1 In this context, it is also worth stressing that changing a 3/2 alone, where most of the difference between pionic atoms and [13] resides, is not an option: in doing so, one would infer, via the Goldberger-Miyazawa-Oehme sum rule [50] that is sensitive to the isovector combination a − , a value of the πN coupling constant significantly too large compared to extractions from both nucleon-nucleon [51,52] and pion-nucleon scattering [53]; see [48]. and from lattice σ-terms (orange: BMW [20], violet: χQCD [21], brown: ETMC [22]). Lattice calculation of the πN scattering lengths The discussion in the previous section makes it apparent that another independent determination of the πN scattering lengths would imply additional information on σ πN that could help isolate the origin of the σ-term puzzle. Since a lattice calculation of a I s would proceed directly in the isospin limit, we reformulate the relation (1) accordingly. First, we assume that the isospin limit would still be defined by the charged particle masses, 2 but due to the absence of electromagnetic effects the corresponding scattering lengths as extracted from pionic atoms become where we have used c 1 = −1.07(2) GeV −1 [18] and | f 1 | ≤ 1.4 GeV −1 [36,54]. The size of the shifts compared to (2) is larger than one might naively expect from the chiral expansion, but the origin of the enhanced contributions is well understood: the bulk is generated from the term proportional to 4c 1 ∆ π /F 2 π , see (6), which appears because the operator involving c 1 in the chiral Lagrangian generates a term proportional to the quark masses and thus, by the Gell-Mann-Oakes-Renner relation, to the neutral pion mass, which results in a large tree-level shift. The remainder is mainly due to a particular class of loop topologies, so-called triangle diagrams, which are enhanced by a factor of π and an additional numerical factor. In view of these effects one might wonder about the potential impact of O(p 4 ) isospin-breaking corrections. However, both enhancement mechanisms will become irrelevant at higher orders simply due to the fact that the chiral S U(2) expansion converges with an expansion parameter M π /m N ∼ 0.15 unless large chiral logs appear or additional degrees of freedom enhance the size of low-energy constants. This leaves as potentially large O(p 4 ) corrections loop diagrams with low-energy constants c i , which are numerically enhanced due to saturation from the ∆(1232), but at this order cannot appear in triangletype topologies and therefore are not sufficiently enhanced to become relevant. Finally, similarly to c 1 at tree level, there is another artifact from the definition of the operator accompanying c 2 , which is conventionally normalized to the nucleon mass in the chiral limit. 
At O(p 4 ) this generates a quark-mass correction proportional to c 1 c 2 that renormalizes the aforementioned isospin-breaking correction involving c 1 by a factor 1 + 4c 2 M 2 π /m N = 1.27, resulting in an additional shift in a I s c by 1.6 units. Given that we do not have a full O(p 4 ) calculation, we did not include this correction in the central values in (12), but, to stay conservative, in the quoted uncertainty as an estimate of the potential impact of higher-order terms. If we finally rewrite (1) in terms of a I s c in order to illustrate the impact of a lattice determination of the pion-nucleon scattering lengths on the σ-term, we obtain where the new reference valuesā I s c refer to the central values given in (12). In this formulation the uncertainty even decreases slightly because the electromagnetic shift proportional to f 1 cancels to a large extent a similar correction in the low-energy theorem. The final uncertainty in σ πN for a given relative accuracy in the scattering lengths is shown in Fig. 2. For instance, if both isospin channels could be calculated at [5 . . . 10]%, one would obtain the σ-term with an uncertainty [5.0 . . . 8.5] MeV. We therefore see that to add conclusive information to the resolution of the σ-term puzzle by means of a lattice determination of the scattering lengths, a calculation at or below the 10% level would be required. However, also more moderate lattice information may be helpful, e.g. in case one of the scattering lengths can be obtained more accurately than the other: as Fig. 1 suggests, also a single additional band could point towards significant tension with the very precise overlap region of the three pionic-atom experimental constraints. Conclusions In this Letter we highlighted the current tension between lattice and phenomenological determinations of the πN σ-term. We argued that the puzzle becomes particularly apparent when formulated at the level of the πN scattering lengths, which play a decisive role for the phenomenological value: a linear relation between the two scattering lengths of definite isospin and the σ-term allows one to reformulate any value for the latter as a constraint on the former, pointing towards a clear disagreement between lattice and pionic-atom data. In a similar way as a direct lattice calculation of the isospin-0 S -wave ππ scattering length could help resolve a comparable discrepancy between lattice and Roy equations in K → ππ, we suggested that a lattice calculation of the πN scattering lengths would amount to another independent determination of σ πN that could help identify the origin of the discrepancy. Note added in proof While this paper was under review, another lattice calculation near the physical point appeared [55]. The quoted result σ πN = 35(6) MeV lies within the range of [20][21][22].
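The mapping from the accuracy of the scattering lengths to the uncertainty of the σ-term shown in Fig. 2 follows from propagating the scattering-length errors through the linear relation. A minimal sketch of such a propagation is given below; the coefficient and scattering-length values are placeholders to be supplied by the user (they belong to the Roy-Steiner analysis and are not reproduced here), and the quadrature combination assumes independent uncertainties in the two isospin channels.

```python
import math

def sigma_piN_uncertainty(rel_acc_12, rel_acc_32,
                          c_12, c_32, a_12, a_32, sigma_ref_err):
    """Propagate scattering-length uncertainties through a linear sigma-term relation.

    rel_acc_12, rel_acc_32: relative accuracies of the I_s = 1/2 and 3/2 scattering lengths.
    c_12, c_32, a_12, a_32: placeholder coefficients and central scattering lengths.
    sigma_ref_err: uncertainty of the sigma-term at the reference point.
    Independent contributions are combined in quadrature.
    """
    err_12 = abs(c_12 * rel_acc_12 * a_12)
    err_32 = abs(c_32 * rel_acc_32 * a_32)
    return math.sqrt(err_12**2 + err_32**2 + sigma_ref_err**2)

# Usage with purely hypothetical placeholder numbers, for illustration only:
print(sigma_piN_uncertainty(0.05, 0.05, c_12=1.0, c_32=1.0,
                            a_12=100.0, a_32=-50.0, sigma_ref_err=3.0))
```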
3,886.4
2016-02-24T00:00:00.000
[ "Physics" ]
On a Class of Conjugate Symplectic Hermite-Obreshkov One-Step Methods with Continuous Spline Extension The class of A-stable symmetric one-step Hermite–Obreshkov (HO) methods introduced by F. Loscalzo in 1968 for dealing with initial value problems is analyzed. Such schemes have the peculiarity of admitting a multiple knot spline extension collocating the differential equation at the mesh points. As a new result, it is shown that these maximal order schemes are conjugate symplectic, which is a benefit when the methods have to be applied to Hamiltonian problems. Furthermore, a new efficient approach for the computation of the spline extension is introduced, adopting the same strategy developed for the BS linear multistep methods. The performances of the schemes are tested in particular on some Hamiltonian benchmarks and compared with those of the Gauss–Runge–Kutta schemes and Euler–Maclaurin formulas of the same order. Introduction We are interested in the numerical solution of the Cauchy problem, that is the first order Ordinary Differential Equation (ODE), $y'(t) = f(y(t))$, $t \in [t_0, t_0 + T]$, associated with the initial condition $y(t_0) = y_0$, where $f : \mathbb{R}^m \to \mathbb{R}^m$, $m \geq 1$, is a $C^{R-1}$, $R \geq 1$, function on its domain and $y_0 \in \mathbb{R}^m$ is assigned. Note that there is no loss of generality in assuming that the equation is autonomous. In this context, here, we focus on one-step Hermite-Obreshkov (HO) methods ([1], p. 277). Unlike Runge-Kutta schemes, a high order of convergence is obtained with HO methods without adding stages. Clearly, there is a price for this because total derivatives of the f function are involved in the difference equation defining the method, and thus, a suitable smoothness requirement for f is necessary. Multiderivative methods have been considered often in the past for the numerical treatment of ODEs, for example also in the context of boundary value methods [2], and in the last years, there has been a renewed interest in this topic, also considering its application to the numerical solution of differential algebraic equations; see, e.g., [3][4][5][6][7][8]. Here, we consider the numerical solution of Hamiltonian problems, which in canonical form can be written as $y'(t) = J\nabla H(y(t))$, with $y = (q^T, p^T)^T$ and $J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}$, where q and p are the generalized coordinates and momenta, $H : \mathbb{R}^{2\ell} \to \mathbb{R}$ is the Hamiltonian function and $I$ stands for the identity matrix of dimension $\ell$. Note that the flow $\varphi_t : y_0 \mapsto y(t)$ associated with the dynamical system (3) is symplectic; this means that its Jacobian satisfies $\left(\frac{\partial \varphi_t(y)}{\partial y}\right)^{\!T} J\, \frac{\partial \varphi_t(y)}{\partial y} = J$, $\forall\, y \in \mathbb{R}^{2\ell}$.
( A one-step numerical method Φ h : IR 2 → IR 2 with stepsize h is symplectic if the discrete flow y n+1 = Φ h (y n ), n ≥ 0, satisfies: Recently, the class of Euler-Maclaurin HO methods for the solution of Hamiltonian problems has been analyzed in [9,10] where, despite the non-existence results of symplectic multiderivative methods shown in [11], the conjugate symplecticity of the methods was proven.Two numerical methods Φ h , Ψ h are conjugate to each other if there exists a global change of coordinate χ h , such that: with χ h = y + O(h) uniformly for y varying in a compact set and • denoting a composition operator [12].If one method is conjugate to a symplectic method is said to be conjugate symplectic, this is a less strong requirement than symplecticity, which allows the numerical solution to have the same long-time behavior of a symplectic method.Observe that the conjugate symplecticity here refers to a property of the discrete flow of the two numerical methods; this should be not confused with the group of conjugate symplectic matrices, the set of matrices M ∈ C 2 that satisfy M H J M = J, where H means Hermitian conjugate [13]. In this paper, we consider the symmetric one-step HO methods, which were analyzed in [14,15] in the context of spline applications.We call them BSHO methods, since they are connected to B-Splines, as we will show.BSHO methods have a formulation similar to that of the Euler-Maclaurin formulas, and the order two and four schemes of the two families are the same.As a new result, we prove that BSHO methods are conjugate symplectic schemes, as is the case for the Euler-Maclaurin methods [9,10], and so, both families are suited to the context of geometric integration.BSHO methods are also strictly related to BS methods [16,17], which are a class of linear multistep methods also based on B-splines suited for addressing boundary value problems formulated as first order differential problems.Note that also BS methods were firstly studied in [14,15], but at that time, they were discarded in favor of BSHO methods since; when used as initial value methods, they are not convergent.In [16,17], the same schemes have been studied as boundary value methods, and they have been recovered in particular in connection with boundary value problems.As for the BSHO methods, the discrete solution generated by a BS method can be easily extended to a continuous spline collocating the differential problem at the mesh points [18].The idea now is to rely on B-splines with multiple inner knots in order to derive one-step HO schemes.The inner knot multiplicity is strictly connected to the number of derivatives of f involved in the difference equations defining the method and consequently with the order of the method.The efficient approach introduced in [18] dealing with BS methods for the computation of the collocating spline extension is here extended to BSHO methods, working with multiple knots.Note that we adopt a reversed point of view with respect to [14,15] because we assume to have already available the numerical solution generated by the BSHO methods and to be interested in an efficient procedure for obtaining the B-spline coefficients of the associated spline. 
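The text notes that the lowest-order members of the BSHO and Euler-Maclaurin families coincide: for R = 1 one recovers the trapezoidal rule and for R = 2 the fourth-order Euler-Maclaurin scheme. As a concrete illustration, the sketch below applies the fourth-order two-derivative step in its standard Euler-Maclaurin form, u_{n+1} = u_n + (h/2)(f_n + f_{n+1}) − (h²/12)(f'_{n+1} − f'_n), to a linear Hamiltonian problem y' = Ay; the step then reduces to a linear solve whose amplification matrix is the (2,2)-Padé approximation of exp(hA), consistent with the stability function discussed in Section 2. The explicit coefficients h/2 and h²/12 are the standard Euler-Maclaurin values and are quoted here as an assumption, since the displayed formulas of the original are not reproduced in this record.

```python
import numpy as np

def order4_hermite_obreshkov_linear(A, y0, h, n_steps):
    """Fourth-order two-derivative (Euler-Maclaurin / R = 2 BSHO) method for y' = A y.

    With f(y) = A y and f'(y) = A^2 y, the implicit step
        u_{n+1} = u_n + h/2 (f_n + f_{n+1}) - h^2/12 (f'_{n+1} - f'_n)
    becomes the linear system
        (I - h/2 A + h^2/12 A^2) u_{n+1} = (I + h/2 A + h^2/12 A^2) u_n,
    i.e. the (2,2)-Pade approximation of exp(hA) applied at each step.
    """
    I = np.eye(A.shape[0])
    M_minus = I - (h / 2) * A + (h**2 / 12) * (A @ A)
    M_plus = I + (h / 2) * A + (h**2 / 12) * (A @ A)
    ys = [np.asarray(y0, dtype=float)]
    for _ in range(n_steps):
        ys.append(np.linalg.solve(M_minus, M_plus @ ys[-1]))
    return np.array(ys)

# Harmonic oscillator H(q, p) = (p^2 + q^2)/2, i.e. q' = p, p' = -q.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
traj = order4_hermite_obreshkov_linear(A, [1.0, 0.0], h=0.1, n_steps=1000)
energy = 0.5 * (traj[:, 0]**2 + traj[:, 1]**2)
print(energy.max() - energy.min())  # near machine precision: |R(iy)| = 1 for the (2,2)-Pade approximant
```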
The paper is organized as follows.In Section 2, one-step symmetric HO methods are introduced, focusing in particular on BSHO methods.Section 3 is devoted to proving that BSHO methods are conjugate symplectic methods.Then, Section 4 first shows how these methods can be revisited in the spline collocation context.Successively, an efficient procedure is introduced to compute the B-spline form of the collocating spline extension associated with the numerical solution produced by the R-th BSHO, and it is shown that its convergence order is equal to that of the numerical solution.Section 6 presents some numerical results related to Hamiltonian problems, comparing them with those generated by Euler-Maclaurin and Gauss-Runge-Kutta schemes of the same order. One-Step Symmetric Hermite-Obreshkov Methods Let t i , i = 0, . . ., N, be an assigned partition of the integration interval [t 0 , t 0 + T], and let us denote by u i an approximation of y(t i ).Any one-step symmetric Hermite-Obreshkov (HO) method can be written as follows, clearly setting u 0 := y 0 , where h n := t n+1 − t n and where u r , for j ≥ 1, denotes the total (j − 1)-th derivative of f with respect to t computed at u r , Note that u r ≈ y (j) (t r ), and on the basis of (1), the analytical computation of the j-th derivative y (j) involves a tensor of order j.For example, y (2) (t) = df dt (y(t)) = ∂f ∂y (y(t)) f(y(t)) (where ∂f ∂y becomes the Jacobian m × m matrix of f with respect to y when m > 1).As a consequence, it is u (2) r = ∂f ∂y (u r ) f(u r ).We observe that the definition in (8) implies that only u n+1 is unknown in (7), which in general is a nonlinear vector equation in IR m with respect to it. For example, the one-step Euler-Maclaurin [1] formulas of order 2s with s ∈ IN, s ≥ 1, (where the b 2i denote the Bernoulli numbers, which are reported in Table 2) belong to this class of methods.These methods will be referred to in the following with the label EMHO (Euler-Maclaurin Hermite-Obreshkov). Here, we consider another class of symmetric HO methods that can be obtained by defining as follows the polynomial P 2R , appearing in ([1], Lemma 13.3), the statement of which is reported in Lemma 1. Lemma 1.Let R be any positive integer and P 2R be a polynomial of exact degree 2R.Then, the following one-step linear difference equation, defines a multiderivative method of order 2R. Referring to the methods obtainable by Lemma 1, if in particular the polynomial P 2R is defined as in (10), then we obtain the class of methods in which we are interested here.They can be written as in (7) with, which are reported in Table 1, for R = 1, . . ., 5. In particular, for R = 1 and R = 2, we obtain the trapezoidal rule and the Euler-Maclaurin method of order four, respectively.These methods were originally introduced in the spline collocation context, dealing in particular with splines with multiple knots [14,15], as we will show in Section 4. We call them BSHO methods since we will show that they can be obtained dealing in particular with the standard B-spline basis.The stability function of the R-th one-step symmetric BSHO method is the rational function corresponding to the (R, R)-Padé approximation of the exponential function, as is that of the same order Runge-Kutta-Gauss method ( [19], p. 
72).It has been proven that methods with this stability function are A-stable ( [19], Theorem 4.12).For the proof of the statement of the following corollary, which will be useful in the sequel, we refer to [15] and {u i } N i=0 denotes the related numerical solution produced by the R-th one-step symmetric BSHO method in (7)- (11), it is: u Conjugate Symplecticity of the Symmetric One-Step BSHO Methods Following the lines of the proof given in [10], in this section, we prove that one-step symmetric BSHO methods are conjugate symplectic schemes.The following lemma, proved in [20], is the starting point of the proof, and it makes use of the B-series integrator concept.On this concern, referring to [12] for the details, here, we just recall that a B-series integrator is a numerical method that can be expressed as a formal B-series, that is it has a power series in the time step in which each term is a sum of elementary differentials of the vector field and where the number of terms is allowed to be infinite.Lemma 2. Assume that Problem (1) admits a quadratic first integral Q(y) = y T Sy (with S denoting a constant symmetric matrix) and that it is solved by a B-series integrator Φ h (y).Then, the following properties, where all formulas have to be interpreted in the sense of formal series, are equivalent: We observe that Lemma 2 is used in [21] to prove the conjugate symplecticity of symmetric linear multistep methods.With similar arguments, we prove the following theorem. Theorem 1.The map u 1 = Φ h (u 0 ) associated with the one-step method ( 7)-( 11) admits a B-series expansion and satisfies Property (a) of Lemma 2. Proof.By defining the two characteristic polynomials of the trapezoidal rule: and the shift operator E(u n ) := u n+1 , the R-th method described in (7) reads, We now consider a function v(t), a stepsize h and the shift operator E h (v(t)) := v(t + h), and we look for a continuous function v(t) that satisfies (12) in the sense of formal series (a series where the number of terms is allowed to be infinite), using the relation By multiplying both side of the previous equation by Dρ(e hD ) −1 , we obtain: that is, Now, since Bernoulli numbers define the Taylor expansion of the function z/(e z − 1) and b 0 = 1, b 1 = −1/2 and b j = 0 for the other odd j, we have: Thus, we can write that: With some algebra, we arrive at the following relation, with: Observe that δ j = 0 for j = 1, . . ., R − 1, since the method is of order 2R (see [12], Theorem 3.1, page 340).Therefore, we derive the modified initial value differential equation associated with the numerical scheme by coupling (15) with the initial condition v(t 0 ) = y 0 .Thus, the one-step symmetric BSHO methods are B-series integrators.The proof of Lemma 2 Property (a) follows exactly the same steps of the analogous proof in Theorem 1 of [10] and in [12] (Theorem 4.10, page 591). In Table 2, we report the coefficients δ R for R ≤ 5 and the corresponding Bernoulli numbers.We can observe that the truncation error in the modified initial value problem is smaller than the one of the EMHO methods of the same order, which is equal to b i /i! 
(see [10]).The conjugate symplecticity property of a numerical scheme makes it suitable for the solution of Hamiltonian problems, since a conjugate symplectic method has the same long-time behavior of a symplectic one.A well-known pair of conjugate symplectic methods is composed by the trapezoidal and midpoint rules.Observe that the trapezoidal rule belongs to both the classes BSHO and EMHO of multiderivative methods, and its characteristic polynomial plays an important role in the proof of their conjugate symplecticity. The Spline Extension A (vector) Hermite polynomial of degree 2R + 1 interpolating both u n and u n+1 respectively at t n and t n+1 together with assigned derivatives u n+1 , k = 1, . . ., R, can be computed using the Newton interpolation formulas with multiple nodes.On the other hand, in his Ph.D. thesis [15], Loscalzo proved that a polynomial of degree 2R verifying the same conditions exists if and only if (7) is fulfilled with the β coefficients defined as in (11).Note that, since the polynomial of degree 2R + 1 fulfilling these conditions is always unique and its principal coefficient is given by the generalized divided difference u[t n , . . ., t n , t n+1 , . . ., t n+1 ] of order 2R + 1 associated with the given R-order Hermite data, the n-th condition in (7) holds iff this coefficient vanishes.If all the conditions in ( 7) are fulfilled, it is possible to define a piecewise polynomial, the restriction to [t n , t n+1 ] of which coincides with this polynomial, and it is clearly a C R spline of degree 2R with breakpoints at the mesh points.Now, when the definition given in ( 8) is used together with the assumption u 0 = y 0 , the conditions in ( 7) become a multiderivative one-step scheme for the numerical solution of (1).Thus, the numerical solution u n , n = 0, . . ., N it produces and the associated derivative values defined as in ( 8) can be associated with the above-mentioned 2R degree spline extension.Such a spline collocates the differential equation at the mesh points with multiplicity R, that is it verifies the given differential equation and also the equations y (j) (t) = d (j−1) (f•y) dt j−1 (t), j = 2, . . ., R at the mesh points.This piecewise representation of the spline is that adopted in [15].Here, we are interested in deriving its more compact B-spline representation.Besides being more compact, this also allows us to clarify the connection between BSHO and BS methods previously introduced in [16][17][18].For this aim, let us introduce some necessary notation.Let S 2R , be the space of C R 2R-degree splines with breakpoints at t i , i = 0, . . ., N, where t 0 < • • • < t N = t 0 + T. Since we relate to the B-spline basis, we need to introduce the associated extended knot vector: where: which means that all the inner breakpoints have multiplicity R in T and both t 0 and t N have multiplicity 2R + 1.The associated B-spline basis is denoted as B i , i = −2R, . . ., (N − 1)R and the dimension of S 2R as D, with D := (N + 1)R + 1. 
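To make the knot-vector construction above concrete, here is a minimal sketch (added for illustration; the mesh and the value of R are arbitrary) that builds the extended knot vector T with inner multiplicity R and endpoint multiplicity 2R + 1, and checks that the resulting number of B-splines equals D = (N + 1)R + 1.

```python
import numpy as np

def extended_knots(breakpoints, R):
    """Extended knot vector for C^R splines of degree 2R on the given breakpoints:
    inner breakpoints repeated R times, both endpoints repeated 2R + 1 times."""
    t0, tN = breakpoints[0], breakpoints[-1]
    inner = np.repeat(breakpoints[1:-1], R)
    return np.concatenate([np.full(2 * R + 1, t0), inner, np.full(2 * R + 1, tN)])

N, R = 10, 3                       # N + 1 mesh points, BSHO method of order 2R
mesh = np.linspace(0.0, 1.0, N + 1)
T = extended_knots(mesh, R)
degree = 2 * R
# Number of B-splines on a knot vector = len(T) - degree - 1
dim = len(T) - degree - 1
print(dim, (N + 1) * R + 1)        # both equal D = (N + 1)R + 1
```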
The mentioned result proven by Loscalzo is equivalent to saying that, if the β coefficients are defined as in (11), any C R spline of degree 2R with breakpoints at the mesh points fulfills the relation in (7), where u (j) n denotes the j-th spline derivative at t n .In turn, this is equivalent to saying that such a relation holds for any element of the B-spline basis of S 2R .Thus, setting α := (−1 ; 1) T ∈ IR 2 and β (i) considering the local support of the B-spline basis, we have that (α; β (1) ; ...; β (R) ), where the punctuation mark ";" means vertical catenation (to make a column-vector), can be also characterized as the unique solution of the following linear system, G (n) (α; β (1) ; . . .; where e 2R+2 = (0; . . .; 0; 1) ∈ IR 2R+2 and: R+1 defined as, (19) where B (j) i denotes the j-th derivative of B i .Note that the last equation in (17), 2β In order to prove the non-singularity of the matrix G (n) , we need to introduce the following definition, Definition 1.Given a non-decreasing set of abscissas Θ := {θ i } M i=0 , we say that a function g 1 agrees with another function g 2 at Θ if g (j) 2 (θ i ), j = 0, . . ., m i − 1, i = 0, . . ., M, where m i denotes the multiplicity of θ i in Θ. Proof.Observe that the restriction to I n = [t n , t n+1 ] of the splines in S 2R generates Π 2R since there are no inner knots in I n .Then, restricting to I n , Π 2R can be also generated by the B-splines of S 2R not vanishing in I n , that is from B (n−2)R , . . ., B nR .Since the polynomial in Π 2R agreeing with a given function in: is unique, it follows that also the corresponding (2R + 1) × (2R + 1) matrix collocating the spline basis active in I n is nonsingular.Such a matrix is the principal submatrix of G (n)T of order 2R + 1.Thus now, considering that the restriction to I n of any function in S 2R is a polynomial of degree 2R, we prove by reductio ad absurdum that the last row of G (n) cannot be a linear combination of the other rows.In fact, in the opposite case, there would exist a polynomial P of degree 2R such that P(t n ) = P(t n+1 ) = 0, P (t n ) = P (t n+1 ) = −1, and P (j) (t n ) = P (j) (t n+1 ) = 0, j = 2, . . ., R. Considering the specific interpolation conditions, this P does not fulfill the n-th condition in (7).This is absurd, since Loscalzo [15] has proven that such a condition is equivalent to requiring degree reduction for the unique polynomial of degree less than or equal to 2R + 1, fulfilling R + 1 Hermite conditions at both t n and t n+1 . Note that this different form for defining the coefficient of the R-th BSHO scheme is analogous to that adopted in [17] for defining a BS method on a general partition.However, in this case, the coefficients of the scheme do not depend on the mesh distribution, so there is no need to determine them solving the above linear system.On the other hand, having proven that the matrix G (n) is nonsingular will be useful in the following for determining the B-spline form of the associated spline extension. Thus, let us now see how the B-spline coefficients of the spline in S 2R associated with the numerical solution generated by the R-th BSHO can be efficiently obtained, considering that the following conditions have to be imposed, Now, we are interested in deriving the B-spline coefficients c i , i = −2R, . . ., (N − 1)R, of s 2R , Relying on the representation in (21), all the conditions in (20) can be re-written in the following compact matrix form, where c = (c −2R ; . . 
.; c (N−1)R ) ∈ IR mD , with c j ∈ IR m , I m is the identity matrix of size m × m, D is the dimension of the spline space previously introduced and where: A := (A 1 ; A 2 ; . . .; A R+1 ) , with each A being a (R + 1)-banded matrix of size (N + 1) × D (see Figure 1) with entries defined as follows: The following theorem related to the rectangular linear system in (22) ensures that the collocating spline s 2R is well defined. Theorem 2. The rectangular linear system in (22) has always a unique solution, if the entries of the vector on its right-hand side satisfy the conditions in (7) with the β coefficients given in (11). Proof.The proof is analogous to that in [18] (Theorem 1), and it is omitted. We now move to introduce the strategy adopted for an efficient computation of the B-spline coefficients of s 2R . Efficient Spline Computation Concerning the computation of the spline coefficient vectors: the unique solution of ( 22) can be computed with several different strategies, which can have very different computational costs and can produce results with different accuracy when implemented in finite arithmetic.Here, we follow the local strategy used in [18].Taking into account the banded structure of A i , i = 1, . . ., R + 1, we can verify that ( 22) implies the following relations, where u = (u 0 ; . . .; u N ), c (i) := (c (i−3)R ; . . .; c (i−1)R ) ∈ IR m (2R+1) , i = 1, . . ., N and: As a consequence, we can also write that, where ĉ(i) := (c (i) ; 0) ∈ IR m (2R+2) .Now, for all integers r < 2R + 2, we can define other R + 1 auxiliary vectors α(R) i,r , β(R) l,i,r , l = 1, . . ., R ∈ IR 2 , defined as the solution of the following linear system, where e r is the r-th unit vector in IR 2R+2 (that is the auxiliary vectors define the r-th column of the inverse of G (i) ).Then, we can write, From this formula, considering (25), we can conclude that: Thus, solving all the systems (26) for i = 1, . . ., N, r = r 1 (i), . . ., r 2 (i), with: all the spline coefficients are obtained.Note that, with this approach, we solve D auxiliary systems, the size of which does not depend on N, using only N different coefficient matrices.Furthermore, only the information at t i−1 and t i is necessary to compute c (i−3)R+r−1 .Thus, the spline can be dynamically computed at the same time the numerical solution is advanced at a new time value.This is clearly of interest for a dynamical adaptation of the stepsize. In the following subsection, relying on its B-spline representation, we prove that the convergence order of s 2R to y is equal to that of the numerical solution.This result was already available in [15] (see Theorem 4.2 in the reference), but proven with different longer arguments. Spline Convergence Let us assume the following quasi-uniformity requirement for the mesh, where M l and M u are positive constants not depending on h, with M l ≤ 1 and M u ≥ 1.Note that this requirement is a standard assumption in the refinement strategies of numerical methods for ODEs.We first prove the following result, that will be useful in the sequel. Proposition 2. If y ∈ S 2R and so in particular if y is a polynomial of degree at most 2R, then: where y n := y(t n ), y n := d j y d j t (t n ), j = 1, . . ., R, n = 0, . . ., N, and the spline extension s 2R coincides with y. Proof.The result follows by considering that the divided difference vanishes and, as a consequence, the local truncation error of the methods is null. 
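For reference, once a coefficient vector is available, the B-spline form s_{2R}(t) = Σ_i c_i B_i(t) used throughout this section can be evaluated with standard routines. A minimal sketch assuming SciPy; the coefficients below are random placeholders rather than the output of a BSHO integration, and the mesh is arbitrary.

```python
import numpy as np
from scipy.interpolate import BSpline

R = 2
degree = 2 * R
mesh = np.linspace(0.0, 1.0, 6)                       # N = 5 subintervals
# Extended knot vector: endpoints with multiplicity 2R + 1, inner breakpoints with multiplicity R.
T = np.concatenate([np.full(degree + 1, mesh[0]),
                    np.repeat(mesh[1:-1], R),
                    np.full(degree + 1, mesh[-1])])
D = len(T) - degree - 1                               # dimension (N + 1)R + 1
c = np.random.default_rng(0).standard_normal(D)       # placeholder B-spline coefficients

s = BSpline(T, c, degree)                             # s(t) = sum_i c_i B_i(t)
ts = np.linspace(0.0, 1.0, 7)
print(s(ts))                                          # spline values at the sample points
print(s.derivative(1)(ts))                            # first derivative of the spline
```

In an actual BSHO run the vector c would instead come from the local linear systems described above.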
Then, we can prove the following theorem (where for notational simplicity, we restrict to m = 1), the statement of which is analogous to that on the convergence of the spline extension associated with BS methods [18].In the proof of the theorem, we relate to the quasi-interpolation approach for function approximation, the peculiarity of which consists of being a local approach.For example, in the spline context considered here, this means that only a local subset of a given discrete dataset is required to compute a B-spline coefficient of the approximant; refer to [22] for the details.Theorem 3. Let us assume that the assumptions on f done in Corollary 1 hold and that (28) holds.Then, the spline extension s 2R approximates the solution y of (1) with an error of order O(h 2R ) where h := max i=0,...,N−1 h i .Proof.Let s 2R denote the spline belonging to S 2R obtained by quasi-interpolating y with one of the rules introduced in Formula (5.1) in [22] by point evaluation functionals.From [22] (Theorem 5.2), under the quasi-uniformity assumption on the mesh distribution, we can derive that such a spline approximates y with maximal approximation order also with respect to all the derivatives, that is, where K is a constant depending only on R, M l and M u . On the other hand, by using the triangular inequality, we can state that: Thus, we need to consider the first term on the right-hand side of this inequality.On this concern, because of the partition of unity property of the B-splines, we can write: where c := (c −2R ; . . .; c (N+1)R+1 ) and c := (c −2R ; . . .; c (N+1)R+1 ). Now, for any function g ∈ C 2R [t 0 , t 0 + T], we can define the following linear functionals, where: and the vector ( α(R) i,r ; β(R) 1,i,r ; . . .; β(R) R,i,r ) has been defined in the previous section.Considering from Proposition 2 that s 2R , as well as any other spline belonging to S 2R can be written as follows, 29), we can deduce that: ) is defined in (26) as the r-th column of the inverse of the matrix G (i) .On the other hand, the entries of such nonsingular matrix do not depend on h, but because of the locality of the B-spline basis and of the R-th multiplicity of the inner knots, only on the ratios h j /h j+1 , j = i − 1, i, which are uniformly bounded from below and from above because of (28). Thus, there exists a constant C depending on M l , M u and R such that G (i) −1 ≤ C, which implies that the same is true for any one of the mentioned coefficient vectors.From the latter, we deduce that for all indices, we find: On the other hand, taking into account the result reported in Corollary 1 besides (29), we can easily derive that w (i) Approximation of the Derivatives The computation of the derivative u (j) n , j ≥ 2, from the corresponding u n is quite expensive, and thus, usually, methods not requiring derivative values are preferred.Therefore, as well as for any other multiderivative method, it is of interest to associate with BSHO methods an efficient way to compute the derivative values at the mesh points.We are exploiting a number of possibilities, such as: • using generic symbolic tools, if the function f is known in closed form; • using a tool of automatic differentiation, like ADiGator, a MATLAB Automatic Differentiation Tool [23]; • using the Infinity Computer Arithmetic, if the function f is known as a black box [6,7,10]; • approximating it with, for example, finite differences. 
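As a small illustration of the total derivatives involved (cf. the definition u_r^(2) = ∂f/∂y(u_r) f(u_r)) and of the last option in the list above, the sketch below evaluates the second and third total derivatives of the pendulum field f(q, p) = (p, -sin q). The finite-difference step used for the third derivative is an arbitrary choice; in practice a symbolic or automatic-differentiation tool would be used instead.

```python
import numpy as np

def f(y):
    q, p = y
    return np.array([p, -np.sin(q)])

def df(y):
    """Jacobian of f for the pendulum."""
    q, p = y
    return np.array([[0.0, 1.0],
                     [-np.cos(q), 0.0]])

def y2(y):
    """Second total derivative: y'' = (df/dy) f."""
    return df(y) @ f(y)

def y3(y, eps=1e-6):
    """Third total derivative: y''' = d/dt [ (df/dy) f ], here approximated by a
    directional finite difference of y2 in the direction of the vector field f."""
    return (y2(y + eps * f(y)) - y2(y - eps * f(y))) / (2 * eps)

u = np.array([0.7, 0.2])
print(f(u), y2(u), y3(u))
```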
As shown in the remainder of this section, when approximate derivatives are used, we obtain a different numerical solution, since the numerical scheme for its identification changes.In this case, the final formulation of the scheme is that of a standard linear multistep method, being still derived from (7) with coefficients in (11), but by replacing derivatives of order higher than one with their approximations.In this section, we just show the relation of these methods with a class of Boundary Value Methods (BVMs), the Extended Trapezoidal Rules (ETRs), linear multistep methods used with boundary conditions [24].Similar relations have been found in [25] with HO and the equivalent class of the super-implicit methods, which require the knowledge of functions not only at past, but also at future time steps.The ETRs can be derived from BSHO when the derivatives are approximated by finite differences.Let us consider the order four method with R = 2.In this case, the first derivative of f could be approximated using central differences: =: f i and u =: f i , is: after the approximation becomes: rearranging, we recover the ETR of order four: With similar arguments for the method of order six, R = 3, by approximating the derivatives with the order four finite differences: and: and rearranging, we obtain the sixth order ETR method: This relation allows us to derive a continuous extension of the ETR schemes using the continuous extension of the BSHO method, just substituting the derivatives by the corresponding approximations.Naturally, a change of the stepsize will now change the coefficients of the linear multistep schemes. Observe that BVMs have been efficiently used for the solution of boundary value problems in [26], and the BS methods are also in this class [16]. It has been proven in [21] that symmetric linear multistep methods are conjugate symplectic schemes.Naturally, in the context of linear multistep methods used with only initial conditions, this property refers only to the trapezoidal method, but when we solve boundary value problems, the correct use of a linear multistep formula is with boundary conditions; this makes the corresponding formulas stable, with a region of stability equal to the left half plane of C (see [24]).The conjugate symplecticity of the methods is the reason for their good behavior shown in [27,28] when used in block form and with a sufficiently large block for the solution of conservative problems. Remark 1.We recall that, even when approximated derivatives are used, the numerical solution admits a C R 2R-degree spline extension verifying all the conditions in (22), where all the u (j) n , j ≥ 2 appearing on the right-hand side have to be replaced with the adopted approximations.The exact solution of the rectangular system in ( 22) is still possible, since (7) with coefficients in (11) is still verified by the numerical solution u n , n = 0, . . ., N, by its derivatives u (1) n = f(u n ), n = 0, . . ., N and by the approximations of the higher order derivatives.The only difference in this case is that the continuous spline extension collocates at the breakpoints of just the given first order differential equation. 
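The finite-difference stencils quoted above are the standard second- and fourth-order central formulas; the exact stencils used by the authors are not reproduced in this extracted text, so the following order check is only an illustration of the kind of approximation that turns a BSHO scheme into an ETR.

```python
import numpy as np

# Standard central differences (used here as an illustration, not necessarily the
# authors' exact formulas): second-order and fourth-order approximations of g'(t).
def d1_order2(g, t, h):
    return (g(t + h) - g(t - h)) / (2 * h)

def d1_order4(g, t, h):
    return (-g(t + 2 * h) + 8 * g(t + h) - 8 * g(t - h) + g(t - 2 * h)) / (12 * h)

g, dg = np.sin, np.cos
t = 0.4
for h in [1e-1, 5e-2, 2.5e-2]:
    e2 = abs(d1_order2(g, t, h) - dg(t))
    e4 = abs(d1_order4(g, t, h) - dg(t))
    print(f"h={h:7.4f}  err(order 2)={e2:.3e}  err(order 4)={e4:.3e}")
# Halving h reduces the first error by roughly 4 and the second by roughly 16.
```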
Numerical Examples The numerical examples reported here have two main purposes: the first is to show the good behavior of BSHO methods for Hamiltonian problems, showing both the linear growth of the error for long time computation and the conservation of the Hamiltonian.To this end, we compare the methods with the symplectic Gauss-Runge-Kutta methods and with the conjugate symplectic EMHO methods.On the other hand, we are interested in showing the convergence properties of the spline continuous extensions.Observe that the availability of a continuous extension of the same order of the method is an important property.In fact for high order methods, especially for superconvergent methods like the Gauss ones, it is very difficult to find a good continuous extension.The natural continuous extension of these methods does not keep the same order of accuracy, without adding extra stages [29].Observe also that a good continuous extension is an important tool, for example for the event location. We report results of our experiments for BSHO methods of order six and eight.We recall that the order two BSHO method corresponds to the well-known trapezoidal rule, the property of conjugate symplecticity of which is well known (see for example [12]) and the continuous extension by the B-spline of which has been already developed in [18].The order four BSHO belongs also to the EMHO class, and it has been analyzed in detail in [10]. Kepler Problem The first example is the classical Kepler problem, which describes the motion of two bodies subject to Newton's law of gravitation.This problem is a completely integrable Hamiltonian nonlinear dynamical system with two degrees of freedom (see, for details, [30]).The Hamiltonian function: describes the motion of the body that is not located in the origin of the coordinate systems.This motion is an ellipse in the q 1 -q 2 plane, the eccentricity e of which is set using as starting values: and with period µ := 2π.The first integrals of this problem are: the total energy H, the angular momentum: M(q 1 , q 2 , p 1 , p 2 ) := q 1 p 2 − q 2 p 1 . Only three of the four first integrals are independent, so, for example, A 1 can be neglected. As in [10], we set e = 0.6 and h = µ/200, and we integrate the problem over 10 3 periods.Setting y := (q 1 , q 2 , p 1 , p 2 ), the error y j − y 0 1 in the solution is computed at specific times fixed equal to multiples of the period, that is at t j = 2πj, with j = 1, 2, . . .; the errors in the invariants have been computed at the mesh points t n = πn, n = 1, 3, 5 . ... 
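The Hamiltonian and the starting values are garbled in this extracted text; the sketch below uses the standard normalized Kepler Hamiltonian and the usual textbook initial condition giving eccentricity e and period 2π, which should be treated as assumptions rather than the authors' exact data. It also evaluates the two invariants monitored above (the energy and the angular momentum).

```python
import numpy as np

# Standard normalized Kepler Hamiltonian (assumed form): H = (p1^2 + p2^2)/2 - 1/|q|.
def H(y):
    q1, q2, p1, p2 = y
    return 0.5 * (p1**2 + p2**2) - 1.0 / np.hypot(q1, q2)

def angular_momentum(y):
    q1, q2, p1, p2 = y
    return q1 * p2 - q2 * p1

def kepler_f(y):
    """Vector field y' = f(y) for the Kepler problem, usable by any of the integrators above."""
    q1, q2, p1, p2 = y
    r3 = np.hypot(q1, q2) ** 3
    return np.array([p1, p2, -q1 / r3, -q2 / r3])

# Usual textbook starting values giving an ellipse of eccentricity e and period 2*pi
# (the concrete values are elided in this extract; this choice is an assumption).
e = 0.6
y0 = np.array([1.0 - e, 0.0, 0.0, np.sqrt((1.0 + e) / (1.0 - e))])
print("H(y0) =", H(y0), " M(y0) =", angular_momentum(y0))
```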
Figure 2 reports the obtained results for the sixth and eighth order BSHO (dotted line, BSHO6, BSHO8), the sixth order EMHO (solid lines, EMHO6) and the sixth and eighth order Gauss-Runge-Kutta (GRK) (dashed lines, GRK6, GRK8) methods.In the top-left picture, the absolute error of the numerical solution is shown; the top-right picture shows the error in the Hamiltonian function; the error in the angular momentum is drawn in the bottom-left picture, while the bottom-right picture concerns the error in the second component of the Lenz vector.As expected from a symplectic or a conjugate symplectic integrator, we can see a linear drift in the error y j − y 0 1 as the time increases (top left plot).As well as for the other considered methods, we can see that BSHO methods guarantee a near conservation of the Hamiltonian function, of the second component of the Lenz vector and of the angular momentum (other pictures).This latter quadratic invariant is precisely conserved (up to machine precision) by GRK methods due to their symplecticity property.We observe also that, as expected, the error for the BSHO6 method is 3 10 of the error of the EMHO6 method. To check the convergence behavior of the continuous extensions, we integrated the problem over 10 periods starting with stepsize h = µ/N, N = 100.We computed a reference solution using the order eight method with a halved stepsize, and we computed the maximum absolute error on the doubled grid.The results are reported in Table 3 for the solution and the first derivative and clearly show that the continuous extension respects the theoretical order of convergence. Non-Linear Pendulum Problem As a second example, we consider the dynamics of a pendulum under the influence of gravity.This dynamics is usually described in terms of the angle q that the pendulum forms with its stable rest position: q + sin q = 0, where p = q is the angular velocity.The Hamiltonian function associated with ( 31) is: An initial condition (q 0 , p 0 ) such that |H(q 0 , p 0 )| < 1 gives rise to a periodic solution y(t) = (q(t), p(t)) corresponding to oscillations of the pendulum around the straight-down stationary position.In particular, starting at y 0 = (q 0 , 0) , the period of oscillation may be expressed in terms of the complete elliptical integral of the first kind as: For the experiments, we choose q 0 = π/2; thus, the period µ is equal to 7.416298709205487.We use the sixth and eighth order BSHO and GRK methods and the sixth order EMHO method with stepsize h = µ/20 to integrate the problem over 2 • 10 4 periods.Setting y = (q, p), again, the errors y j − y 0 in the solution are evaluated at times that are multiples of the period µ, that is for t j = µj, with j = 1, 2, . . .; the energy error H(y n ) − H(y 0 ) has been computed at the mesh points t n = 11hn, n = 1, 2, . ... 
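The expression for the pendulum period in terms of the complete elliptic integral is not legible in this extract, but the standard formula µ = 4K(k) with modulus k = sin(q0/2) reproduces the value quoted above. In SciPy's convention (scipy.special.ellipk takes the parameter m = k^2) this reads:

```python
import numpy as np
from scipy.special import ellipk

q0 = np.pi / 2                      # initial amplitude used in the experiments
m = np.sin(q0 / 2.0) ** 2           # SciPy's ellipk uses the parameter m = k^2
period = 4.0 * ellipk(m)
print(period)                       # 7.416298709205487, matching the value in the text
```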
Figure 3 reports the obtained results.In the left plot, we can see that, for all the considered methods, the error in the solution grows linearly as time increases.A near conservation of the energy function is observable in both pictures on the right.The amplitudes of the bounded oscillations are similar for both methods, confirming the good long-time behavior properties of BSHO methods for the problem at hand.To check the convergence behavior of the continuous extensions, we integrated the problem over 10 periods starting with stepsize h = µ/N, N = 10.We computed a reference solution using the order eight method with a halved stepsize, and we compute the maximum absolute error on the doubled grid.The results are reported in Table 4 for the solution and the first derivative and clearly show, also for this example, that the continuous extension respects the theoretical order of convergence. Conclusions In this paper, we have analyzed the BSHO schemes, a class of symmetric one-step multi-derivative methods firstly introduced in [14,15] for the numerical solution of the Cauchy problem.As a new result, we have proven that these are conjugate symplectic schemes, thus suited to the context of geometric integration.Moreover, an efficient approach for the computation of the B-spline form of the spline extending the numerical solution produced by any BSHO method has been presented.The spline associated with the R-th BSHO method collocates the differential equation at the mesh points with multiplicity R and approximates the solution of the considered differential problem with the same accuracy O(h 2R ) characterizing the numerical solution.The relation between BSHO schemes and symmetric linear multistep methods when the derivatives are approximated by finite differences has also been pointed out. Future related work will consist in studying the possibility of associating with the BSHO schemes a dual quasi-interpolation approach, as already done dealing with the BS linear multistep methods in [16,18,31]. Figure 2 . Figure 2. Kepler problem: results for the sixth (BSHO6, red dotted line) and eighth (BSHO8, purple dotted line) order BSHO methods, sixth order Euler-Maclaurin method (EMHO6, blue solid line) and sixth (Gauss-Runge-Kutta (GRK6), yellow dashed line) and eighth (GRK8-green dashed line) order Gauss methods.(Top-left) Absolute error of the numerical solution; (top-right) error in the Hamiltonian function; (bottom-left) error in the angular momentum; (bottom-right) error in the second component of the Lenz vector. Figure 3 . Figure 3. Nonlinear pendulum problem: results for the Hermite-Obreshkov method of order six and eight (BSHO6, red, and BSHO8, purple dotted lines), for the sixth order Euler-Maclaurin (EMHO6, blue solid line) and Gauss methods (GRK6, yellow, and GRK8, green dashed lines) applied to the pendulum problem.(Left) plot: absolute error of the numerical solution; (upper-right) and (bottom-right) plots: error in the Hamiltonian function for the sixth order and eighth order integrators, respectively. , Corollary 1. Let us assume that f ∈ C 2R+1 (D), where D := {y ∈ IR m | ∃t ∈ [t 0 , t 0 + T] such that y − y(t) 2 ≤ L b }, with L b > 0.Then, there exists a positive constant h b such that if max Table 2 . Coefficients of the modified differential equations and Bernoulli numbers. Table 3 . Kepler problem: maximum absolute error of the numerical solution and its derivative computed for 10 periods.
9,468.2
2018-05-24T00:00:00.000
[ "Mathematics" ]
Effects of repeated cigarette smoke extract exposure over one month on human bronchial epithelial organotypic culture Cigarette smoke is a known risk factor for inflammatory diseases in the respiratory tract, and inflammatory exacerbation is considered pivotal to the pathogenesis of these diseases. Here, we performed two repeated exposure studies in which we exposed human bronchial epithelial tissues in an organotypic culture model to cigarette smoke extract (CSE); the first study was conducted over a four-day period to determine the suitable dose range for the extended exposure period, and the second was a one-month exposure study to elucidate the exposure-by-exposure effects in bronchial tissues. We focused on matrix metalloproteinase (MMP)-9 and -1/3 and the inflammatory cytokines interleukin (IL)-8 and growth factor related oncogene to evaluate the transition into an inflammatory state. Even at CSE doses with no or low toxicity for a single exposure, the repetition of exposure induced cumulative effects on both the inflammatory responses, specifically the IL-8 and MMPs levels, and tissue morphology. Interestingly, untreated controls initially had relatively high baseline levels of these secreted proteins; these levels gradually declined, after which they showed periodic level changes, suggesting an acclimation period may be needed for this system. These results demonstrate the usability of this system for the elucidation of sub-chronic effects in vitro. Introduction Cigarette smoke (CS) is a major risk factor for airway diseases, such as chronic obstructive disease (COPD) [1,2]. In the pathogenesis of such diseases, cells and tissues in the respiratory tract directly interact with the reactive substances in CS, which elicits responses that lead to inflammation. To reproduce and investigate the pathogenesis of inflammatory airway diseases, many in vivo studies have been conducted with rodent models, but their results leave some uncertainties about what effects may be due to interspecies differences [3]. Meanwhile, in vitro models of respiratory tract tissue have improved over the past decades, and these enhanced models now enable investigations of the effects of test substances to be conducted using human cells that resemble human in vivo tissues. The model tissues consist of functionally differentiated ciliated cells, basal cells, and club cells, and these cells form a pseudostratified columnar epithelium [4]. Previous studies showed the similarity between this in vitro model and in vivo epithelia, in terms of their morphologies and transcriptomes. Pezzulo et al. demonstrated that transcriptional profiles of organotypic culture of bronchial epithelium are comparable with those of airway epithelia in vivo [5], and Mathis et al. also reported that such organotypic bronchial epithelia show a similar transcriptional perturbation following exposure to CS compared with human lungs that have inhaled CS [6]. The organotypic culture model has the additional advantage of having a long-shelf life. Baxter et al. demonstrated that, over a few months, organotypic cultured tissues retain their phenotype, including not only morphology but also expression of xenobiotic metabolizing genes and key mucociliary protein markers [7]. Furthermore, Anderson et al. previously conducted work on repeated exposure to limonene and its reaction products using organotypic cultured tissues, and they found increases of some inflammatory cytokines that were not observed following a single exposure in cell lines [8]. 
These results suggest that organotypic culture models are useful tools for the elucidation of subchronic effects. E-mail addresses<EMAIL_ADDRESS>(S. Ito<EMAIL_ADDRESS>(K. Ishimori<EMAIL_ADDRESS>(S. Ishikawa). T repetition increased [9]. We hypothesized that the augmentation could be replicated by exposure to non-toxic dose of cigarette smoke extract (CSE) containing smoke constituents other than those in the gas and vapor phase, and here, to determine suitable repeated exposure conditions of cigarette smoke extract (CSE), we first performed four-day exposures to CSE at intervals of 24 h. We then performed a one-month CSE exposure study with three exposures per week and analyzed the alterations of several inflammatory mediators. For comparative purposes, we also observed the spontaneous inflammatory responses in organotypic culture. To our knowledge, this is the first report to elucidate the long-term repeated dose effects of CSE on inflammatory exacerbation with organotypic cultured bronchial tissues, and to investigate the transition of inflammatory state in the long term cultivation without any exposures. We believe these findings will contribute to the further use of this in vitro model for complex studies. Cell culture The organotypic cultured bronchial epithelial tissues (MucilAir) and culture medium (MucilAir culture medium) were purchased from Epithelix Sàrl (Geneva, Switzerland). Cell culture was performed in accordance with the manufacturer's instructions. Separate single donors were used for the four-day exposure study and one-month exposure study; the donors were a 41-year-old Caucasian female and a 28-year old Caucasian male, respectively. Preparation of the cigarette smoke extract (CSE) 3 R4F reference cigarettes were purchased from the University of Kentucky and conditioned under 22 ± 2°C and 60 ± 5% relative humidity for at least 48 h before use. Smoke of the 3R4F was generated with a Borgwaldt RM20H smoking machine (Hamburg, Germany) under the Health Canada Intense smoking regimen (a 55-ml puff taken over 2 s, repeated every 30 s) [10]. The total particulate matter was collected on a 45-mm diameter Cambridge filter pad, and then it was extracted with dimethyl sulfoxide purchased from Sigma Aldrich (St. Louis, MO, USA). The initial concentration of total particulate matter was adjusted to 40 mg/ml and appropriately diluted with MucilAir culture medium for subsequent in vitro exposures. Repeated exposure study design Two individual repeated CSE exposure studies were conducted: a four-day exposure study with intervals of 24 h and a one-month repeated exposure study with three exposures per week for 29 days. The MucilAir tissues were cultivated at 37°C in 5% CO 2 and a humidity of more than 95% beginning at least 96 h prior to the initial CSE exposure. Schematics of the study design are shown in Fig. 1. The tissues were exposed to CSE from the basolateral compartment. Tissues treated with only medium changes were used as non-treatment controls. CSE concentration doses were 5, 10, 20, and 50 μg/ml and 1, 5, 10, and 20 μg/ ml CSE for the four-day and one-month exposure study, respectively. The media were collected at each medium change and exposure time and were subjected chemo/cytokine measurement and zymography. The tissue cultures on the day of terminal harvest were subjected to histological evaluation. Histological analysis The tissues were fixed in 4% paraformaldehyde (Wako Pure Chemical Industries, Tokyo, Japan) at 4°C for at least 24 h and then embedded in paraffin. 
Tissues was sectioned into slices with a 5-μm thickness and stained with hematoxylin and eosin. Measurement of chemo/cytokines Concentrations of chemo/cytokines were determined with the Human Cytokine Magnetic Kit (Merck Millipore, Billerica, MA, USA) using the Bio-plex 200 (Bio-Rad, Hercules, CA, USA). Growth factor related oncogene (GRO), interleukin (IL)-8, Interferon gamma-induced protein-10 (IP-10), Monocyte Chemotactic Protein-1 (MCP-1), macrophage inflammatory protein-1β (MIP-1β), regulated on activation, normal T cell expressed and secreted (RANTES), and stromal cell-derived factor 1α (SDF-1α) were measured in the samples obtained from the four-day repeated CSE exposure study. Only GRO and IL-8 were measured in the samples obtained from the one-month repeated CSE exposure study because these chemokines were observed to be highly secreted from the tissues and had shown dose-dependent increases in the preceding four-day CSE exposure study. Gelatin zymography The samples obtained from the one-month CSE exposure study were mixed with non-reducing sodium dodecyl sulfate (SDS) sample buffer for polyacrylamide gel electrophoresis. To detect MMP-2 and -9, gelatin zymography was performed. The proteins in the samples were separated with SDS gel electrophoresis in 7.5% acrylamide gels containing 1.0 mg/ml gelatin. After the electrophoresis, the gels were washed twice with 2.5 mM Tris-HCl buffer at pH 7.5 containing 0.5% Triton X-100 and 150 mM NaCl and then incubated in 2.5 mM Tris-HCl buffer at pH 7.5 containing 20 mM NaCl and 10 mM CaCl 2 for 20 h. The gels were subsequently stained with 0.1% Coomassie blue and photographed using a LAS-4000. The intensities of corresponding bands for each MMP were quantified using Image Quant TL (GE Healthcare, Little Chalfont, UK). The intensity values of the untreated control at each exposure day were used for normalization. Chemicals used for zymography were purchased from Wako Pure Chemical Industries. Casein zymography To detect putative MMP-1/3, 0.5 mg/ml casein-containing gels were used. SDS gel electrophoresis was performed at 4°C, followed by 60min pre-electrophoresis at room temperature. The remaining experimental procedures were the same as those described above for the gelatin zymography, except for the incubation time for casein zymography, which was 60 h. Statistical analysis All presented data are shown as the means and standard errors of triplicate cell culture inserts. Parametric one-way analyses of variance followed by Dunnett's multiple comparison tests were performed to identify statistically significant differences (set as p < 0.05) compared with the untreated control. Results and discussion We first performed a four-day CSE exposure study to determine the suitable dose range for the subsequent one-month repeated CSE exposure study. Schematics of the study designs are shown in Fig. 1A. Histological evaluation revealed that the tissues exposed to CSE at any of the tested dosages in the four-day exposure study had no obvious morphological alterations, suggesting that these doses were in the low or no toxicity range (Fig. 2). Additionally, we investigated the cytokine secretion during this short CSE exposure study, and resulted that only IL-8 showed a clear dose response over the experimental period (Fig. 3B), and GRO and RANTES showed a weak dose response and were only partly statistically significant ( Fig. 3A and E). Other tested cytokines showed no or weak dose response without statistical significance (Fig. 3C, D, F and G). 
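The statistical procedure described in the Statistical analysis paragraph above (one-way ANOVA followed by Dunnett's many-to-one comparison against the untreated control, significance at p < 0.05, triplicate culture inserts) can be reproduced along the following lines. This is a sketch with made-up readings, not the paper's data, and it assumes SciPy ≥ 1.11 for scipy.stats.dunnett.

```python
import numpy as np
from scipy import stats

# Hypothetical IL-8 readings (pg/ml) from triplicate culture inserts; not the paper's data.
control = np.array([410.0, 455.0, 432.0])
doses = {
    "5 ug/ml":  np.array([480.0, 510.0, 495.0]),
    "10 ug/ml": np.array([620.0, 655.0, 600.0]),
    "20 ug/ml": np.array([900.0, 870.0, 940.0]),
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(control, *doses.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's many-to-one comparison against the untreated control (SciPy >= 1.11).
res = stats.dunnett(*doses.values(), control=control)
for name, p in zip(doses, res.pvalue):
    flag = "significant" if p < 0.05 else "n.s."
    print(f"{name}: p = {p:.4f} ({flag})")
```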
It is suggested that the 5-50 μg/ml of CSE is in no or low toxic range, because no apparent toxic effects were observed based on the morphology, but the increased IL-8 levels indicated that inflammation still occurred in the tissues at these doses. Therefore, we decided that the same dose range would be suitable for further extension of the repeated CSE exposure period. We next performed a one-month CSE exposure study with three exposures per week, for a total of 13 repeated exposures. Since IL-8 is known to be released from the cells in respiratory tract during both acute and chronic inflammation [11,12], and showed a dose-responsive increase in the short term exposure study (Fig. 3B), we analyzed the secretion of IL-8 to monitor the inflammatory state transition in the one-month exposure study. Additionally, GRO, which is also known to be a biomarker of acute lung injury [13], was measured because as well as IL-8, the secretion level of GRO was significantly higher than the other cytokines. As results, IL-8 levels gradually increased after exposure to more than 10 μg/ml of CSE; no significant induction of IL-8 was observed at day 3, but IL-8 levels finally reached a maximum of 28fold higher than the non-treatment control at day 24 ( Fig. 4A and B). A slight augmentation of GRO levels was found in the case of exposure to 20 μg/ml CSE, and a clear CSE dose dependency of GRO levels was observed after day 18 ( Fig. 4C and D), suggesting that the tissues became susceptible to CSE after repeated exposure. Importantly, IL-8 and GRO are considered to act as chemoattractant proteins for neutrophils, which has pivotal roles in the pathogenesis of airway inflammatory diseases by releasing granule proteins, and resulting in further inflammatory exacerbation in respiratory tract [14][15][16]. Therefore, it is suggested that the augmentation of these cytokines observed in this study could recapitulate the primary responses of cell-specific inflammation that leads to tissue-level inflammatory exacerbation that involves immune cells. We also examined MMP secretion because certain kinds of MMPs are known to be related with inflammatory responses [17,18]. We performed gelatin and casein zymographies to evaluate the secreted amounts of MMP-9 and putative MMP-1/3, respectively. The results of both zymographies indicate that the secreted amounts of MMPs increased as the number of exposure increased (Fig. 5A-D). Surprisingly, augmentation of MMPs was observed over time, even in the samples exposed to only 1 μg/ml CSE, suggesting that MMPs are more sensitively induced by CSE than the other tested inflammatory cytokines. Normally, MMPs contribute to tissue remodeling [19], and previous work has demonstrated that they are induced by acute injury [20]. Thus, the induction of such MMPs can be elicited by acute effects and may be causative to inducing inflammatory cytokines. Furthermore, our histological analysis also revealed potentially cumulative effects of CSE exposure. In contrast to the results of the four-day CSE exposure study, focal alterations of morphology were observed in the tissues exposed to 20 μg/ml CSE over a one-month period; in these areas, there appeared to be abnormal proliferation and metaplastic alteration (Fig. 6). Together, these results suggest that repeated CSE exposure elicited cumulative effects on organotypic cultured bronchial epithelial tissues. 
The present findings are in agreement with our previous report, which demonstrated that pro-inflammatory cytokine secretion is augmented in tissues under chronic, direct exposure to whole CS [9]. The test substance used here is a CS extract, which contains other chemicals than the ones present in the gas and vapor phases; therefore, inflammatory exacerbation may be attributable to such chemicals. We additionally investigated the changes from the baseline levels of cytokines and MMPs in non-treated controls during the one-month CSE exposure experimental period. These measurements revealed that the secretions of inflammatory mediators from the non-treated controls were relatively high at the start of the study but declined by day 8 (Fig. 7A-D). Based on this finding, it is possible that although the organotypic cultured bronchial epithelial tissues are shipped in a fully differentiated state, they may become damaged during shipping. Furthermore, after day 8, periodic increases and decreases were found in the secretions of IL-8, GRO, and MMP-9, and these changes seemed to be synchronized (Fig. 7A-C). Thus, there may be some non-24 h circadian-like rhythm in the tissues that can show spontaneous inflammatory responses. These results are noteworthy because they suggest that the timing of exposure is likely critical for the interpretation of study results, especially for single-exposure studies. Additionally, to avoid confounding the data, we propose that organotypic cultured bronchial epithelial tissues be subjected to an acclimation period during which the tissues are cultured 6. Histology of organotypic cultured bronchial tissues after a one-month period of repeated CSE exposure. Hematoxylin and eosin staining was performed after exposing the organotypic bronchial tissue cultures to each concentration of cigarette smoke extract (CSE) for one month. (A-D) Representative histology images from tissue exposed to 0 (A), 5 (B), 10 (C), or 20 (D) μg/ml CSE. (E-F) Images of abnormal morphology focally observed in tissue exposed to 20 μg/ml CSE. The scale bar indicates 100 μm. Histology images from tissue exposed to 1 μg/ml CSE is not prepared due to technical failure. with only medium changes until their inflammatory responses triggered by shipping calm down. Conclusions The results of these two repeated CSE exposure studies indicate that the organotypic cultured bronchial tissues show cumulative effects from repeated exposure to CSE doses with low or no toxicity and are therefore a useful tool for the evaluation of tissue-specific sub-chronic effects. In addition, our findings also support the implementation of an acclimation period for organotypic cultured bronchial tissues before use to avoid potentially confounding factors.
3,537.8
2018-08-18T00:00:00.000
[ "Biology", "Medicine" ]
Impact of Using Chevrons Nozzle on the Acoustics and Performances of a Micro Turbojet Engine : This paper presents a study regarding the noise reduction of the turbojet engine, in particular the jet noise of a micro turbojet engine. The results of the measurement campaign are presented followed by a performances analysis which is based on the measured data by the test bench. Within the tests, beside the baseline nozzle other two nozzles with chevrons were tested and evaluated. First type of nozzle is foreseen with eight triangular chevrons, the length of the chevrons being L = 10 percentages from the equivalent diameter and an immersion angle of I = 0 deg. For the second nozzle the length and the immersion angle were maintained, only the chevrons number were increased at 16. The micro turbojet engine has been tested at four different regimes of speed. The engine performances were monitored by measuring the fuel flow, the temperature in front of the turbine, the intake air flow, the compression ratio, the propulsion force and the temperature before the compressor. In addition, during the testing, the vibrations were measured on axial and radial direction which indicate a normal functioning of the engine during the chevron nozzles testing. Regarding the noise, it was concluded that at low regimes the noise doesn’t presents any reduction when using the chevron nozzles, while at high regimes an overall noise reduction of 2–3 dB(A) was achieved. Regarding the engine performances, a decrease in the temperature in front of the turbine, compression ratio and the intake air and fuel flow was achieved and also a drop of few percent of the propulsion force. Introduction Nowadays, one of the main problems with which aviation is faced is noise pollution [1] and the need to heavily reduce the noise exposure of the areas adjacent to airports. In aviation, the most important noise sources are the take-off and landing phases of flight. For most commercial jets, the primary noise sources are the engines. The secondary one originates in the airflow around the aircraft (aerodynamic source) [2]. In terms of aircraft noise sources [3], the engine noise has one of the highest proportions [4] where the jet noise has an important contribution. There are many studies and research projects treating the jet noise reduction [5,6], but the simplest technique is to manufacture chevrons on the nozzle without having high loss of the propulsion force [7][8][9][10][11]. By definition, the chevrons are dynamic gas equipment that, by initiating the vortical flow, smoothens the mixture of the two flows, having different velocities, and decreases the resultant noise of those flow interactions [12]. These solutions were proposed for the first time by the Lighthill in 1952 [13], and since then, these were studied and developed, and complex geometries resulted. For example, the fluid chevrons [14,15], stepped nozzles [16], and nozzle are manufactured from smart material [17]. As was mentioned before, these solutions were heavily studied, but there are no studies in which the nozzle with chevrons, to be mounted on a turbojet engine, and the performance influences are assessed [18]. One might observe that, in specialized literature, we find many studies of chevrons' acoustic effects. Most of them present studies of chevrons' effects on nozzles, or on simple pipes working with air, but there is a missing part when we think about the effects of chevrons on the turbojet performances. 
The present paper proposes a study of the chevrons' effects on the acoustics and on the turbojet performances, using a test bench equipped with a micro turbojet engine. This paper will focus on noise reduction of the micro turbojet engine JET CAT P80, which is used on drones [23] or for academic purposes [24]. The paper's novelty consists in the acoustic and performance evaluation of this engine tested with three different nozzles (the straight nozzle considered as the reference and two nozzles with chevrons, one with 8 chevrons and the second one with 16 chevrons, both having a chevron length of 10 percent of the equivalent diameter and an immersion angle of zero degrees) at different functioning regimes of speed. The laboratory tests were focused on the acoustic evaluation of each nozzle and on the engine parameters (compression ratio, air and fuel flow, temperature after the compressor and in front of the turbine, and the vibrations in the axial and radial directions). Experimental Test Bench The measurements were performed on the micro turbojet engine Jet CAT P80 [25], which is in the endowment of the aerospace engineering faculty of the Polytechnic University of Bucharest. The acoustic and vibration measurements were performed using the multi-channel acquisition system Orchestra from 01 dB; the vibrations were measured in the axial and radial directions with two PCB 352C03 accelerometers mounted on the engine supports, and the noise was recorded with three GRAS 40 AQ microphones. The microphones were placed at 0.4 m from the engine nozzle and 0.4 m apart from each other, as is presented in Figure 1.
A detailed study with noise assessment of this micro turbojet engine can be found in reference [26], where experimental studies for two turbojet engine regimes were carried out in the anechoic chamber of the Acoustic and Vibration Laboratory of the National Research and Development Institute for Gas Turbines COMOTI, and the directivity patterns of both regimes were obtained from the acoustic signals of the microphones disposed around the engine. As is presented in Figure 1, the entire engine is held by two supports mounted on a sliding board that pushes into a force transducer. On these supports, the two accelerometers were placed, Acc1 in the axial direction and Acc2 in the radial direction, and the microphones were mounted in a straight line, with Microphone 1 situated to the right of the nozzle. The measurements were performed with the engine running in four different regimes (35,000 RPM, 55,000 RPM, 101,000 RPM, and 116,000 RPM) for the three tested nozzles. Each regime was performed by keeping the speed constant for a duration of 1 min, the measured parameters being averaged over this time period. Designing and Manufacturing the Chevrons Nozzles The main parameters which define the chevrons are the number of chevrons N, the length L, and the immersion angle I [27]. According to the study presented in [28], it is recommended that the chevrons' length be between 5 and 10 percent of the equivalent diameter. The first step of the study was to scan the reference nozzle with an ATOS 5 M 3D optical scanner, which yielded the digital 3D model presented in Figure 2. The chevrons were designed on the 3D reference model, the geometrical parameters of this nozzle type being: N, the number of chevrons; L, the chevrons' length; and I, the immersion angle of the chevrons inside the gas jet. In this paper, only the number of chevrons is assessed, the other parameters being kept constant.
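The sizing rule quoted above (chevron length between 5 and 10 percent of the equivalent diameter) amounts to simple arithmetic; applied to the 44.3 mm baseline nozzle used here it gives, purely as an illustration:

```python
D_EQ_MM = 44.3                                   # equivalent diameter of the baseline nozzle

def chevron_length_mm(d_eq_mm, fraction=0.10):
    """Chevron length as a fraction of the equivalent diameter (5-10% recommended in [28])."""
    return fraction * d_eq_mm

for frac in (0.05, 0.10):
    print(f"L = {100 * frac:.0f}% of D_eq -> {chevron_length_mm(D_EQ_MM, frac):.2f} mm")
# The nozzles tested here use L = 10% of D_eq (about 4.4 mm), I = 0 deg, and N = 8 or 16 chevrons.
```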
Designing and Manufacturing the Chevrons Nozzles The main parameters which define the chevrons are the number of chevrons N, the length L, and the immersion angle I [27]. According to the study presented in [28], it is recommended that the chevron length be between 5 and 10 percent of the equivalent diameter. The first step of the study was to scan the reference nozzle with the ATOS 5M 3D optical scanner; the resulting digital 3D model is presented in Figure 2. The chevrons were designed on the 3D reference model, the geometrical parameters of this nozzle type being: N, the number of chevrons; L, the chevron length; and I, the immersion angle of the chevrons into the gas jet. In this paper only the number of chevrons is assessed; the other parameters are kept constant. The tested nozzles are presented in Figure 3: the first is the reference nozzle with an equivalent diameter of 44.3 mm; the second nozzle has 8 chevrons with a length of 10% of the equivalent diameter of the baseline nozzle (about 4.4 mm) and an immersion angle of 0°; the third nozzle has 16 chevrons, with the other parameters the same as nozzle 2. Acoustic Evaluation Results In the present study, it is investigated how the noise reduction produced by the hot jet gases through the two chevron nozzles compares with the reference nozzle. It must be mentioned that, while the micro engine is running, besides the jet noise there are many other noise sources: air circulation at the intake, turbulence generated by the air passage through the compressor stage, mechanical noise generated by the shaft speed, rotor-stator interaction known as the blade pass frequency (BPF), combustion noise, etc. One of the noise types identified in this study is the combustion noise, which is composed of two components: direct noise and indirect noise. The direct combustion noise is produced by the fluctuations of the heat release inside the combustion chamber in the flame region. The indirect noise is produced by the entropy waves induced by the unsteady combustion process and vorticity. These two noise sources propagate through (potentially) multiple turbine blade stages and finally radiate into the far field. The noise of interest in this study is the jet noise, which is produced by the turbulent mixing of the high-speed gases with the ambient air [29]. In order to compare the noise reduction produced only by the jet gases with the chevron nozzles, it is necessary to identify the acoustic spectral components of the sources mentioned above and then to filter out those that are not of interest. The engine parameters and the acoustic and vibration signals were recorded and post-processed, and their centralization is presented further on. The acoustic and vibration raw signals were processed using the Fast Fourier Transform (FFT) in the frequency domain and by averaging over the entire time duration.
For a better comparison and graphical representation, the noise spectra obtained for the three microphones were averaged according to Equation (1), so that a single spectrum results for each regime and tested configuration, where Lp is the averaged sound pressure level, n is the number of microphones, and Li is the sound pressure level at each microphone in dB. The first stage of noise signal filtering was performed by applying a band-stop filter on the spectral components corresponding to the shaft speed, from which the compressor and turbine BPF and their harmonics result. In Figure 4, the unfiltered and filtered averaged sound spectra for each regime and nozzle configuration are presented.
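Equation (1) is only described in words here. The following is a minimal sketch of this processing chain, assuming the usual energetic average over microphones for Equation (1), Welch spectra, and a Q = 30 notch filter at the shaft frequency and its harmonics; the function name, parameter values, and synthetic signals are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.signal import welch, iirnotch, filtfilt

def averaged_spl_spectrum(signals, fs, rpm, n_harmonics=5, p_ref=20e-6):
    """Average the SPL spectra of several microphones (an Equation (1)-style
    average) after notching out the shaft-speed line and its harmonics.

    Assumptions (not from the paper): energetic average
    Lp = 10*log10(mean_i 10^(Li/10)), Welch spectra, Q = 30 notch filters.
    """
    f0 = rpm / 60.0                        # shaft rotation frequency in Hz
    spectra_db = []
    for x in signals:                      # one time series per microphone
        for k in range(1, n_harmonics + 1):
            b, a = iirnotch(w0=k * f0, Q=30.0, fs=fs)
            x = filtfilt(b, a, x)          # band-stop around each harmonic
        freqs, pxx = welch(x, fs=fs, nperseg=4096)      # Pa^2/Hz
        spectra_db.append(10 * np.log10(pxx / p_ref**2))
    # Energetic average over the microphones, per frequency bin
    lp = 10 * np.log10(np.mean(10 ** (np.array(spectra_db) / 10), axis=0))
    return freqs, lp

# Example with three synthetic channels at 51.2 kHz, regime 4 (116,000 RPM)
fs = 51200
t = np.arange(0, 1.0, 1 / fs)
mics = [np.sin(2 * np.pi * (116000 / 60) * t) + 0.1 * np.random.randn(t.size)
        for _ in range(3)]
freqs, lp = averaged_spl_spectrum(mics, fs, rpm=116000)
```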
From the unfiltered noise spectra, it is observed that the tonal components corresponding to the engine speed, the BPF components, and their harmonics have amplitudes that reach almost 110 dB (especially at the high-speed Regimes 3 and 4), and applying the band-stop filter leads to a broadband acoustic spectrum. Comparing the spectra from all regimes, a broadband component with a peak at a frequency of around 200 Hz is identified; regardless of engine speed, its peak frequency does not change significantly (at most an 80 Hz shift), while its amplitude increases from 83 dB at Regime 1 to 95 dB at Regime 4. It is assumed that this component is produced by the combustion, and it is not taken into account in the overall sound pressure level calculation. The overall vibration and sound pressure levels (OASPL) at each microphone, for each regime and nozzle, are presented in Table 1. The acoustic OASPL were computed from the filtered acoustic spectra presented in Figure 4 over the frequency range 500-20,000 Hz. Based on the OASPL at each microphone presented in Table 1, the averaged OASPL was calculated for each regime and nozzle configuration, the results being presented in Figure 5. Based on the OASPL presented in Figure 5, the noise reduction of the chevron nozzles was computed, taking the baseline nozzle noise as reference. The noise reduction for each chevron nozzle, at each regime, is presented in Figure 6. For the first two regimes, the chevron nozzles lead to a slight increase in the noise, while at higher regimes, with high gas velocities, the chevrons begin to become effective and the noise is reduced by almost 3 dB. For the first two regimes, which are not stable from the functioning point of view, the chevrons no longer reduce the noise; on the contrary, it can be observed that the noise level shows a small increase of almost 0.5 dB. One assumption for the noise increase at the first two regimes is that, by shortening the nozzle through the chevron cut-outs, the noise from inside the engine propagates more easily, the larger surface of the reference nozzle acting as an obstacle. Another assumption is that the noise increase is related to the modification of the engine work line.
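The OASPL values and the reductions reported in Table 1 and Figures 5 and 6 can be illustrated with a short helper. The energetic band summation over 500-20,000 Hz is the standard convention and is assumed here; the function names are illustrative.

```python
import numpy as np

def oaspl(freqs, lp_db, f_lo=500.0, f_hi=20000.0):
    """Overall sound pressure level from a narrow-band spectrum in dB,
    energetically summed over the 500-20,000 Hz band used in the paper."""
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return 10 * np.log10(np.sum(10 ** (lp_db[band] / 10)))

def noise_reduction(freqs, lp_baseline_db, lp_chevron_db):
    """Reduction of a chevron nozzle relative to the baseline nozzle;
    positive values mean the chevron configuration is quieter."""
    return oaspl(freqs, lp_baseline_db) - oaspl(freqs, lp_chevron_db)
```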
An important result is that the nozzle with N = 16 has a better impact on reducing the sound pressure than the one with N = 8. The analysis of the vibration signals from Table 1 indicates that the vibration levels are higher in the axial direction than in the radial direction, and the comparison between the tested nozzles does not indicate a specific rule for the variation of the vibrations with the engine speed. For the vibrations, it is necessary to perform an FFT analysis of the raw signals to identify the spectral components that produce them. In Figure 7, the vibration spectra in the axial direction for each regime and tested nozzle are presented.
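For orientation, the spectral lines tied to the shaft speed are straightforward to predict from the regime RPM; the BPF would additionally require the compressor and turbine blade counts, which are not given in the text. A small illustrative helper:

```python
# Expected shaft-speed spectral lines for the four tested regimes.
# The BPF would be blade_count * shaft frequency; the JetCat P80 blade
# counts are not given here, so only the shaft lines are listed.
regimes_rpm = [35_000, 55_000, 101_000, 116_000]
for rpm in regimes_rpm:
    f_shaft = rpm / 60.0                           # fundamental in Hz
    harmonics = [round(k * f_shaft) for k in range(1, 4)]
    print(f"{rpm} RPM -> shaft line at {f_shaft:.0f} Hz, harmonics {harmonics}")
```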
The vibration spectra highlight that the main vibration source corresponds to the engine speed for each regime. For the first two regimes, the speed component has a higher amplitude for the reference nozzle, while for the last two regimes the chevron nozzle configurations show higher vibration. Since the vibrations are caused mainly by the shaft speed, replacing the baseline nozzle with a chevron nozzle cannot affect the mechanical part of the engine, and all these vibration variations do not endanger the stability and integrity of the turbo engine. Micro Turbojet Engine Experimental Performances Evaluation The measured parameters from Table 2 were used in the performance evaluation of the chevron nozzles. The temperature in front of the micro turbojet engine was also measured and considered as the temperature at intake level T1. Using these parameters, the specific consumption and the compressor efficiency were computed. The relations used in the calculation of the specific consumption and the compressor efficiency are presented below [30]. The specific consumption S is defined in Equation (2), and the compressor efficiency ηc is defined in Equation (3), where cp is the specific heat capacity and k is the adiabatic exponent. The measured engine parameters presented in Table 2 were used for the calculation of the specific consumption and the compressor efficiency. During the measurements, the environmental temperature T1 = 290 K was measured near the engine intake.
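Since only the symbols are described in the text, it may help to record one common textbook form of these relations. The following is a hedged sketch rather than the authors' Equations (2) and (3); the exact conventions of reference [30] may differ. Here F is the measured thrust force, the fuel mass flow is written as a flow rate, πc is the compression ratio, and T2 is the temperature after the compressor.

```latex
% Hedged sketch of Equations (2)-(3); conventions assumed, not from the paper.
S = \frac{\dot{m}_{\mathrm{fuel}}}{F}, \qquad
\eta_c = \frac{c_p\,T_1\!\left(\pi_c^{\frac{k-1}{k}} - 1\right)}{c_p\left(T_2 - T_1\right)}
       = \frac{T_1\!\left(\pi_c^{\frac{k-1}{k}} - 1\right)}{T_2 - T_1}.
```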
For a better overview of the engine parameters presented in Table 2 and their variation during the tested regimes, the following graphs present the force losses, fuel consumption, and compressor efficiency in comparison with the reference nozzle (Figures 8-12). The recorded parameters highlight that the temperature in front of the turbine drops as the number of chevrons increases at 35,000 RPM (idle), 101,000 RPM, and 116,000 RPM, but it tends to remain constant, with a slight increase, at 55,000 RPM. The compression ratio of the engine does not vary at the low regimes (idle and cruise), but at high regimes it drops when the number of chevrons increases. For all tested regimes, the fuel flow decreases as the number of chevrons increases. One of the most important observations is related to the generated force, which decreases as the number of chevrons increases; the same downward trend is observed for the intake air flow. The specific fuel consumption for the chevron nozzle cases decreases compared with the reference nozzle. Figures 11 and 12 present the most important parameters that dictate the micro turbojet engine performance for the three tested nozzles. Figure 11. Variation of the compression ratio depending on the intake air flow at constant speed.
Analyzing Figure 11, it can be observed that the work lines of the compressor for the chevron nozzles were slightly modified, but the surge line was not crossed, so the integrity of the engine was not endangered. Assessing the variation of the compressor efficiency characteristics with the air flow, it can be observed that, at the same regime, the compressor efficiency drops as the number of chevrons increases. Throughout the tests the engine operation did not present any problem for any of the tested nozzles; the only drawback is the force loss for the chevron cases. The modification of the exhaust section by manufacturing the chevrons has led to a growth of the internal resistances in the engine. This was reflected in the fact that the compressor had to generate greater mechanical work, which led to an increase in the air temperature after the compressor. Because the control law of the engine is based on constant speed, during the nozzle testing the engine tried to maintain the same power for the compressor; since the mechanical work on the compressor increased, the mechanical work on the turbine had to increase too. Greater mechanical work on the turbine can be produced by increasing the temperature in front of the turbine, which is obtained by increasing the fuel flow. A larger fuel flow would lead to a higher speed, which is contrary to the control law, so the only possibility to increase the mechanical work on the turbine without increasing the speed was to reduce the fuel flow and implicitly the temperature in front of the turbine. Decreasing the temperature in front of the turbine leads to a temperature reduction after the turbine, which in turn lowers the gas speed at the engine exit. The reduction in the intake air flow, the fuel flow, and the exhaust gas speed explains the reduction of the thrust force. Conclusions The main conclusion of this paper is related to the overall noise reduction of 2-3 dB for the configurations with chevron nozzles, which is obtained only at high-speed regimes.
The noise reduction of the chevrons is obtained through better mixing between the jet gases and the ambient air. The FFT analysis of the acoustic signals highlighted that the noise reduction generated by the chevrons has a broadband character. The vibration spectra highlighted that the main vibration source corresponds to the engine speed for each regime, and no specific rule for the variation of the vibrations with the engine speed was identified. The replacement of the reference nozzle with chevron nozzles does not produce any significant increase in the vibration levels; during the chevron tests the vibration levels were slightly increased, but without endangering the engine operation. Regarding the engine performance, the modification of the nozzle by cutting the chevrons leads to a reduction in the propulsion force, especially for the nozzle with 16 chevrons. In terms of engine thrust force, the engine suffered losses of up to 6% at Regime 3 and around 4% at the other regimes; the losses were slightly higher for the configuration with N = 16 than for N = 8. Regarding fuel consumption, a 6-7% decrease at the maximum regime was recorded. The main conclusion is that the noise reduction of almost 3 dB comes with a penalty on the engine performance. Future research will focus on optimizing the chevron design by finding concepts that balance noise reduction against performance losses without modifying the compressor work line, and by testing the new optimized chevron designs in an anechoic room with multiple microphones in a polar configuration. Data Availability Statement: The datasets used and analyzed during the current study are available from the corresponding author on request. Conflicts of Interest: The authors declare no conflict of interest.
8,250.2
2021-06-02T00:00:00.000
[ "Engineering", "Physics" ]
Entanglement in Lifshitz-type quantum field theories We study different aspects of quantum entanglement and its measures, including entanglement entropy, in the vacuum state of a certain Lifshitz free scalar theory. We present simple intuitive arguments, based on "non-local" effects of this theory, that the scaling of entanglement entropy depends on the dynamical exponent as a characteristic parameter of the theory. The scaling is such that in the massless theory, for small entangling regions, it leads to an area law in the Lorentzian limit and a volume law in the z → ∞ limit. We present strong numerical evidence in (1+1) and (2+1) dimensions in support of this behavior. In (2+1) dimensions we also study some shape-dependent aspects of entanglement. We argue that in the massless limit corner contributions are no longer additive for large enough dynamical exponent, due to non-local effects of Lifshitz theories. We also comment on possible holographic duals of such theories based on the sign of the tripartite information. Introduction Studying the physics of non-local correlations due to quantum entanglement in quantum many-body systems, quantum field theories and especially holographic field theories has gained increasing attention during the last decade [1-4]. Furthermore, in order to quantify entanglement and obtain a deeper understanding of it, different measures have been studied, such as entanglement entropy (EE), mutual information and logarithmic negativity [5-8]. In particular, EE is a good measure of entanglement for pure states, since it behaves as an entanglement monotone, decreasing under LOCC [9]. The recipe for computing EE is simple and straightforward. Consider the simplest setup where the physical system consists of two subsystems such that H_tot = H_A ⊗ H_Ā. Integrating out the degrees of freedom which live in the second subsystem, i.e., Ā, one finds the reduced density matrix of A, which we denote by ρ_A. EE is given by the corresponding von Neumann entropy of ρ_A. In contrast to the thermal entropy, the entanglement entropy is nonvanishing at zero temperature and thus it is a good probe for studying properties of quantum phase transitions. It also captures some characteristic properties of the underlying theory, the best known examples being even-dimensional conformal field theories (see [10,11] for reviews). At a quantum critical point, a physical system typically exhibits a Lifshitz scaling symmetry as follows [12-14]: t → λ^z t, x → λ x, (1.1) where z denotes the dynamical critical exponent. This inhomogeneous scaling in general breaks the Lorentz invariance; only in the special case z = 1 is the relativistic dynamics recovered. A huge amount of effort has been made to understand the entanglement structure in relativistic QFTs, which has resulted in many interesting features and of course many open questions; this constitutes the bulk of the literature on quantum entanglement in the context of QFTs. Although there have been some attempts to analyse the entanglement structure in the non-relativistic cases with z ≠ 1 (see [15-17] on the QFT side and [18-23] in the context of gauge/gravity duality 1), there are still several open questions about the role of z in the entanglement structure of such theories. Here one of our main motivations is to construct a more stringent analysis of the nature of quantum correlations at Lifshitz critical points.
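In formulas, the quantities described above are, in their standard form (with ρ_tot the full density matrix of the system):

```latex
\rho_A = \mathrm{Tr}_{\bar{A}}\,\rho_{\mathrm{tot}}, \qquad
S_A = -\,\mathrm{Tr}\left(\rho_A \log \rho_A\right).
```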
An interesting tool for doing so is studying quantum entanglement in theories with Lifshitz symmetry. We will consider the simplest example of such QFTs, a free scalar theory with generic dynamical exponent z. This family of QFTs exhibits several interesting features which are not of interest here (see e.g. [28] for a review). A similar analysis has been previously done for the same theory but for 0 < z < 1 in [29-32]. We show that the behavior for z > 1 is totally different from what was found for z < 1. 2 The first step in studying the entanglement structure of these theories is to focus on well-known measures such as entanglement entropy. This is often a formidable task to carry out analytically in the context of QFTs. In principle there are different strategies, including the replica trick [2], numerical approaches, or even 'generalized' holographic prescriptions [33], which one can employ to overcome this problem and gain some information about the entanglement structure of theories with Lifshitz symmetry. In this paper we tackle the problem numerically using the so-called correlator method [34,35]. This method is based on a semi-analytic approach leading to the reduced density matrix (as well as its n-th power) corresponding to a given subregion. The correlator method, which is applicable to quadratic field theories, is based on reducing all correlation functions to two-point functions restricted to a given subregion. This method, which we briefly review in the next section, makes it possible to calculate entanglement and Renyi entropies in free field theories. We employ it to study entanglement measures in the ground state of a free scalar theory with Lifshitz symmetry, focusing on the free massless and massive scalar theory with dynamical exponent z ≥ 1. In order to perform the numerical calculations we put the theory on a square lattice. The main difference between theories with Lifshitz symmetry and their Lorentzian counterpart, i.e. the same theory with dynamical exponent z = 1, is that the number of lattice sites which are 'correlated' together due to the spatial part of the kinetic term is z-dependent. To be more precise, for a positive integer z, the kinetic term has a spatial derivative of order 2z and thus correlates (2z + 1) lattice points. This fact revises the well-known intuitive understanding of the area law of entanglement in local field theories, which is based on the short-range correlation due to the kinetic term. In figure 1 we illustrate the intuitive picture of how the area law transits to a volume law as the dynamical exponent increases. 1 QFTs with Lifshitz symmetry have been widely studied in the context of gauge/gravity duality, see e.g. [24-27]. 2 We would like to thank Tadashi Takayanagi for bringing our attention to these references during the final steps of this work. Figure 1. The nearest points inside a disk entangling region to its boundary are shown in green. Those points which are correlated to the green ones due to the discrete kinetic term of this theory are shown with a red shadow. The left panel belongs to the Lorentzian case z = 1. This shows how we intuitively understand the area law of entanglement for regions whose characteristic length, say ℓ, is much bigger than the lattice spacing (the inverse of the UV cut-off). Moving from the left panel to the right, the dynamical exponent is increasing.
One can see that for a disk with a radius of ∼ 3.5 in units of the lattice spacing, for z > 6, since all points inside the entangling region are correlated with the green ones, we expect entanglement measures to scale with the volume (instead of the area) of the entangling region. We will show numerically that this is the correct scaling of entanglement entropy for large enough z in the following of this paper. Although this figure belongs to (2+1) dimensions, its horizontal (vertical) slices describe what happens in (1+1) dimensions, and it can be generalized to higher dimensions straightforwardly. The rest of this paper is organized as follows: in section 2 we briefly introduce our QFT setup and the correlator method which we use in our analysis. We report our results for different entanglement measures of our Lifshitz theory of interest in the massless regime in section 3 and show that EE obeys a volume law for large z. We also study the shape dependence of EE in 2+1 dimensions and report our results for different entangling regions such as a disk and a square, presenting strong evidence for the emergent volume-law behavior of EE at large z. We also show that corner contributions to EE cease to be local effects in such theories for z > 1. In section 3.2 we continue our study by considering a massive scalar theory in different dimensions. The last section is devoted to the conclusions and several possible interesting directions to investigate in future works. Note added. Reference [36], which appeared after our paper, has some overlap with our study in (1+1) dimensions; the results are compatible where overlap exists. QFTs with Lifshitz symmetry The starting point of our analysis is the nonrelativistic action (2.1) for a free massive scalar field in (d+1) dimensions [28], 3 which in the massless case is invariant under the scaling introduced in eq. (1.1). According to this action the mass dimensions of the fields and couplings follow accordingly, with z an integer parameter; in the following discussion we always consider the case z > 1. In the massless theory one can easily find the scaling behavior of the ground-state two-point correlator [32], which exhibits power-law growth of the correlator as the dynamical exponent increases. This is in agreement with what we have explained intuitively in figure 1. Using the momentum density conjugate to φ, i.e., π = ∂L/∂φ̇, the corresponding expression for the Hamiltonian density can be written down. In the following we concentrate on the case d = 1, keeping in mind that the generalization to arbitrary d is straightforward. In order to push our calculations further we introduce a UV cut-off and thus put the theory on a lattice. The Hamiltonian density on a lattice with N sites is given by eq. (2.5), where without loss of generality we set the lattice spacing equal to unity. Note that φ_n and π_n satisfy the standard commutation relations, i.e., [φ_n, φ_m] = [π_n, π_m] = 0 and [φ_n, π_m] = iδ_nm. In the following sections we will consider periodic and Dirichlet boundary conditions, which correspond to φ_N = φ_0 (π_N = π_0) and φ_N = φ_0 = 0 (π_N = π_0 = 0) respectively. 3 Note that the most general theory with such a symmetry may also include terms with lower-order spatial derivatives, say φ(∂·∂)^ν φ with ν an integer and ν < z. These terms have generic coefficients in the action which (for simplicity) we have set to vanish in our analysis (see [28] for a detailed analysis of such theories).
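The explicit expressions referred to above can be sketched as follows. This is a hedged reconstruction consistent with the stated properties (a spatial derivative of order 2z, an m^{2z} mass term, and a kinetic term that couples 2z+1 lattice sites); normalizations and sign conventions are assumptions, not copied from eqs. (2.1) and (2.5).

```latex
% Hedged sketch for d = 1; normalizations assumed.
S = \frac{1}{2}\int \mathrm{d}t\,\mathrm{d}x
    \left[\dot{\phi}^{2} - \left(\partial_x^{\,z}\phi\right)^{2} - m^{2z}\phi^{2}\right],
\qquad
H = \frac{1}{2}\sum_{n}\left[\pi_n^{2} + m^{2z}\phi_n^{2}
    + \Bigl(\sum_{j=0}^{z}(-1)^{j}\binom{z}{j}\,\phi_{n+j}\Bigr)^{2}\right].
```

The discrete z-th difference in the second expression makes the (2z + 1)-site footprint of the kinetic term explicit and reproduces the dispersion quoted in the next paragraph.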
In the case of periodic boundary conditions, using the translation invariance one can diagonalize the Hamiltonian given in eq. (2.5) with a Fourier transformation. Expanding the new canonical variables in terms of creation and annihilation operators, the Hamiltonian density becomes diagonal, with the dispersion relation given in eq. (2.9); we have used the sum rule (2.10) to derive it. Notice that although in eq. (2.1) we consider z as an integer parameter, eq. (2.9) actually provides the exact analytic continuation to non-integer dynamical critical exponents. Employing the above definitions of the dynamical fields in terms of creation and annihilation operators, the vacuum correlators are obtained, and in the infinite-lattice limit, i.e., N → ∞, they are expressed through Ω(x)² = m^{2z} + (2 sin πx)^{2z}. Of course the correlators only depend on (m − n) due to the translation invariance of the system. 4 Note that, according to the above expressions, in the case of periodic boundary conditions the massless limit, i.e., m → 0, is not well defined due to the existence of a zero mode. This forces us to consider a non-zero mass in order to regularize the corresponding divergences. One way to get rid of the zero mode is to break the translational symmetry of the system; as an example, one can consider Dirichlet boundary conditions on the borders of the total system. Since there is no translation invariance in this case, one should consider a Fourier sine transformation for the fields [38]. The analysis then proceeds in analogy with the case of periodic boundary conditions. Evaluating the corresponding two-point functions, eq. (2.14), one finds that the zero mode is no longer excited and there is no divergence in the m → 0 limit; in this case we will consider the massless limit. 5 4 See [37] for a similar analysis of the Lorentzian free scalar theory. 5 An explicit analysis of zero modes in EE in the relativistic free scalar theory has been recently carried out in [39]. Review of correlator method As we mentioned in the introduction, in order to find the entanglement entropy one should construct the reduced density matrix corresponding to the subregion of interest. Since we are interested in a free (quadratic) theory, we use the correlator method first introduced in [34]. 6 This method is based on a very simple idea: if we are given the expectation value of any operator restricted to region A, which we denote by O_A, this in principle fixes the reduced density matrix corresponding to this subregion uniquely through eq. (2.15). In the context of quadratic theories, thanks to Wick's theorem, one can reduce the right-hand side of eq. (2.15) to all possible 2-point functions of the theory. In this way it is not hard to show that the spectrum of the reduced density matrix is fixed by the eigenvalues ν_k of C = √(X·P), where X and P are given by the 2-point functions of the fields and of their conjugate momenta restricted to region A. Using the definitions of the entanglement and Renyi entropies, it is a simple task to express them in terms of the ν_k, eq. (2.19), where n_A is the number of lattice points enclosed by the entangling region A. The method just outlined is very general and can be employed in any quadratic QFT with different matter content, such as fermions, in any space-time dimension. Also note that using this method one can find other entanglement measures, e.g., mutual and tripartite information, as we will do in the subsequent sections.
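A minimal numerical sketch of this procedure for the (1+1)-dimensional Lifshitz chain with Dirichlet boundary conditions follows. The sine modes and the dispersion ω_j² = m^{2z} + (2 sin(πj/2N))^{2z} are assumptions modelled on the expressions quoted above (the analogue of eq. (2.14)), and normalizations may differ from the paper's; the entropy formula in terms of the eigenvalues ν_k of C = √(X·P) is the standard one.

```python
import numpy as np

def lifshitz_entropy_1d(N=200, z=4, m=1e-6, region=range(40, 80)):
    """Entanglement entropy of a block of sites via the correlator method.

    Sketch under assumptions: Dirichlet modes sin(pi j n / N) with
    omega_j^2 = m^(2z) + (2 sin(pi j / (2N)))^(2z); not the paper's exact
    normalization of eq. (2.14).
    """
    j = np.arange(1, N)                                 # Dirichlet mode labels
    omega = np.sqrt(m**(2*z) + (2*np.sin(np.pi*j/(2*N)))**(2*z))
    n = np.arange(1, N)                                 # interior lattice sites
    modes = np.sqrt(2.0/N) * np.sin(np.pi*np.outer(j, n)/N)   # orthogonal DST

    # Vacuum two-point functions  X = <phi phi>,  P = <pi pi>
    X = modes.T @ np.diag(0.5/omega) @ modes
    P = modes.T @ np.diag(0.5*omega) @ modes

    idx = np.array(list(region)) - 1                    # restrict to region A
    XA, PA = X[np.ix_(idx, idx)], P[np.ix_(idx, idx)]

    # Eigenvalues nu_k of C = sqrt(XA PA): eigenvalues of XA @ PA are nu_k^2
    nu = np.sqrt(np.linalg.eigvals(XA @ PA).real)
    nu = nu[nu > 0.5 + 1e-12]                           # nu = 1/2 contributes zero
    return np.sum((nu + 0.5)*np.log(nu + 0.5) - (nu - 0.5)*np.log(nu - 0.5))

print(lifshitz_entropy_1d())
```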
Entanglement measures in Lifshitz-type QFTs The massless limit restricts us to impose the Dirichlet boundary condition, in which the zero modes are not excited and we do not need an IR cutoff. We consider the discrete version on a lattice and use eq. (2.14) to find the corresponding correlators. The results for entanglement entropy are collected in figure 2, where we demonstrate numerical results for different values of the dynamical exponent. Based on this figure, it is evident that the value of entanglement entropy increases as the dynamical exponent is increased. Such behavior is not surprising because, as we have seen in the discrete version of the Hamiltonian in eq. (2.5), for larger dynamical exponents the number of lattice points which are coupled together increases, and thus the correlation between points inside and outside the entangling region increases. Indeed, for generic z (> 1), the number of correlated points due to the kinetic term grows as 2z + 1 (this is illustrated in any horizontal or vertical slice of the different panels of figure 1). Interestingly, according to figure 2, for large values of z and small enough entangling regions the theory exhibits a volume-law scaling of the entanglement entropy. By small enough we mean ℓ/N ≪ 1. This is in contrast with the behavior of entanglement entropy in typical vacuum states of a local relativistic Hamiltonian, which exhibits area-law scaling and in (1+1) dimensions is replaced by a logarithmic scaling. Again, from the expression of the lattice Hamiltonian, i.e., eq. (2.5), it is clear that by increasing the critical exponent the theory starts to show nonlocal effects, such that for z ≫ 1 the corresponding QFT becomes highly nonlocal. This is why we expect, and numerically verify, the volume law rather than the area law. Actually, according to figure 2 one expects that EE is proportional to ℓ with a certain power, i.e., S ∼ ℓ^α, and in order to confirm the volume law the asymptotic behavior should be α|_{z→∞} → 1. In the left panel of figure 3 we check this asymptotic behavior as a function of the dynamical exponent for two different entangling regions; according to these curves, the plotted quantity approaches unity as z goes to infinity, which confirms the volume law. Volume-law scaling of entanglement entropy has been previously observed in several field theories with nonlocal effects [40-44] (see also [45] for a perturbative study of entanglement entropy in nonlocal theories). 7
This is simply because the boundary of A andĀ are the same (∂A = ∂Ā) and the characteristic length of the boundary is smaller than the non-locality scale. In this case it is not enough to consider a strip of dof's around ∂Ā to find the leading term of entanglement entropy. In other words for regions which are large such that their complements show a volume law, the entanglement entropy does not scale with area but with the volume of their complement. With this in mind S A = SĀ is still valid in these theories. For small values of the dynamical exponent we still expect the logarithmic scaling to be the leading contribution to entanglement entropy. For a fixed subregion as the dynamical exponent increases we expect the scaling to tend to volume law. In order to investigate this competition between the contributions due to logarithmic and volume scaling more precisely, we study the ratio of these portions with considering the following fit function 0 is a constant (and also non-universal) term where we drop it in the following discussion by defining a subtracted EE as ∆S (z) ( ) ≡ S (z) ( )−S z 0 . Analysing the numerical data shows that the ratio of the volume term with respect to the subtracted entanglement entropy, i.e., S (z) vol. ∆S (z) should tend to / in the large z limit. This behavior shows that for large values of critical exponents the volume law contribution to EE substantially is dominant. Based on our numerical results, we propose that there should be a leading piece in S (z) ( ), whose dependence on characterizes a transition from area to volume law at a Lifshitz quantum critical point, and its form for d = 1 is given by where for z = 1 should be replaced by a logarithmic scaling. In the right panel of figure 3 we demonstrate the numerical results which are in agreement with our proposal. Of course the scaling introduced in eq. (3.2) is not the most general form but rather the simplest function supporting such a transition between area and volume law. We expect whatever the exact analytic behavior is, which can be found via different methods, e.g., working out the propagator of scalar field on an n-sheeted plane (with Lifshitz symmetry), having the same behavior as eq. (3.2) in the → 0 limit. It is also worth to note that although for large values of z and for N , the scaling of EE differs from the usual area law, but for pure state that we have considered the S A = SĀ condition still holds. We demonstrate this behavior in figure 5. A relevant question would be how the entropy depends on the dynamical exponent z. We have also studied this numerically which the result is shown in figure 6. As one can see there is a quadratic growth as z starts to increase from the Lorentzian case. The validity of this regime depends on the size of the entangling region. In the large z limit the numerical results shows linear growth of entanglement entropy as a function of z. The smaller the region is, the faster the entropy growth enters the linear regime. This is in agreement with the analytic result found in [36] for a different type of entangling regions called p-alternating sublattice. Mutual and tripartite information After analyzing the behavior of entanglement entropy at a critical point with Lifshitz scaling, we now proceed with considering other entanglement measures. In particular we are interested in mutual and tripartite information between different subregions. 
These quantities are defined in terms of EE as above, where A, B and C are three entangling regions with lengths a, b and c respectively, placed on the lattice in the configuration described below. Although the mutual information is always positive, which is reminiscent of the subadditivity of EE, the tripartite information can be positive, negative or zero depending on the quantum field theory and on the subregion configuration of interest. It is important to mention that, in contrast to EE, which is divergent, these two quantities are defined such that the UV-divergent contributions of the different subregions cancel each other and the resulting expressions are finite. Using the above definitions of the mutual and tripartite information and eq. (2.19), one can find these quantities for a Lifshitz QFT. Let us first focus on the mutual information. Here, for simplicity, we restrict our analysis to a configuration where the lengths of regions A and B are equal, i.e. a = b; the generalization to more generic configurations is straightforward. For such a configuration, we expect the mutual information between two fixed subregions to be a decreasing function of the separation between them, which we denote by d. The results are collected in figure 7. Based on this figure there are two points at hand: first, as the dynamical exponent increases, the mutual information between subregions for a fixed configuration increases. This is in agreement with what we explained about the increase of EE as the dynamical exponent increases. The second point is related to the well-known fact that the mutual information between fixed subregions vanishes as the separation between them increases. Our results show that the critical separation between subregions increases with the dynamical exponent. 10 Once again, this behavior is expected due to the enhancement of spatial correlations for larger values of z. The behavior of the tripartite information is qualitatively similar to that of the mutual information. One can check that, similar to what happens in the Lorentzian free scalar theory, the tripartite information is always positive in this Lifshitz scalar theory. This is why we think that our results may not be comparable to some results available in the literature which are found by means of the Ryu-Takayanagi proposal on geometries with asymptotic Lifshitz symmetry (see for instance [18,23]). It is worth reminding the reader that in the context of holographic QFTs it is a well-known fact that the holographic tripartite information, at least for states which have a classical gravity dual, must always be negative; the proof of this statement is based on the minimality condition in the Ryu-Takayanagi formula [49]. 11 (2+1)-dimensions In this section we extend our previous analysis to one higher dimension, which enables us to further investigate the emergence of the volume law and also the shape dependence of EE in the presence of the dynamical exponent z. 12 In order to do so we first generalize the method introduced in section 2 to (2+1) dimensions. 9 Some aspects of mutual information for z < 1 have been previously studied in [30]. Figure 8. Different entangling regions on a constant time slice of a (2+1)-dimensional Lifshitz field theory. From the left, the first panel shows a disk entangling region and the second one a square region. The third and fourth shapes are deformations of the square region.
The area of the shape in the third panel is equal to that of the square, but they have different numbers of corners (6 and 4 respectively). The shape in the fourth panel, with 8 corners, has the same volume as the square. We call the third one the 'a-shape' and the fourth one the 'v-shape'. One can straightforwardly show that the vacuum correlators with periodic boundary conditions in this case are determined by Ω²_{k_x,k_y} = m^{2z} + (2 sin(πk_x/N_x))^{2z} + (2 sin(πk_y/N_y))^{2z}, where (N_x, N_y) are the numbers of lattice sites and (k_x, k_y) the momenta in the two spatial directions. It is also straightforward to generalize the vacuum correlators to the Dirichlet boundary condition in this case; this is given by a double Fourier sine transformation. The rest of the method, which we reviewed briefly in section 2, applies in exactly the same way in (2+1) dimensions. In this section we first work out the entanglement entropy for disk and square entangling regions (see figure 8). Our main concern here is to verify the emergence of the z-dependent scaling behavior of entanglement measures (specifically EE), and we verify this numerically below. The main difference between the Lorentzian and Lifshitz cases is that the corner contributions to EE are local in the former case but non-local in the latter. It is well known in the literature that corner contributions in Lorentzian theories are local effects, and thus in (2+1) dimensions, where they have a logarithmic behavior, the coefficient of this logarithmic term is an additive function for any number of singularities in the entangling region. Here we present some arguments that the dynamical exponent breaks this property, and as z is increased the coefficient of the logarithmic term recedes from being additive. Disk and square entangling regions. In this subsection we focus on disk and square entangling regions (see the two left panels in figure 8). Once again, as we explained previously (see figure 1), due to the long tail of the kinetic term, which causes (2z + 1) lattice sites to be correlated together, we expect the EE to be a monotonically increasing function of z. In figure 9 we present the area-law behavior of the entanglement entropy for the square entangling region for z up to 20. In figure 10 we show the behavior of the square and disk entangling regions for large values of z; one can see the change in the scaling of EE for regions inscribed in a square of side 20. We are interested in the scaling of entanglement entropy at large enough dynamical exponent for different entangling regions. Our numerical data show that the EE of the disk entangling region in (2+1) dimensions, for large enough z and small enough regions, fits a function whose first term scales with the volume of the disk; apart from this first term, which is a sign of non-locality in this theory, the fit is the same as the well-known form for disk entanglement in Lorentzian field theories. Our fitting data show that, similar to what we reported in the previous section for (1+1) dimensions, for small regions the ratio of the volume term increases as the dynamical exponent increases. There is one point we would like to clarify here about the calculation of the entanglement entropy of a disk in (2+1) dimensions on a square lattice. It is sometimes mentioned in the literature that entanglement entropy is not a well-defined measure for a disk in (2+1) dimensions, since it is not a smooth function of the radius of the disk.
This is because the EE of a disk shows significant fluctuations if one considers 'continuous' radii for the disks; by continuous we mean radius values which are not multiples of the lattice constant. Due to these fluctuations, people focus on mutual information as a more interesting measure for disk entangling regions in (2+1) dimensions [1]. 13 Our numerical investigation shows that one can simply get rid of these fluctuations by focusing on disks whose radii are multiples of the lattice spacing; as one can see, there is no such fluctuation in our data in figure 10. We have also calculated the entanglement entropy of a square entangling region. The main difference between the square and disk entangling regions is the singularities (corners) present in the square. It is a well-known fact that these singularities produce new divergent terms in the entanglement entropy (see e.g. [53]). Our numerical analysis shows that the entanglement entropy for small squares fits a function containing a subleading logarithmic term; we postpone a more careful study of the coefficient of this logarithmic term to the next subsection. Here we are interested in how large the contribution of the volume law to the entanglement is. Similar to what we reported for the disk, we found that the contribution of the volume law for small regions increases with the dynamical exponent z. Figure 11 shows the same behavior that we previously showed in figure 3 for (1+1) dimensions: here, for a square entangling region with a fixed side ℓ, the quantity plotted in the left panel approaches a constant value, which supports the volume-law behavior in the z → ∞ limit, and in the right panel we plot the best fit of our proposed function for large values of z. In figure 12 we plot the entropy as a function of the volume, i.e. the number of sites enclosed by the entangling region. This is done for the square and the v-shape, which have the same volume. One finds that for small regions, where the entropy scales with the volume, the entropy of these two shapes coincides, but for larger regions the entropy of the region with the larger area grows faster; for larger values of z the region of coincidence increases. 13 See also [52], where an F-theorem is introduced in (2+1) dimensions using mutual information for concentric disks. Figure 12. Entanglement entropy as a function of the number of sites enclosed by the entangling region, n_A, for the square and the v-shape. These two shapes have the same volume (see figure 8). In the left panel we have set z = 20 and in the right panel z = 50. For small regions, where we expect a volume law, the entropy of both shapes coincides. As n_A increases, the entanglement of the v-shape overtakes the square since it has a larger area. We have set N_x = N_y = 250 and m = 0 with Dirichlet boundary condition. We have also studied the z-dependence of the entanglement entropy. In figure 13 we show the results for square entangling regions of different sizes. Our results are similar to what we found in (1+1) dimensions: as the dynamical exponent increases from the Lorentzian case, for small values of z the entropy grows quadratically, with the extent of this regime depending on the size of the entangling region; after this regime the entropy growth enters a linear regime, with no sign of stopping in our results.
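As a generic illustration of the kind of diagnostic used in this section (not the authors' fit functions), an effective exponent α in S ∼ ℓ^α can be extracted from such data by a log-log fit and compared with the volume-law values (α = 1 in d = 1, α = 2 in d = 2):

```python
import numpy as np

def scaling_exponent(lengths, entropies):
    """Effective exponent alpha in S ~ ell^alpha from a log-log linear fit.
    A generic diagnostic sketch, not the fit function used in the paper."""
    slope, _ = np.polyfit(np.log(np.asarray(lengths, float)),
                          np.log(np.asarray(entropies, float)), 1)
    return slope

# Example with synthetic data obeying a d = 2 volume law (S ~ ell^2):
ells = np.array([4, 6, 8, 10, 12])
print(scaling_exponent(ells, 0.7 * ells**2))   # ~2, the d = 2 volume-law value
```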
Our results show that for a generic dynamical exponent z we expect the leading term of the entanglement entropy (and also of the Renyi entropies) to be a function which interpolates between these two behaviors; the simplest choice with this property in (2+1) dimensions is the one we adopt for the Lifshitz theory of interest.

In this subsection we continue our study focusing on the corner contributions to entanglement entropy in (2+1) dimensions. Corner contributions to entanglement entropy are well known to be local effects. In a local (2+1)-dimensional theory, corner contributions appear as a subleading logarithmically divergent term [1,51,54]

S_A = S_area + S_corner log ℓ + S_0, (3.7)

where S_corner is a sum of contributions from the individual corners and θ_i is the opening angle of the i-th corner. Since the corner contribution is a universal term in the entanglement entropy expansion, a very important question is what kind of characteristic information about the theory it contains. Although we do not know the answer to this question in general, recently, in the context of AdS/CFT, it has been shown that in the smooth limit, i.e. θ → π, the coefficient of this term is the same as what appears in the two-point function of the stress tensor of the underlying conformal field theory [57].

In order to study these singularity effects in a Lorentzian theory, one can consider entangling regions with the same area but a different number of singularities. See the two middle panels of figure 8, which show a square (with four corners) and the a-shape, a deformation of the square which preserves the area but has six corners. In this case we can forget about the leading area term and focus on the subleading logarithmic term, which is supposed to capture the difference between these two regions. If one works out the entanglement entropy for these two regions and extracts the logarithmic term, we expect (from locality) the coefficient of the logarithm to be simply the sum of the individual corner contributions, which can be verified numerically (see e.g. [53]).

In the case of non-local theories, one would instead consider two regions with the same volume but a different number of corners. Figure 8 also shows a deformation of the square which has the same volume as the square but eight corners (we refer to this shape with the index 'v'). Our numerical results for both the 'a' and 'v' entangling regions show that as the dynamical exponent increases, the ratio of the corner contribution to that of a square starts to decrease, which shows that corner contributions are no longer local effects. The way we explain what happens in the Lifshitz theory is as follows: the contribution of each corner is mixed up with the others due to the correlation of (2z+1) lattice sites. This is demonstrated in figure 14. In order to have a neat picture we have focused only on the points nearest to each singularity, which are shown in green. This is done for the 'v'-shape, and a similar thing happens for the 'a'-shape region. The red shadowed points are those which lie in the (2z+1) neighbourhood of the green points. The left panel corresponds to the Lorentzian case z = 1, in which none of the shadowed points of different green points coincide with each other. The right panel corresponds to z = 6. In this case there are many shadowed points which belong to the shadow of more than one corner (green) point. The number of such points increases with the dynamical exponent, and this is why we think the corner contributions are no longer additive in the case of the Lifshitz scalar theory. This mixing will also happen in Lorentzian theories if there are adjacent corners in the entangling region [58].
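The mixing argument can be made concrete by counting how many lattice sites fall inside the (2z+1)-site neighbourhood of more than one corner. The sketch below does exactly that; the corner coordinates are hypothetical and only meant to mimic an eight-cornered v-like region, and the square (Chebyshev-style) neighbourhood is our reading of "(2z+1) lattice sites being correlated", not a definition taken from the paper.

```python
from itertools import product

def shadow(corner, z, N):
    """Sites in the (2z+1) x (2z+1) neighbourhood of a corner site on a periodic lattice."""
    cx, cy = corner
    return {((cx + dx) % N, (cy + dy) % N) for dx, dy in product(range(-z, z + 1), repeat=2)}

def shared_shadow_sites(corners, z, N):
    """Number of sites lying in the shadow of more than one corner."""
    counts = {}
    for c in corners:
        for s in shadow(c, z, N):
            counts[s] = counts.get(s, 0) + 1
    return sum(1 for v in counts.values() if v > 1)

# Hypothetical corner coordinates of a v-shaped region (for illustration only).
corners = [(10, 10), (10, 20), (14, 20), (14, 16), (18, 16), (18, 20), (22, 20), (22, 10)]
for z in (1, 6):
    print(z, shared_shadow_sites(corners, z, N=250))   # no overlaps at z = 1, many at z = 6
```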
Entanglement measures in massive scalar theory In this section we continue our study by considering the massive scalar field theory. In order to investigate the effects of nonzero mass on the entanglement we will consider the periodic boundary condition for the field in different dimensions. The results for the entanglement and Renyi entropy in (1 + 1)-dimensions are summarized in figure 15. According to these results similar to the massless case the EE increases with the dynamical exponent. The important thing to note here is that the massive case is completely different from the massless case. We have not found volume law regime for small subregions at any value of z in our study. Our results shows that for periodic boundary conditions, for mass parameters which lead to stable numerical results, we always recover an area law scaling for entanglement entropy. Increasing the mass parameter makes the entanglement to grow slower as a function of the subregion length and also the dynamical exponent. We have plotted some results for entanglement and Renyi entropies in figure 15. In case of Dirichlet boundary condition for small mass parameters in the m → 0 limit we recover the results reported in the massless section. We have not found any range of parameters even with large dynamical exponents with a volume law behavior for small regions. This is in agreement with the behavior of the correlator in (1 + 1)-dimensions in the small r limit which is given by for integer dynamical exponent z > 3. The scaling of the leading term does not depend on the dynamical exponent and is divergent in the m → 0 limit. This causes our intuitive picture (illustrated in figure 1) not to be valid in the massive regime. A more careful analysis is needed to understand the massive regime of this theory which we postpone it for future works. Conclusions and discussions We have mainly studied the entanglement structure of a free massless scalar theory (see equation (2.1)) with Lifshitz scaling symmetry. This family of theories are supposed to describe systems at quantum critical points. We have analysed our theory of interest in 1+1 and 2+1 space-time dimensions. We explored the role of the dynamical exponent in entanglement measures. We would like to first summarize our main results: • In the massless limit for small entangling regions the leading part in the EE expansion transits from area law to volume law for large dynamical exponents. Regarding to our results in 1 and 2 spatial dimensions for several entangling regions, we propose the following scaling behavior as the leading term of EE in a (d + 1)-dimensional theory where is a characteristic length of the entangling region. Indeed, the spatial part of the kinetic term in this theory couples (2z + 1) lattice sites together at each point, and therefore the value of entanglement grows due to the correlation between more JHEP07(2017)120 points in comparison with the Lorentzian case. In other words the theory shows more non-local effects for larger dynamical exponents. This is why one would naturally expect a volume law scaling rather than an area law for small entangling regions. Regarding to this result we would like to also note that this result is in contrast with what has been previously reported in [16] where the author has argued that in any theory with a generic form of dispersion relation as ω = f (k 2 ), where k µ is the momentum vector, the EE scales with the area of the entangling region. 
• We have shown that the mutual information is an increasing function of the dynamical exponent z. Also the fall off mutual information in large separation limit is slower in comparing to the relativistic case. This behavior is also due to stronger correlations between lattice points for larger values of z. • Similar to free scalar Lorentzian theory, the resultant tripartite information is always positive for all values of z. This shows that at least the vacuum state of this theory can not have a solution of a classical gravity theory as a holographic dual. This is the main clue that forbids us to compare our results (especially eq. (4.1)) with some previous studies on Lifshitz theories in the context of gauge/gravity duality. • In 2 spatial dimensions we have argued that corner contributes which are known to be local effects in Lorentzian field theories become more and more non-local while the dynamical exponent increases. As a result corner contribution to entanglement entropy are no more additive in these theories. We have also observed that for massive theory the behavior of the m → 0 limit at fixed z is very similar to the behavior of large z at fixed m. Although at the moment it is not an easy task to understand this behavior from the correlation functions for generic z because of technical reasons, but it is worth to note that this behavior is in agreement with what is known about a massive scalar field propagating on a Lifshitz background in the context of holography (see eq. 3.23 of [24]). Of course it would be of great interest if one can verify our results with a concrete analytical analysis. As an example one can study propagators of free scalar theories on a n-sheeted space-time and work out the leading behavior entanglement and Renyi entropies analytically. Recently a nice paper appeared where heat kernels for non-relativistic field theories on specific manifolds has been worked out in it [60,61]. In principle it is possible to extend this study to singular manifolds and work out the entanglement entropy. Finally we would like to note some interesting questions which generate new directions for our analysis. One way to extend our study is to go beyond the vacuum state. In the case of mixed state, such as thermal states it is interesting to analyse other entanglement measures which are more suitable for mixed state, e.g., logarithmic negativity [7,8,59]. Another interesting question is about the time evolution of EE in such theories which is now a typical way to study universal behaviors of these family of theories in out of equilibrium phases. One can study entanglement measures under quantum quenches and study the corresponding thermalization process. Indeed, this analysis may shed light on JHEP07(2017)120 the role of critical exponent on the entanglement structure. We will report some interesting results in these directions in early future [62].
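For completeness, the mutual and tripartite information referred to in the summary bullets above are simple combinations of subsystem entropies. A minimal sketch, assuming regions are represented as sets of lattice sites and that some entropy routine S (such as the correlator-method sketch given earlier) is available:

```python
def mutual_information(S, A, B):
    """I(A:B) = S(A) + S(B) - S(A u B); S is any callable returning the EE of a region."""
    return S(A) + S(B) - S(A | B)

def tripartite_information(S, A, B, C):
    """I3(A:B:C) = S(A)+S(B)+S(C) - S(AB) - S(AC) - S(BC) + S(ABC).
    A non-positive I3 for all regions is the holographic (monogamy) condition
    that the text reports is violated for this theory."""
    return (S(A) + S(B) + S(C)
            - S(A | B) - S(A | C) - S(B | C)
            + S(A | B | C))
```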
9,781.8
2017-07-01T00:00:00.000
[ "Physics" ]
Biomimetic Artificial Membrane Permeability Assay over Franz Cell Apparatus Using BCS Model Drugs A major parameter controlling the extent and rate of oral drug absorption is permeability through the lipid bilayer of intestinal epithelial cells. Here, a biomimetic artificial membrane permeability assay (Franz–PAMPA) was validated using a Franz cell apparatus. Both high and low permeability drugs (metoprolol and mannitol, respectively) were used as external standards. Biomimetic properties of Franz–PAMPA were also characterized by electron paramagnetic resonance (EPR) spectroscopy. Moreover, the permeation profiles for eight Biopharmaceutic Classification System (BCS) model drugs cited in the FDA guidance and another six drugs (acyclovir, cimetidine, diclofenac, ibuprofen, piroxicam, and trimethoprim) were measured across Franz–PAMPA. Apparent permeability (Papp) Franz–PAMPA values were correlated with the fraction of dose absorbed in humans (Fa%) from the literature. Papp in Caco-2 cells and in the Corti artificial membrane were likewise compared to Fa% to assess Franz–PAMPA performance. Mannitol and metoprolol Papp values across Franz–PAMPA were lower (3.20 × 10−7 and 1.61 × 10−5 cm/s, respectively) than those obtained across the non-impregnated membrane (2.27 × 10−5 and 2.55 × 10−5 cm/s, respectively), confirming the resistivity of the lipidic barrier. Performance of the Franz cell permeation apparatus using an artificial membrane showed an acceptable log-linear correlation (R2 = 0.664) with Fa%, as seen for Papp in Caco-2 cells (R2 = 0.805). The data support the validation of the Franz–PAMPA method for use during the drug discovery process.

Introduction Favorable absorption, distribution, metabolism, and excretion (ADME) of orally administered drugs are essential for therapeutic activity in vivo. Poor oral bioavailability contributes to a very high failure rate during pre-clinical drug development [1,2]. In this regard, the Biopharmaceutic Classification System (BCS) proposed by Amidon and co-workers [3] has been widely used as an important tool to support early drug development [4][5][6]. For orally administered drugs, gastrointestinal physiology is a key factor impacting the rate and extent of drug absorption [7]. Transcellular passive diffusion across membranes is the major route and is governed by several molecular properties such as the partition and distribution coefficients, as well as molecular weight [8,9]. Currently, important tools based on physicochemical properties and in vitro assays are used to predict in vivo gastrointestinal absorption [10]. In vitro methodologies include animal [11,12] or human tissues [13], cultured cells [14,15] and artificial membranes [16][17][18]. The Caco-2 cell monolayer in vitro model is thoroughly studied and generally mimics major transport pathways in the gastrointestinal tract [19].
However, this method is limited by long cell growth and differentiation cycles, risks of microbial contamination, and high implementation costs [19][20][21] Cell-free permeation systems using artificial membranes are gaining progressively more interest as an alternative model to cell-based systems that can be simpler, less time consuming, and costeffective [22,23]. Depending on the composition of the barrier, it can be classified as biomimetic barrier which is constructed from (phospho)lipids or, alternatively, from non-biomimetic barrier containing dialysis membrane [24]. Particularly, there is a growing interest in PAMPA studies with direct comparisons to Caco-2 cells using a consistent number of drugs displaying equally well prediction of in vivo data between them [25]. In this regard, major differences of key components amid cell-free membranes currently used in permeability systems was highlighted by Berben et al. (2018) [23]. Here, a previously validated biomimetic artificial permeability membrane comprising of a microfilter impregnated by a phospholipid solution [5] was mounted on horizontal Franz-cells diffusion chambers (Microette™, Hanson Research) [20]. This new setup approach, herein called Franz-PAMPA (Figure 1), was challenged to assess permeability of BCS model drugs simulating gastrointestinal permeation. Therefore, the aim of this study was to validate this Franz-PAMPA system by evaluating the correlation power between apparent permeability (Papp) for BCS model drugs to their fraction of drug absorbed (Fa%) in humans for rapid and reliable information about passively transported drugs [25,26]. Impregnation of Membrane Support Membranes were impregnated by immersion for 60 min (22 ± 1 • C) with a lipid solution (mixture of phospholipids), as previously reported [5]. Briefly, the lipid phase solution for impregnation was a mixture of 1.7% phospholipids (Lipoid ® E 80, Ludwigshafen, Germany), 2.1% cholesterol (Sigma-Aldrich Chemical Co., Milan, Italy), and 96.2% n-octanol (Synth, Diadema, Brazil). Excess lipid was absorbed with cellulose filter paper over 30 min. Next, all impregnated membranes (N = 20) were weighed on a microanalytical scale (Mettler Toledo, mod. XPE56DR, Columbus, OH, USA) and evaluated to check for its accuracy (211.2 mg ± 6.0%). Prior to use, impregnated membranes were protected from moisture atmosphere and refrigerated (−8 • C, 24 h). It is worth mentioning that all membranes were stabilized prior to use. Stabilization was confirmed by EPR spectra which did not show any signals of physicochemical degradation: none of membranes showed any difference on 14 N-hyperfine coupling constant value (14.8 G) demonstrating its stability [25]. EPR signals were compared just after 24 h of refrigeration and post-run permeability studies as well as after a month of refrigerated storage time (data not shown). Electronic Paramagnetic Resonance (EPR) The biomimetic membranes were impregnated, as described above. Spin labeling technique was employed to examine the conformational structure of the membrane using 5-DSA or 16-DSA. EPR was performed using a Bruker ESP 300 spectrometer (Bruker, Rheinstetten, Germany) equipped with an ER 4102 ST resonator. The instrument settings were microwave power of 2 mW; modulation frequency of 100 KHz; modulation amplitude of 1.0 G; magnetic field scan of 100 G; sweep time of 168 s; and a detector time constant of 41 ms. EPR spectral simulations were performed using the nonlinear-least-squares (NLLS) program for an isotropic model. 
The biomimetic membrane was introduced into flat, quartz EPR cell to perform the EPR measurements at room temperature (~25 • C). Permeation Studies Permeation studies were performed using a Franz vertical diffusion cell (MicroettePlus, Hanson Research, CA, USA). Impregnated artificial membranes (Franz-PAMPA) were positioned between upper and lower part of diffusion cells and, the donor (1 mL) and receptor (7 mL) compartments holding phosphate-buffered solution (PBS) pH 7.4 (USP 32). In order to minimize the unstirred water layer (UWL), receptor compartment media was stirred (500 rpm). The temperature was kept constant (37.0 ± 0.5 • C). Each drug (n = 3) was added in the donor compartment at a fixed concentration (=10 mg/mL). One milliliter of saturated drug solutions was transferred to the donor compartments and capped to prevent evaporation. The experiments were performed under 'infinite dose' conditions [26,27], except for caffein, metoprolol, propranolol, naproxen, ranitidine, and atenolol (D 0 ≤ 0.01). Individual drug solubility is further shown in results section. Metoprolol was used as a low/high BCS permeability class boundary reference drug for the Franz-PAMPA assay [28]. Samples from permeation studies were collected during 12 h (0.25; 0.5; 1.0; 2.0; 3.0; 4.0; 5.0; 6.0; 10.0, and 12.0 h) and analyzed by HPLC (Shimadzu Class VP; Kyoto, Japan or Agilent 1220, Santa Clara, CA, USA) according to official compendiums (USP 32 or Brazilian Pharmacopeia 4th edition). The sampling volume was immediately replaced with the same volume of fresh PBS prewarmed solution at 37 • ± 0.5 • C. Calibration curves were performed at least at three concentration levels for each drug tested, in a GLP-accredited laboratory (Institute of Pharmaceutical Sciences, Goiânia, Goiás, Brazil). The validated chromatographic conditions used for the drug permeability assay are given in Table 1. Permeability Calculations The diffusion area (A) was calculated from the radius of the Franz cell and was 1.77 cm 2 . Flux through membrane to receptor compartment (J; µg/cm 2 /s) was calculated by dividing the amount of drug accumulated in the receptor compartment by A. The Fick's first law was derived to calculate flux (J) at steady state (Equation (1)): where dQ is the amount of drug across the membrane (in moles), dt the permeation time (in seconds), and A the diffusion area (in cm 2 ). Note that J was obtained from the slope of the curve at steady state from typical mean cumulative concentration-time plots (minimum of triplicates), as further shown in results section. Coefficient of variation (CV) of flux for each drug was also measured. The apparent permeability (Papp) was calculated normalizing the flux (J) over the drug concentration in the donor compartment C 0 , as described by the following Equation (2): This approximation was used in all cases, even when sink conditions do not hold and donor concentrations change with time, as already described for some experiments [29]. In addition, the following equation was used to account for the fact that in most cases sink conditions were not maintained [30]. where C receiver,t is the drug concentration in the receiver chamber at time t, Q total is the total amount of drug in both chambers, V receiver and V donor are the volumes of each chamber, C receiver,t−1 is the drug concentration in receiver chamber at previous time, f is the sample replacement dilution factor, S is the surface area of the membrane, ∆t is the time interval and P eff is the permeability coefficient. 
This equation considers a continuous change of the donor and receiver concentrations, and it is valid in either sink or non-sink conditions. The curve-fitting is performed by non-linear regression, by minimization of the sum of squared residuals (SSR), where C r,i,obs is the observed receiver concentration at the end of interval i, and C r,i (t end,i ) is the corresponding concentration at the same time calculated according to Equation (3) [29]. Classification as high permeability was established if the calculated permeability (under sink or non-sink conditions) was higher than 0.8 × the metoprolol permeability [31]. The in vitro permeability (Papp) of each drug studied was compared to in vivo absorption in humans (Fa%), Papp in the Corti artificial membrane [16], and Papp in Caco-2 cells.

EPR Analysis and Membrane Stability The Franz-PAMPA was characterized by EPR spectroscopy of lipid spin labels of the doxyl class. The spectra showed a mobility consistent with a lipid bilayer (Figure 1). Two analogs of stearic acid, 5-DSA and 16-DSA, and two analogs of phosphatidylcholine, 5-PC and 16-PC, having the nitroxide radical positioned at the 5th and 16th carbon atom of the acyl chain, respectively, were used to examine the molecular dynamics at two regions within the bilayer. The EPR spectra of these four spin labels are shown in Figure 2. The isotropic 14 N-hyperfine coupling constant, a 0 , is an EPR parameter that increases with increasing dielectric constant (i.e., solvent polarity) of the medium in which the nitroxide radical is dissolved. The measured value of 14.8 G is consistent with a spin label in a membrane [32].
The spin labels 5-DSA and 5-PC, with the nitroxide moiety in the region near the polar head group of the bilayer, showed more restricted rotational motion relative to their positional isomers 16-DSA and 16-PC, in which the nitroxide radical is more deeply inserted in the hydrophobic core. These results indicate the existence of a gradient of flexibility along the acyl chain, with more restricted motion in the polar region. This pattern is consistent with the properties of lipid bilayers from eukaryotic cells. The rotational motion at the polar interface of the membrane was more restricted for the spin label analog of phosphatidylcholine (5-PC), with a τ C of 14.2 × 10 −10 s, than for the stearic acid one (5-DSA), whose τ C was 8.4 × 10 −10 s (Figure 2). Membrane barriers from similar models such as PAMPA and PVPA have been proven to be stable in a pH range from 2 to 8 [33]. Here, EPR spectra were also recorded before and after permeation studies to check the integrity of the biomimetic membranes. No leaching of barrier constituents such as phosphatidylcholine and lipids into the donor compartment could be evidenced, as none of the membranes showed any difference in the 14 N-hyperfine coupling constant value (14.8 G), demonstrating their stability [34]. Likewise, using the same chemical composition as [5], acidic and basic drugs also showed pH-dependent permeability according to the pH partition theory [25,35]. Accordingly, a close Pearson's correlation coefficient (r = 0.7355) was seen between our Franz-PAMPA data and PAMPA pH 7.4 data from the literature. In this regard, the pHs of all drug solutions were measured to assure buffer capacity and drug stability. Some authors correlated membrane flux with the fraction absorbed in humans, showing that the flux through the egg lecithin/dodecane membrane correlated better than octanol/water logD values with the fraction absorbed in humans [17]. Later, an in-depth investigation of the impact of pH on drug Franz-PAMPA permeability will be necessary to increase the biomimetic and absorption-predictive power of this method, although the study of this factor was beyond the scope of this work.

Membrane Validation and Performance The studies here deal with a modified PAMPA method over a Franz cell apparatus. The biomimetic membrane (Franz-PAMPA) has been previously described by Corti and coworkers [5] as a modified version of that of Kansy et al. (1998) [36]. Mannitol and metoprolol were used as markers for the cutoff point between low and high permeability drugs. Membrane performance was assessed using 14 representative model drugs (Table 2) cited in the FDA BCS guidance [37]. Class I model compounds were caffeine, metoprolol, propranolol and verapamil. Class II model compounds were diclofenac, ibuprofen, naproxen and piroxicam. Class III model compounds were atenolol, cimetidine, ranitidine, and trimethoprim. Class IV model compounds were acyclovir, furosemide and hydrochlorothiazide. This classification was based on the permeability class indicated in the FDA guidance [37] and on solubility data from the literature [28,38]. Cumulative drug transport through Franz-PAMPA was plotted over 12 h and the Papp was calculated from the slopes obtained from linear regressions (Figure 3, Table 2).
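A minimal sketch of the slope-based calculation just described (flux J from the linear portion of the cumulative-amount curve divided by the diffusion area, then Papp = J/C0). The diffusion area and donor concentration follow the values quoted in the Methods (1.77 cm² and 10 mg/mL), but the time window treated as linear and the cumulative amounts in the example are assumptions for illustration only.

```python
import numpy as np

def papp_sink(t_h, q_ug, area_cm2=1.77, c0_ug_per_ml=10_000.0, linear=slice(2, 8)):
    """Apparent permeability (cm/s) from cumulative receptor amounts, sink assumption.
    t_h    : sampling times in hours
    q_ug   : cumulative amount of drug in the receptor compartment (micrograms)
    linear : indices treated as the steady-state portion of the curve (an assumption)
    """
    t_s = np.asarray(t_h, float) * 3600.0
    slope, _ = np.polyfit(t_s[linear], np.asarray(q_ug, float)[linear], 1)  # ug/s
    flux = slope / area_cm2                    # J, ug/(cm^2 s)
    return flux / c0_ug_per_ml                 # 1 ug/mL == 1 ug/cm^3, so the result is cm/s

# Illustrative (made-up) cumulative amounts at the sampling times used in the study.
t = [0.25, 0.5, 1, 2, 3, 4, 5, 6, 10, 12]
q = [2, 5, 11, 24, 37, 50, 62, 74, 118, 140]
print(f"Papp = {papp_sink(t, q):.2e} cm/s")
```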
Of the 21 compounds studied by Corti and coworkers and of the 14 compounds studied here, there were 11 common compounds tested in both studies: acyclovir, atenolol, caffeine, cimetidine, furosemide, hydrochlorothiazide, metoprolol tartrate, naproxen, propranolol, ranitidine, and trimethoprim. For these drugs Caco-2 Papp values were also surveyed from literature and compared here ( Table 2). For low permeability drugs (BCS III and IV, Table 2), the Papp coefficient found were consistently much lower than high permeability drugs. Franz-PAMPA, Caco-2 [20,39], and Corti membrane provided value ranges of 0.2-24.6 × 10 −6 cm/s, 0.1-83.0 × 10 −6 cm/s, and 3.2-45.5 × 10 −6 cm/s, respectively. Permeability of most drugs tested here showed Papp >1.0 × 10 −5 cm/s (Figure 4). Typically, PAMPA methods are affected by high variability, and therefore, data can be somehow noisy for poorly permeable drugs. Variability is also an issue that impacts permeability for Caco-2 [5] and other in situ [19] and in vivo [24] models. For low permeability drugs (Fa < 80%), Avdeef and coworkers (2003) [6] measured variability for more than 200 different drugs accounting for more than 600 measurements. Papp values close to 10 × 10 −6 cm/s showed variability of around 10%. Such error can increase slightly for higher Papp values but is larger for Papp < 0.1 × 10 −6 (60%), with 0.01 × 10 −6 values exhibiting variability of 100% or more. Although, currently BBB (blood brain barrier) [40] or Skin-PAMPA [41] methods can achieve higher precision and reproducibility with some other controlled protocols. Specific adjustments include setting incubation time as low as possible, increasing sensitivity of analytical methods, controlling membrane homogeneity either on the filter or among filters, besides the rationale for compounds dataset amongst others. Likewise, permeability of small hydrophilic compounds is frequently underestimated in PAMPA since the membrane has hydrophobic nature besides being a cell-free system [42]. For the FDA-listed drugs, PAMPA Papp displayed values ranged from 0.00 to 2.35 × 10 −5 cms −1 , indicating it was not sensitive enough to discriminate and rank poorly permeable compounds. In contrast, Franz-PAMPA showed values in a wider Papp range of 0.4-68.1 × 10 −6 cm/s. This could be tentatively explained due to the hydrophilic nature of membrane support and pH-dependent characteristics of the drugs [22,24,31]. Moreover, Franz cell stirring clearly reduces the unstirred water layer resistance in the system. Additionally, variability of Papp values was also addressed by the calculation methods. A more sophisticated analysis is done using Artursson's equation [15] for sink and non-sink conditions as well as checking the impact of extracting a permeability coefficient from data that are not at true steady state and, thus, possibly impacted by dose depletion. Note that for both the sink and the non-sink equation, Papp values showed a particularly good correlation between them (0.8984). Similarly, Papp values obtained by us showed to be very alike to values calculated according to Artursson's non-sink equation (Table 2, Figure 5). The reason is that we used the same systematic procedure, i.e., the best fit method through the linear portion, to calculate all the slopes characterizing an accurate permeability flow, so that the impact from dose depletion is considered not above average. As a result, all drugs got the same BCS classification in both methods. 
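The cutoff rule described earlier (a compound is called highly permeable when its Papp, under either the sink or the non-sink estimate, exceeds 0.8 times the metoprolol reference) is easy to encode. The drug names and Papp values below are placeholders rather than the study's data; only the metoprolol reference value is taken from the text.

```python
def bcs_permeability_class(papp, papp_metoprolol=1.61e-5, factor=0.8):
    """'high' if papp exceeds factor * metoprolol reference, else 'low' (values in cm/s)."""
    return "high" if papp > factor * papp_metoprolol else "low"

# Placeholder Papp values for two hypothetical compounds.
for name, papp in {"drug_A": 2.4e-5, "drug_B": 3.2e-7}.items():
    print(name, bcs_permeability_class(papp))
```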
In this context, the Franz-PAMPA profile mimics biological permeation in a graphical pattern related to permeation through Caco-2 cells (R 2 = 0.826). Obtained Papp values versus fraction of dose absorbed in humans (Fa%) showed a log-linear correlation (Figure 6), as also described by Zhu et al. [38] when analyzing the permeability performance of 93 commercial drugs across artificial membranes. As expected, Franz-PAMPA also showed a significantly improved log-linear correlation (R 2 = 0.6982) when the actively transported compounds ranitidine, trimethoprim, and verapamil were not incorporated in the regression analysis. In contrast, the Fa% versus Corti membrane correlation was linear (R 2 = 0.904). Such a discrepancy between Franz-PAMPA and Caco-2 reveals that the passive permeability of the tested drugs through the Corti membrane was greater and better suited, especially for low and moderate permeability drugs, as discussed elsewhere [39]. The PAMPA and Caco-2 techniques would be best suited for compounds with medium and high permeabilities. For low permeability compounds, small differences in measured Papp are expected to yield large differences in Fa% values, resulting in imprecise measurements.
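The log-linear correlations quoted above amount to an ordinary least-squares regression of Fa% on log10(Papp), reporting R². A small sketch with placeholder data follows; the study's actual Fa%/Papp pairs are those in Table 2 and Figure 6, not the numbers below.

```python
import numpy as np

def r2_fa_vs_log_papp(fa_percent, papp_cm_s):
    """R^2 of the linear regression Fa% ~ log10(Papp)."""
    x = np.log10(np.asarray(papp_cm_s, float))
    y = np.asarray(fa_percent, float)
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

fa = [95, 90, 80, 50, 30, 20]                   # placeholder values
papp = [3e-5, 2e-5, 9e-6, 2e-6, 6e-7, 3e-7]     # placeholder values
print(round(r2_fa_vs_log_papp(fa, papp), 3))
```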
The correlation to %Fa (R 2 = 0.890) was essentially unchanged. Currently, a promising biomimetic barrier also adapted to Franz diffusion cells, Permeapad™ [22], has been reported for six drugs concurrent with our model (acyclovir, atenolol, caffeine, ibuprofen, and metoprolol). Even if a fully comparative analysis was not straightforward, the BCS classification of most drugs (4 out of 5) showed to be identical, with a similar Papp rank order (Table 2).
Conclusions The Franz-PAMPA method provided a permeability pattern similar to those from Caco-2. Methodologically, the advantages of Franz-PAMPA over Caco-2 are the lower costs and simplicity of membrane preparation (e.g., reagents and artificial membrane are commercially available). Furthermore, the method is very versatile and could be transformed in a high-throughput in vitro method to detect and classify compounds absorbed by passive diffusion. Using metoprolol as a high permeability marker (Papp = 1.61 × 10 −5 cm/s; Figure 2), seven drugs were classified as highly permeable (best fit method): atenolol, caffeine, cimetidine, diclofenac, ibuprofen, naproxen, and propranolol ( Table 2). Only atenolol and cimetidine were misclassified as highly permeable drugs, relative to their prior literature classification as BCS 3 drugs. Additionally, 10 out of 17 drugs were classified as low permeability drugs in Franz-PAMPA. Nevertheless, only naproxen, piroxicam, and verapamil (3 out of 10) had their permeability underestimated according to BCS, as they performed as low permeability drugs instead of BCS2 drugs. Summing up, a potential limitation of our study is that the Papp values were calculated with an equation in which the underlying assumptions are constant donor concentration and sink conditions. In order to account for that, we also did the calculations to estimate permeability values under non-sink conditions. The obtained values are about the same compared with the true values (i.e., assuming donor concentration change and non-sink conditions). Although the relative estimation error does change across high versus low permeability compounds [29], the practical implications for predicting oral fraction absorbed would only be a "shift" to the left on the abscissa. In the case of a direct correlation with Caco-2 values, it would be reflected in a different slope, but it would not change the significance of the regression line. In the case of the use of the apparent permeabilities for classification of compounds, the reference value of metoprolol is also underestimated, so the classification outcome would not be changed [29]. As a final comment, the ability of Franz-PAMPA to classify drugs was good and can be potentially challenged at different pH conditions to predict intestinal permeability of drugs showing passive transport. Eventually, the Franz-PAMPA cell diffusion can be modulated in lipid composition and may be a suitable alternative for studying other biological barriers such as blood-brain barrier, skin, and mucosal barriers as buccal or nasal. The current dataset adds valuable information for future analysis of drug-molecular interactions at the lipid layer and in silico model development. Additionally, all apparatus and supplies experimentally used on Franz-PAMPA are commercially available and affordable to facilitate drug discovery method application.
7,182
2020-10-01T00:00:00.000
[ "Biology", "Medicine" ]
Validation of an ultra-precision optical coordinate measuring machine for the measurement of free-form objects in industrial processes With an increasing capability for manufacturing complex free-form objects, the requirements for metrology are steadily increasing both in terms of accuracy and in the speed and point-cloud density of the measurements. Until recently, Coordinate Measuring Machines (CMMs) were restricted to relatively simple geometries, often as a result of their relatively long measurement set-up and acquisition times and the impact of environmental variations over this period. This article introduces a new, three-dimensional, ultra-precision optical coordinate machine for the measurement of free-form components and goes on to describe how the authors are adapting an industry accepted, international standard (ISO 10360) to fit its unique measurement approach. It goes on to describe how the adoption of this standard has twin benefits: firstly, it provides a unified approach to calibration and validation of the new CMM with regards to accuracy, repeatability and reproducibility of form and roughness measurements; secondly, end-users gain confidence that the instrument’s results can be compared ‘like for like’ with older metrics generated by traditional equipment, easing integration of the new technology into the production line. Résumé: Avec les progrès technologiques dans la fabrication des objets 3D de forme complexe, les exigences métrologiques croissent à la fois en terme de précision, de vitesse d’acquisition et de taille de nuage de points. Jusqu’à récemment les machines de métrologie CMM étaient limitées à la mesure de formes relativement simples notamment du fait du temps de mise en œuvre et d’acquisition et de l’impact résultant des variations environnementales. Cet article présente une nouvelle machine de métrologie 3D sans contact ultra-précise pour la mesure des objets de forme complexe. Les auteurs décrivent comment ils s’appuient sur le standard international ISO 10360 pour traiter les données de mesure. Cette approche a deux avantages. Elle fournit une approche unifiée de calibration et de validation d’une nouvelle machine CMM au regard de sa précision, répétabilité et reproductibilité des formes et rugosités. Les opérateurs sont en confiance par cette capacité à comparer les résultats obtenus avec des mesures faites par des moyens conventionnelles. Cela rend le process d’intégration de ce nouveau moyen plus simple et plus rapide sur la ligne de Introduction Manufacturing companies have been using Coordinate Measurement Machines (CMM) as a way of product verification since the 1960's [1].These machines typically comprise of a touch probe which follows a pre-programmed path, probing out point measurements on the sample that can then be compared with key dimensions from the design specification.Figure 1 shows a typical Cartesian CMM layout. 
The mechanical arrangement of a traditional CMM, coupled with the complexities of programming its tool-path, meant that measurement times were long even when gathering a relatively small number of sample points.Long measurement times leave the data prone to the influence of environmental changes, such as temperature, limiting the point density and number of features that could be inspected.Ultimately the number of parts that may be sampled from the production line is likely to be small in comparison to the batch size as a result of the set-up and measurement times.With the increasing complexity of manufactured parts it has become ever harder to program CMMs; furthermore the more detailed features of the parts being verified requires even longer measurement times exacerbating the effect of temperature fluctuations.Some improvements to data density were achieved by continuous sampling of the probe's position as it traces a line on the sample, however, a new approach was required if metrology was to keep pace with manufacturing advances.This has led to the development of more sophisticated measurement devices, such as non-contact systems [2,3] and multiaxis machines with more degrees of freedom.For instance, the 5-axis 'OmniLux' platform (figure 2).These machines deliver a step-increase in measurement speed and flexibility.The highly complex nature of a multi-axis instrument brings many difficulties, with axes alignment, and their inter-dependencies when moving, creating numerous challenging problems.In addition, any production lines adopting this new generation of instruments are sometimes left with a dilemma; their current inspection process has been geared towards data measured by traditional CMMs and in order to compare new and old inspection results they need the confidence that new and old are alike. One solution to this is to design the calibration and verification procedures of the new instruments around the same standard used by the traditional CMMs, namely IS0 10360 [4], thus allowing a direct comparison of performance and repeatability on a 'like for like' basis.This article describes an approach to adapting this standard to fit non-Cartesian measurement systems. Mechanical Setup Inspection of a surface using a point-probe (be it contact or non-contact) requires the probe to be traversed in 3D space across the surface; in addition, the design of many probes places a constraint on the angle which the surface must be approached from (i.e.some sensors require inspection at an angle very close to the surface normal and most sensors have a physical fixturing which would lead to collision conditions at shallow angles of approach).This requires a minimum number of degrees of freedom (DOF) between the probe and the part under inspection. The traditional approach of conventional CMMs has been to apply the necessary DOFs to the probe, and have the part fixed with respect to 'ground' (the machine base co-ordinate system).This is the mechanical configuration pre-supposed by the ISO 10360 standard for CMM verification and calibration. 
There are certain advantages to this approach, especially if the part is large and cumbersome, however, in many cases it is more desirable for the DOFs to be divided between the probe and the part, with both moving with respect to ground.Often, it is desirable to achieve complete 360° inspection of the part in a single pass (where repositioning and remeasuring would introduce unacceptable additional errors or datum inconsistencies).For roundness measurements, or any profiling of rotationally symmetric features, the ability to simply rotate the part can decrease measurement time by an order of magnitude, or better (table 1).For this reason, some inspection platforms include one or more degrees of freedom on the part-side of the system, rather than the probe side.This may be a single rotary platform, a rotary and linear 2D table, or a combination of rotary DOFs. Translating the ISO Standard The optical CMM described in this paper is an example of such a system, where both part and probe have DOFs with respect to ground; this facilitates excellent measurement cycle-times well suited to the demanding requirements of high part-turnover typical of a production environment.However, it does mean that the machine deviates in a number of respects from the mechanical configuration envisaged by ISO 10360. This presents some important and interesting implications for demonstrating conformance with the ISO standard.In many ways, the optical CMM is a more capable and flexible platform than the ISO standard envisages.The system may be thought of as having two distinct articulation volumes, superimposed onto one another (figure 3).To adopt the terminology of the ISO standard, this system features a 'translatory' (Tr) probe, with an 'articulating' (Art) target part: -Probe (Tr): The conventional Cartesian volume, formed by the movement of the gantry.This is expressed in x,y,z co-ordinates and represented by a cuboid defining the range-of-motion limits (the basic measurement volume envisaged by ISO 10360). -Target (Art): A secondary, non-Cartesian volume, formed by the movement loci of the two rotary axes; this volume is essentially in spherical co-ordinates, with the third rotary DOF constrained. Such a system is capable of more sophisticated profiling than the basic provisions of the ISO standard imply, and whilst a simpler verification of the Cartesian probe measurement space would meet the basic requirements of the standard, it is of course better to perform the most rigorous characterisation and calibration possible, even if this extends beyond the standards' current limitations. To address this challenge, a rigorous methodology is required to characterise the true articulation envelope of the machine.Whilst the details of this characterisation are specific to the machine in question, there are a number of general principals which would be applicable to other comparable measurement systems. System Calibration The ISO standard provides basic specific tests for calibration and verification.These are used as a first stage in system verification. Probing errors are measured using a standard test sphere, as usual.The test sphere is positioned close to the centre of the Cartesian inspection volume, and rotated to expose different surfaces for profiling, as shown in figure 4.This basic inspection provides useful information on the sensor probe calibration (figure 5) but does not give full information about the calibration of the overall system. 
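The probing-error part of the ISO 10360 test reduces to fitting a least-squares sphere to the probed points and reporting the spread of the radial residuals. The sketch below is a generic implementation of that calculation, not the instrument's actual software, and the linear least-squares formulation is only one of several equivalent ways to fit the sphere; the example data are synthetic.

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit; points is an (n, 3) array of x, y, z."""
    p = np.asarray(points, float)
    A = np.c_[2.0 * p, np.ones(len(p))]
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

def probing_error(points):
    """Form-type probing error: range (max - min) of radial residuals about the fitted centre."""
    centre, _ = fit_sphere(points)
    r = np.linalg.norm(np.asarray(points, float) - centre, axis=1)
    return r.max() - r.min()

# Example: 25 synthetic probed points on a nominal 12.5 mm radius sphere with small noise.
rng = np.random.default_rng(0)
theta, phi = rng.uniform(0, np.pi, 25), rng.uniform(0, 2 * np.pi, 25)
pts = 12.5 * np.c_[np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)]
pts += rng.normal(scale=1e-4, size=pts.shape)      # ~0.1 um noise
print(f"probing error ~ {probing_error(pts) * 1e3:.3f} um")
```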
Figure 5: Calibration sphere measurement errors for optical CMM with air bearings (top) and mechanical bearings (below).Sphere is Ø25mm, grade 3 (<80nm); 95% measurement variation is ~150nm (air), ~870nm (conventional).Comprehensive characterisation of the machine is ongoing, but these results are typical of achieved performance. The standard length measurements in ISO 10360 are concerned with capturing variations across the Cartesian translatory volume of the probe.This is essentially independent of the target, and for the OmniLux can be performed as on a CMM, with a fixed target mounted either directly to the machine base (granite), or else mounted to the rotary assembly, but with the assembly locked.The probe orientation remains fixed relative to the machine co-ordinate system as the sensor traverses the target. In reality, the rotary assembly volume must also be characterised, which is beyond the scope of ISO standard.To perform "length" measurement tests in this rotary axis system, a cage assembly was used to mount multiple gauge targets at pre-specified angles (figures 6 & 7).The cage is mounted to allow two rotations (around the mounting axis, and in the gauge block plane).By this means, the blocks are measured in multiple rotational positions.This test involves minimum movement of the probe on the Cartesian axis system; attention is focussed on the rotary axis assembly.The gauge cage pictured consists of five grade-1 calibration blocks of known and certified length.The nominal and certified length for each block along with the results of three system measurements are shown in table 2. Further Work At present, a procedure has been used which independently verifies the Cartesian measurement volume (as per ISO 10360), and additionally the rotational measurement volume. Ultimately, the intention is to include an additional verification test which combines these two measurement volumes in a single series of measurements.This will provide an additional level of confidence in the two earlier tests, and comprehensively demonstrate the performance of the overall inspection system as a single entity. There are always a wide number of factors which may affect measurement results, including thermal effects, part deformation, control errors, signal noise, and system dynamics amongst many others.Work is ongoing to improve understanding of all of these factors for these measurement machines. Conclusions The additional verification procedures described in this overview provide the missing characterisation which the ISO standard alone fails to test.The ISO standard provides solid verification of the Cartesian space with the gauge-cage providing the missing verification of the rotary space for CMM systems with a mobile target part under inspection. Ultimately, the lessons learned from developing new systems and the methods to calibrate and verify these systems should be fed back into the standards, to ensure that standards can keep pace with the constantly-moving technical field. Figure 1 . Figure 1.Typical layout of a CMM showing its mechanical gantry and touch probe stylus Figure 2 . Figure 2. Measurement of an artificial knee using a 5-axis optical CMM. CartesianFigure 3 : Figure 3: Combining DOFs on the probe-side and partside leads to an inspection volume which is more complex than the single Cartesian space envisaged by ISO 10360. Table 1 . Comparison of CMMs with static and mobile target parts. 
Table 2: Nominal length, certified centre-measured deviation and measured values (×3) for the gauge cage assembly measurement. For this test, blocks marked '*' were inserted at a 45° angle in the cage.
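The gauge-cage test then compares each measured length against the block's certified value. A compact way to tabulate per-run errors and the worst-case deviation is sketched below; the block names and numbers are placeholders, since Table 2 itself is not reproduced here.

```python
def length_errors(certified_um, measured_um):
    """Per-run length errors (measured - certified) and the worst absolute deviation, in micrometres."""
    errors = {name: [round(m - certified_um[name], 3) for m in runs]
              for name, runs in measured_um.items()}
    worst = max(abs(e) for runs in errors.values() for e in runs)
    return errors, worst

certified = {"block_1": 10_000.0, "block_2": 20_000.0}                 # placeholder values
measured = {"block_1": [10_000.2, 9_999.9, 10_000.1],
            "block_2": [20_000.3, 20_000.1, 19_999.8]}
errs, worst = length_errors(certified, measured)
print(errs, "worst:", worst)
```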
2,766.4
2015-01-01T00:00:00.000
[ "Engineering" ]
GenomeBlast: a web tool for small genome comparison Background Comparative genomics has become an essential approach for identifying homologous gene candidates and their functions, and for studying genome evolution. There are many tools available for genome comparisons. Unfortunately, most of them are not applicable for the identification of unique genes and the inference of phylogenetic relationships in a given set of genomes. Results GenomeBlast is a Web tool developed for comparative analysis of multiple small genomes. A new parameter called "coverage" was introduced and used along with sequence identity to evaluate global similarity between genes. With GenomeBlast, the following results can be obtained: (1) unique genes in each genome; (2) homologous gene candidates among compared genomes; (3) 2D plots of homologous gene candidates across all pairwise genome comparisons; and (4) a table of gene presence/absence information and a genome phylogeny. We demonstrated the functions in GenomeBlast with an example of multiple herpesviral genome analysis and illustrated how GenomeBlast is useful for small genome comparison. Conclusion We developed a Web tool for comparative analysis of small genomes, which allows the user not only to identify unique genes and homologous gene candidates among multiple genomes, but also to view their graphical distributions on genomes, and to reconstruct genome phylogeny. GenomeBlast runs on a Linux server with 4 CPUs and 4 GB memory. The online version of GenomeBlast is available to the public by using a Web browser with the URL . Background With the rapidly increasing availability of complete genome sequences, genome-wide sequence comparison has become an essential approach for finding homologous gene candidates, for identifying gene functions, and for studying genome evolution [1,2]. Genome comparison can be used to find genes that characterize unique features in a given organism such as specific phenotypic variation or particular pathogenicity [3]. Meanwhile, genome phylogenies based on gene content or gene order shed new light on the construction of the Tree of Life [4,5]. Currently, many tools such as MUMmer and Artemis are available for comparative genomic analysis [2,[6][7][8]. These tools can be used for pairwise genome alignment (e.g., [3,9,10]) as well as multiple genome alignment (e.g., [11,12]). Unfortunately, most of them are not applicable for the identification of unique genes in a given set of genomes, since the tools were developed for homologous gene detection in most cases. Additionally, only a few tools can be used for the study of phylogeny from the genomic point of view [13]. The BLAST (Basic Local Alignment Search Tool) algorithm as well as other anchor-based algorithms are commonly used for the identification of homologous gene candidates across diverse genomes [2,14]. Although the BLAST algorithm has advantages such as fast computation and accurate results in detecting local highly similar sequence regions, it has two drawbacks when used to identify global sequence similarity: (1) genes that reside in local highly similar regions can be erroneously identified as homologue candidates; and (2) multiple local hits against the same subject sequence need to be combined to obtain the overall aligned region between the query and subject sequences. In order to solve these problems, we developed a Web tool, GenomeBlast.
It performs multiple genome comparisons, identifies unique genes as well as shared (possibly homologous) genes among the genomes, and reconstructs the genome phylogeny. Identification of homologous gene candidates is done by detecting global sequence similarity using alignment coverage information. This paper describes its architecture, algorithms, and implementation. We demonstrate the practical use of GenomeBlast with an example using herpesviral genomes, and discuss its future improvement plan. Architecture The architecture of GenomeBlast is illustrated in Figure 1. In addition to input and output modules, it consists of sequence extraction, database formatting, sequence comparison, output filtering, and visual presentation of results. The inputs to GenomeBlast are genome sequences in the GenBank format, each in a single file. Each genome sequence record needs to include the FEATURE table with coding sequence (CDS) annotations. Such data can be downloaded from public databases such as the National Center for Biotechnology Information (NCBI) [15]. Protein sequences are extracted from translation records in the CDS annotations. The formatdb program is used to generate protein database files from the protein dataset for each genome. These protein database files can be used with the blastp program. The all-against-all blasting strategy is used for genome comparison. Each of the protein sequences from one genome is compared against protein sequences from all other genomes. The BLAST results are then filtered and presented in various outputs. Three-level outputs generated from GenomeBlast include: (1) candidates for unique genes and homologous genes; (2) 2D plots of homologous gene candidates for pairwise genome comparisons; (3) a table of gene presence/absence information; (4) genome phylogeny; and (5) a summary table for multiple genome comparison. Coverage calculation We used the blastp algorithm for protein sequence comparison. Since the BLAST search may result in identifying only short local similarities (short local similarities can be obtained from any conserved domains/regions even if the sequences are not derived from homologous genes) or in identifying multiple short similarities from the same CDS (Figure 2), we introduced a parameter called "coverage" to detect gene-wide sequence similarity. The percent alignment coverage (c) is calculated using the following equation: $c = \left( \sum_{i=1}^{k} L_i - \sum_{i<j} L_{i,j} \right) / L_{query} \times 100\%$, where $L_i$, $L_{i,j}$, and $L_{query}$ represent the alignment length for the i-th hit, the overlap length between hits i and j, and the query length, respectively; and k is the total number of hits to the same subject sequence for a given query sequence. Identification of homologous gene candidates In order to identify homologous gene candidates and to exclude related genes that share similarities only within limited regions, GenomeBlast can use a combination of the following thresholds: i) Coverage. The coverage is the length of aligned regions calculated as above. The default threshold is 50%. ii) Identity. The identity is the proportion (%) of identical amino acid pairs in the aligned region. The default threshold is 30%. iii) E-value. The E-value (expectation value) is the number of different alignments with scores equivalent to or better than the scores that are expected to occur in a database search by chance. The default threshold is 10. In the default setting, GenomeBlast uses only the coverage and identity, but not the E-value threshold.
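The coverage computation can be made concrete with a minimal sketch (this is not GenomeBlast's actual Perl implementation): it merges the overlapping query intervals of all hits against one subject so that shared regions are counted once, which is equivalent to subtracting the overlap lengths $L_{i,j}$ in the formula above.

    def alignment_coverage(hit_intervals, query_length):
        """Percent of the query covered by BLAST hits to one subject sequence.

        hit_intervals: list of (start, end) query coordinates, 1-based inclusive.
        Overlapping hits are merged so that shared regions are only counted once.
        """
        merged = []
        for start, end in sorted(hit_intervals):
            if merged and start <= merged[-1][1] + 1:
                merged[-1][1] = max(merged[-1][1], end)   # extend the previous block
            else:
                merged.append([start, end])
        covered = sum(end - start + 1 for start, end in merged)
        return 100.0 * covered / query_length

    # Example: two partially overlapping hits on a 300-residue query
    print(alignment_coverage([(10, 120), (100, 180)], 300))  # 57.0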
Genome phylogeny reconstruction Based on the results of multiple genome comparison, the presence and absence of each CDS is tabulated with 1s (for presence) and 0s (for absence) for each genome. Using this binary character matrix, the maximum parsimony method [16] with the branch-and-bound tree search algorithm is used to infer genome phylogeny. The branch-and-bound algorithm effectively searches the possible tree topologies and guarantees finding the most parsimonious phylogeny [17]. Backend programs and the Web server The blastp program in the BLAST stand-alone package ftp://ftp.ncbi.nih.gov/blast/ was used for protein sequence comparison. The PENNY program of the PHYLIP package implements the maximum parsimony phylogenetic method using the branch-and-bound tree search algorithm and a binary character data matrix [18]. The data processing/analysis and integration of the blastp and PENNY programs into GenomeBlast were implemented with the PERL programming language. The Web applications were developed using PHP. GenomeBlast runs on a Linux server, which has four processors (2.0 GHz each), 4 GB memory, and 400 GB disk space. Results We will use thirteen herpesviral genomes described in [4] as an example, and go through GenomeBlast step by step to demonstrate its functions (Figure 1). The first step is to set up blastp options. We did not choose the filter option to mask off low compositional complexity or mask for the lookup table. We used the default values provided in GenomeBlast (E-value: 10, The architecture of GenomeBlast Figure 1 The architecture of GenomeBlast. GenomeBlast consists of sequence extraction, database formatting, sequence comparison, output filtering, and visual presentation of results. The inputs to GenomeBlast are genome sequences in the GenBank format, each in a single file. The outputs include three-level results: 1) putative unique genes and homologous genes; 2) 2D plots of homologous gene candidates for pairwise genome comparisons; 3) a table of gene presence/absence information, genome phylogeny, and a summary table for multiple genome comparison. The next step is to upload genome sequence files. We set up the number of genomes to compare as 13 and clicked the OK button. We then uploaded the 13 herpesviral genome sequence files, which were originally downloaded from NCBI in the GenBank format. The average size of these genomes was approximately 150 kb. Formatting databases and performing all-against-all blastp comparison took 5 minutes 16 seconds on our server. The third step is to set up parameters for gene comparisons. We used the default threshold values, i.e., 50% coverage and 30% identity for determining homologous CDS. The last step is to view genome comparison results at three different levels, i.e., single-genome, pairwisegenome, and multiple-genome levels. We chose two alpha viruses, EBV and EHV2, to show functions available for the single-genome level analysis. Note that any number of genome combinations can be used for unique gene or homologous gene candidate identification. A total of 45 and 38 unique gene candidates were found respectively in EBV and EHV (Figure 3), whereas 82 homologous CDS candidates were identified between these two genomes ( Figure 4). For the pairwise-genome comparisons, any two genomes can be chosen and a 2D plot of distribution of homologous gene candidates is generated. 
We clicked the hyperlink EBV.gb-EHV2.gb (alternatively, we can choose from the drop-down menu) and a 2D plot was displayed in a new window as shown in Figure 5. Interestingly, the plot suggests that genomic inversion might have occurred between these two viruses. Clicking each dot in the plot, we can see its corresponding information including the query name, subject name, and % identity. Of the 82 homologous CDS candidates, only two proteins were found to have sequence identities higher than 80% (colored in red), 20 proteins had identities between 50% and 80% (colored in pink), and the rest had identities between 30% and 50% (colored in yellow). At the multiple-genome level, we can obtain the binary gene presence/absence table (not shown) and the genome phylogeny as shown in Figure 6A. The phylogeny indicates that there are three virus groups, which is more clearly shown in the phylogeny redrawn with the TreeView program [19] (Figure 6B). This result showing three groups of herpesviruses is consistent with previous reports [1,4]. Discussion GenomeBlast has several unique features compared with other comparative genomics tools [2,3,[9][10][11][12]20,21]. Instead of focusing on generating alignments, GenomeBlast identifies unique and shared, possibly homologous, CDS sets among multiple genomes and presents the information in a summary table. It generates 2D plots depicting the distribution of homologous CDS between given pairs of genomes. In order to identify possible homologous CDS, GenomeBlast uses the blastp sequence similarity search program. Combining the length of alignment coverage with the % identity of the aligned region, it evaluates gene-wide similarity. Figure 2 illustrates a possible output generated by the blast program: the blast program may find two or more highly similar regions of the same subject sequence, which need to be combined before we can evaluate global sequence similarity between the query and the subject sequence. This combination of coverage and identity can better identify homologous CDS candidates. GenomeBlast also provides flexibility in choosing different combinations of parameters and their threshold values. Once the blast search is done, there is no need for redoing the blast search, and the user can return to the parameter-setting page to reset thresholds for identifying homologous gene candidates. GenomeBlast reconstructs genome phylogeny based on gene content using the maximum parsimony method. In this context, GenomeBlast overlaps with the Web server SHOT [13]. SHOT also includes a gene-order phylogeny method. Whereas SHOT can be used for only a certain set of genomes, GenomeBlast offers more flexibility. Montague and Hutchison [4] reconstructed whole-genome phylogenies for 13 herpesviral genomes based on the Clusters of Orthologous Groups (COGs) data [22]. They used several computer programs/packages before reconstructing the genome phylogenies, including the Wisconsin Package (GCG) [23], BLAST programs, and PAUP (Phylogenetic Analysis Using Parsimony) [24]. We performed the same analysis using GenomeBlast alone and our genome phylogeny agreed with their result [4]. This demonstrates that GenomeBlast is a very useful application for small genome comparison. Our plan to extend functions in GenomeBlast includes automatic CDS extraction/translation, use of the FASTA sequence format, DNA-level analysis using blastn, and gene-order based genome phylogeny.
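As an aside, the gene presence/absence table that feeds the parsimony step described in the Methods can be illustrated with a minimal sketch. This is not GenomeBlast's actual Perl/PHP code, and the genome names and gene-cluster identifiers are placeholders; it simply tabulates each homologous gene cluster as 1 (present) or 0 (absent) per genome, which is the kind of binary character matrix a parsimony program such as PENNY in the PHYLIP package takes as input.

    # Illustrative only: genome names and gene-cluster IDs are placeholders.
    homologs = {
        "genomeA": {"g1", "g2", "g4"},
        "genomeB": {"g1", "g3", "g4"},
        "genomeC": {"g2", "g3"},
    }

    all_genes = sorted(set().union(*homologs.values()))
    matrix = {genome: "".join("1" if g in genes else "0" for g in all_genes)
              for genome, genes in homologs.items()}

    # One 0/1 character row per genome, ready to be formatted for a parsimony program
    for genome, row in matrix.items():
        print(f"{genome:<10}{row}")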
Figure 3: Output window of putative unique genes. Two alpha herpesviruses, EBV and EHV2, were selected for comparison. A total of 45 and 38 unique CDS candidates were found in EBV and EHV2, respectively. GenomeBlast is suitable for small genome comparison; we do not expect it to compare large genomes, such as the human and mouse genomes, because such computation with large genomes is extremely expensive and would take several days or even weeks to complete. Any restrictions to use by non-academics: yes, contact the author GL for details. Figure 5: A 2D plot of homologous gene candidates in genomes. EBV and EHV2 were selected for comparison. The plot shows the distribution of homologous CDS on the EBV and EHV2 genomes. The threshold values used for homologous CDS identification and the color scheme for identity representation are illustrated. Figure 6: Genome phylogeny among the herpes viruses. The 13 herpesviral genomes described in [1,4] were used for phylogeny inference. Panel A was generated from GenomeBlast, whereas Panel B was produced with the TreeView program using the same tree file from GenomeBlast.
3,186.4
2006-06-20T00:00:00.000
[ "Biology" ]
Astilbin Inhibits High Glucose-Induced Inflammation and Extracellular Matrix Accumulation by Suppressing the TLR4/MyD88/NF-κB Pathway in Rat Glomerular Mesangial Cells Diabetic nephropathy (DN) is characterized by inflammatory responses and extracellular matrix (ECM) accumulation. Astilbin is an active natural compound and possesses anti-inflammatory activity. The aim of this study was to evaluate the anti-inflammatory effect of astilbin on high glucose (HG)-induced glomerular mesangial cells and the potential mechanisms. The results showed that HG induced cell proliferation of HBZY-1 cells in a time-dependent manner, and astilbin inhibited HG-induced cell proliferation. The expression and secretion of inflammatory cytokines, including interleukin-6 (IL-6) and tumor necrosis factor alpha (TNF-α), and ECM components, including collagen IV (Col IV) and fibronectin (FN), were induced by HG. Moreover, TGF-β1 and CTGF were also induced by HG. The induction by HG on inflammatory response and ECM accumulation was inhibited after astilbin treatment. Astilbin treatment also attenuated HG-induced decrease in expression of matrix metalloproteinase (MMP)-2 and MMP-9. The TLR4/MyD88/NF-κB pathway was activated by HG, and the inhibitor of TLR4 exhibited the same effect to astilbin on reversing the induction of HG. TLR4 overexpression attenuated the effect of astilbin on HG-induced inflammatory cytokine production and ECM accumulation. The results suggested that astilbin attenuated inflammation and ECM accumulation in HG-induced rat glomerular mesangial cells via inhibiting the TLR4/MyD88/NF-κB pathway. This work provided evidence that astilbin can be considered as a potential candidate for DN therapy. INTRODUCTION Diabetic nephropathy (DN) is a kind of kidney disease that affects approximately 25% of the patients with type 2 diabetes (Lytvyn et al., 2016;Tesch, 2017). DN is one of the most common causes of end-stage renal disease (ESRD), and the patients with ESRD often require hemodialysis or even kidney transplantation to recover the kidney function (Tang et al., 2015). There are a variety of risk factors that promote the development and progression of DN, including long duration of diabetes, elevated glucose levels, high blood pressure, and dyslipidemia (Tziomalos and Athyros, 2015). Increasing studies indicate that identification and management of risk factors for DN is of paramount importance (Tziomalos and Athyros, 2015). Elevated glucose level is one of the main risk factors for the development and progression of DN (Tziomalos and Athyros, 2015). High blood sugar may lead to the formation of advanced glycation end products, which induces inflammation in the kidney and promotes the development of DN (Tziomalos and Athyros, 2015). Besides, excessive accumulation and deposition of extracellular matrix (ECM) is the major pathological alteration in DN, which results in the expansion of mesangial matrix, thickening of glomerular basement membrane and tubulointerstitial fibrosis (Dugbartey, 2017). Therefore, inhibiting inflammation and ECM accumulation is important for the management of DN. Astilbin ( Figure 1A) is an active natural compound belonged to flavonoid (Huang and Liaw, 2017). It is isolated from many kinds of herbs such as the rhizome of Smilax china L. (Smilaceae), and has been reported to possess various activities including antiinflammatory and immunoregulatory effects (Meng et al., 2016;Yu et al., 2017). 
Recently, astilbin was reported to protect mice from kidney injury via regulating oxidative stress and the inflammatory response in an in vivo study. However, the role of astilbin in DN remains unknown. In the present study, the role of astilbin in high glucose (HG)-induced glomerular mesangial cells and the underlying mechanism were investigated. We found that astilbin attenuated HG-induced inflammatory responses and ECM accumulation by inhibiting the TLR4/MyD88/NF-κB pathway. This work provided evidence that astilbin could be considered as a new candidate for DN therapy. MTT Assay HBZY-1 cells were seeded into a 96-well plate at a density of 1 × 10^4 cells/well. Cells were cultured under normal glucose condition (NG, 5.5 mM D-glucose), mannitol condition (NM, 5.5 mM D-glucose + 24.5 mM mannitol), or high glucose condition (HG), in the presence of 10 and 20 µg/ml astilbin (purity >98%, Tauto Biotech Co., Ltd., Shanghai, China), for 12, 24, or 48 h. The concentrations of astilbin used were based on our previous study (Chen et al., 2018). Cells were then incubated with 100 µl MTT solution (0.5 mg/ml) for 4 h at 37 °C. Subsequently, DMSO was added to dissolve the purple crystals and the absorbance was measured at 570 nm using a microplate reader (Bio-Rad Laboratories, Inc., Hercules, CA, United States). Each experiment was performed three times in triplicate. Enzyme-Linked Immunosorbent Assay (ELISA) The supernatant of HBZY-1 cells after different treatments was collected, and the contents of IL-6, TNF-α, Col IV, FN, TGF-β1, and CTGF were measured by ELISA using commercial kits (CUSABIO, Wuhan, China) in accordance with the manufacturer's instructions. Each experiment was performed three times in triplicate. Western Blot Cell extracts were prepared using radio immunoprecipitation assay (RIPA) buffer. Then the protein concentration of the cell extracts was measured using a BCA protein assay kit (Pierce Biotechnology, Rockford, IL, United States). The protein samples were separated by 10% SDS-PAGE and transferred onto PVDF membranes. After blocking with 5% (w/v) non-fat milk for 1 h at 37 °C, the membranes were incubated with primary antibodies against TLR4, MyD88, p-p65, p65, matrix metalloproteinase (MMP)-2, MMP-9, and β-actin (Abcam, Cambridge, MA, United States) at 4 °C overnight. Then the proteins were probed by incubating with HRP-conjugated secondary antibody for 2 h at 37 °C. Finally, the bands were detected using ECL reagent (Millipore, Billerica, MA, United States). Each experiment was repeated three independent times. FIGURE 1 | Effect of astilbin on HG-induced proliferation of HBZY-1 cells. Cells were cultured under normal glucose condition (NG, 5.5 mM D-glucose), mannitol condition (NM, 5.5 mM D-glucose + 24.5 mM mannitol), or high glucose condition (HG), in the presence of astilbin (0, 10, and 20 µg/ml), for 12, 24, or 48 h. (A) Chemical structure of astilbin. Cell proliferation was detected using the MTT assay after incubation for 12 (B), 24 (C), and 48 h (D). * P < 0.05 vs. HG-stimulated group (without astilbin). Statistical significance was determined by one-way ANOVA. Statistical Analysis Data are expressed as mean ± SD. Comparisons between groups were performed using one-way ANOVA for multiple comparisons and Student's t-test for two groups. Statistical analysis was performed using SPSS version 13.0 (SPSS, Chicago, IL, United States). A difference was considered significant when the p-value was less than 0.05.
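For readers who prefer an open-source alternative to SPSS, the comparisons described above (one-way ANOVA across several groups, Student's t-test between two groups) can be run along the following lines; this is an illustrative sketch with synthetic triplicate readings, not the study's data or its exact analysis script.

    # Illustrative only: synthetic triplicate readings, not the study's data.
    from scipy import stats

    control = [1.00, 0.97, 1.03]          # NG group (normalized readout)
    hg      = [1.62, 1.70, 1.66]          # HG group
    hg_ast  = [1.21, 1.18, 1.25]          # HG + astilbin group

    f_stat, p_value = stats.f_oneway(control, hg, hg_ast)
    print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

    # Two-group comparison with Student's t-test, as described in the Methods
    t_stat, p_two = stats.ttest_ind(hg, hg_ast)
    print(f"t-test HG vs HG+astilbin: p = {p_two:.4f}")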
Astilbin Inhibited HG-Induced Proliferation of HBZY-1 Cells To evaluate the effect of astilbin on HG-induced proliferation of HBZY-1 cells, cells were cultured under NG, NM, or HG in the presence of astilbin (0, 10, and 20 µg/ml) for 12, 24, and 48 h. The results in Figure 1B showed that HG and astilbin did not affect the proliferation of HBZY-1 cells at the incubation time of 12 h. However, after incubation for 24 or 48 h, the proliferation of HBZY-1 cells was significantly induced by HG, indicating that HG induced cell proliferation in a time-dependent manner. The induction was inhibited by astilbin (Figures 1C,D). Based on these results and a previous study, 24 h was selected as the treatment time in the following experiments. Astilbin Decreased Inflammatory Cytokine Production Under HG Condition It has been reported that HG could induce inflammatory responses, thus the mRNA and protein levels of IL-6 and TNF-α were detected by qRT-PCR and ELISA. As shown in Figures 2A,B, HG induced the mRNA levels of IL-6 and TNF-α in HBZY-1 cells, while astilbin (10 and 20 µg/ml) attenuated the induction. In addition, the contents of IL-6 and TNF-α in the cell supernatant were increased under HG condition, and the increase was inhibited by astilbin treatment (Figures 2C,D). FIGURE 2 | Effect of astilbin on HG-induced inflammatory cytokine production. HBZY-1 cells were treated with 10 and 20 µg/ml of astilbin under HG condition for 24 h. The mRNA levels of IL-6 (A) and TNF-α (B) in HBZY-1 cells were detected by qRT-PCR. The secretion levels of IL-6 (C) and TNF-α (D) in cell supernatant were detected by ELISA. # P < 0.05 vs. Con (control group). * P < 0.05 vs. HG-stimulated group. Statistical significance was determined by one-way ANOVA. Astilbin Suppressed HG-Induced ECM Accumulation ECM accumulation is an important pathological change of DN and plays a crucial role in the development of renal fibrosis (Kolset et al., 2012). As shown in Figures 3A,B, the mRNA levels of Col IV and FN were increased in HBZY-1 cells under HG condition. Treatment with astilbin attenuated the effect of HG on the mRNA levels of Col IV and FN. Besides, HG induced the secretion of Col IV and FN into the cell supernatant, while astilbin treatment inhibited the induction (Figures 3C,D). Astilbin Inhibited the Expression of TGF-β1, CTGF, and MMPs TGF-β1 is an important regulatory factor during renal fibrosis, which can induce ECM accumulation (Yokoyama and Deckert, 1996). CTGF is a downstream factor of TGF-β1 (Li et al., 2012). MMPs are a family of proteolytic enzymes which can degrade ECM components; among MMPs, MMP-2 and MMP-9 have been implicated in the pathogenesis of DN (Sun et al., 2013). The results of qRT-PCR indicated that HG induced the mRNA levels of TGF-β1 and CTGF in HBZY-1 cells, and astilbin treatment inhibited the HG-induced mRNA levels of TGF-β1 and CTGF (Figures 4A,B). The results of ELISA suggested that astilbin treatment exerted an inhibitory effect on the HG-induced secretion of TGF-β1 and CTGF in the cell supernatant (Figures 4C,D). The results of western blot showed that astilbin treatment attenuated the HG-induced decrease in expression of MMP-2 and MMP-9 (Figure 4E). It has been reported that the TLR4/MyD88 and NF-κB pathways are important pathways involved in the inflammatory response (Wada and Makino, 2016). To evaluate whether the TLR4/MyD88 and NF-κB pathways were involved in the effect of HG induction, the expression levels of TLR4, MyD88, p-NF-κB p65, and NF-κB p65 were measured by western blot.
FIGURE 3 | Effect of astilbin on HG-induced ECM accumulation. HBZY-1 cells were treated with 10 and 20 µg/ml of astilbin under HG condition for 24 h. The mRNA levels of Col IV (A) and FN (B) in HBZY-1 cells were detected by qRT-PCR. The secretion levels of Col IV (C) and FN (D) in cell supernatant were detected by ELISA. # P < 0.05 vs. Con (control group). * P < 0.05 vs. HG-stimulated group. Statistical significance was determined by one-way ANOVA. The results in Figure 5A demonstrated that HG induced the expression levels of TLR4 and MyD88, and the induction was attenuated by astilbin treatment (10 and 20 µg/ml). The expression of p-NF-κB p65 was increased in the cells treated with HG, and astilbin treatment (10 and 20 µg/ml) inhibited the expression of p-NF-κB p65 (Figure 5B). The change of NF-κB p65 expression was not obvious in HBZY-1 cells treated with HG and/or astilbin. The TLR4/MyD88 and NF-κB Pathways Formed a Signaling Axis To further evaluate the relation between the TLR4/MyD88 and NF-κB pathways, HBZY-1 cells were treated with the inhibitor of TLR4 (TAK-242, 1 µM) for 24 h under HG condition. The expression levels of TLR4, MyD88, p-NF-κB p65, and NF-κB p65 were measured by western blot. The results in Figure 6 showed that TAK-242 inhibited the expression of TLR4, MyD88, and p-NF-κB p65 in the HBZY-1 cells under HG condition, indicating that the TLR4/MyD88 and NF-κB pathways formed a signaling axis. Inhibition of the TLR4/MyD88/NF-κB Pathway Suppressed HG-Induced Inflammatory Cytokine Production and ECM Accumulation To investigate the role of the TLR4/MyD88/NF-κB pathway in the effect of HG, HBZY-1 cells were treated with the inhibitor of TLR4 (TAK-242, 1 µM) for 24 h under HG condition, and the contents of IL-6, TNF-α, Col IV, FN, TGF-β1, and CTGF in the cell supernatant were measured by ELISA. As shown in Figures 7A-F, the contents of IL-6, TNF-α, Col IV, FN, TGF-β1, and CTGF were reduced in the cell supernatant after TAK-242 treatment, compared to cells without TAK-242 treatment. These results indicated that suppression of the TLR4/MyD88/NF-κB pathway inhibited HG-induced inflammatory cytokine production and ECM accumulation. Further experiments showed that TLR4 overexpression attenuated the effect of astilbin on HG-induced inflammatory cytokine production and ECM accumulation. DISCUSSION The progression of DN consists of three steps: first, glomerular hypertrophy and hyperfiltration; second, inflammation of glomeruli and tubulointerstitial regions; and finally, accumulation of ECM and cell apoptosis (Makino et al., 1993). Previous studies have demonstrated that hyperglycaemia plays an important role in the pathogenesis of DN (Larkins and Dunlop, 1992; Feldt-Rasmussen, 2000). HG can promote proliferation in glomerular mesangial cells (Jia et al., 2009). We also found that HG induced proliferation of HBZY-1 cells, and astilbin attenuated the induction. In recent years, many researchers have demonstrated that inflammation also plays an important role in the progression of DN (Lim and Tesch, 2012). Inflammation is characterized by an increasing number of inflammatory cells and increasing expression levels of adhesion molecules, chemokines, and inflammatory cytokines (Barutta et al., 2015). Many studies focus on novel approaches targeting inflammation for the treatment of DN. In an in vivo study, combinations of Xiexin decoction constituents exhibited a protective effect against DN by inhibiting inflammation in rats (Wu et al., 2014).
BAY 11-7082 protects rats from DN by reducing the expression of inflammatory cytokines including TNF-α, IL-1β, and IL-6, and inhibiting the oxidative damage mediated by hyperglycemia (Kolati et al., 2015). In the present study, we found that astilbin inhibited the production of HG-induced inflammatory cytokines including IL-1β and IL-6 in HBZY-1 cells, indicating that astilbin blocked HG-induced inflammation. It has been shown that DN is characterized by excessive deposition of ECM components, which usually leads to glomerulosclerosis and tubulointerstitial fibrosis (Yokoyama and Deckert, 1996). The major components of ECM that have been found to be overexpressed in DN are Col I, Col III, Col IV, Col VI, FN, and laminin (Yokoyama and Deckert, 1996). Ma et al. (2017) reported that HG induced ECM accumulation including Col IV, laminin and FN in human mesangial cells. The expression levels of the ECM-associated molecules Col IV and FN in the supernatant of cells treated with HG were significantly increased (Zhou et al., 2018). In addition, TGFβ is a key cytokine for mediating both the induction and promotion of fibrogenesis and important for ECM accumulation in DN (Yokoyama and Deckert, 1996). Many studies have indicated that HG induced the secretion of TGF-β (Yokoyama and Deckert, 1996). TGF-β1 is a subtype of TGF-β and has been demonstrated to be induced by HG in human mesangial cells (Li et al., 2017). CTGF is a co-factor for TGF-β1 and is involved in the development of DN which is induced by HG, angiotensin II, and TGF-β1 (Chen et al., 2009). MMPs are a large family of Zn 2+ -dependent enzymes that degrade many ECM components (Sun et al., 2013). Among MMPs, MMP-2 and MMP-9 have been shown to be associated with DN, in which the expression levels of MMP-2 and MMP-9 were decreased (McLennan et al., 1994;Del Prete et al., 1997). We found that HG induced the expression of Col IV, FN, TGF-β1, CTGF, and ECM accumulation in HBZY-1 cells. The induction was attenuated by astilbin treatment. Astilbin treatment also attenuated HGinduced decrease in expression of MMP-2 and MMP-9. These data indicated that astilbin suppressed the HG-induced ECM accumulation in HBZY-1 cells. TLR4 is a receptor of the innate immune system, and activation of the TLR4 pathways may cause chronic inflammation (Lucas and Maes, 2013). TLR4 pathways induce the production of reactive oxygen/nitrogen species and oxidative/nitrosative stress, leading to TLR-related diseases, including nephropathy, asthma, arteriosclerosis, stroke, type 2 diabetes, rheumatoid arthritis, and so on (Lucas and Maes, 2013). TLR4 has been reported to establish a link between inflammation and fibrosis in DN (Lepenies et al., 2011;Ma et al., 2014). NF-κB is a transcription factor family that is involved in various physiological processes, especially in inflammatory and immune responses (Pires et al., 2018). These evidences imply that inhibitors of these pathways may contribute to prevent the inflammatory diseases. The TLR4/MyD88 and NF-κB pathways are found to form a signaling axis and participate in inflammation responses Qu et al., 2017;Zhang et al., 2017). It's well established that TLR4 binding to adapter molecule MyD88 activates NF-κB and downstream signaling cascades, consequently leading to upregulation of multiple pro-inflammatory cytokines such as IL-6, TNF-α, and TGF-β1. These cytokines further promote to secrete collagen, which results in ECM (Liu et al., , 2015. 
We also found that HG induced the activation of TLR4/MyD88 and NF-κB pathways, and the inhibitor of TLR4 inhibited the activation of NF-κB pathway, suggesting that TLR4/MyD88 and NF-κB pathways formed a signaling axis in HBZY-1 cells under HG condition. TAK-242, a small molecule specific inhibitor of TLR4 pathway, selectively binds to Cys747 of TLR4 and subsequently disrupts its interaction with adaptor molecules TIR domain-containing adaptor protein (TIRAP) and TRIFrelated adaptor molecule (TRAM), thus suppressing TLR4 signal transduction and its downstream signaling events (Matsunaga et al., 2011). We found TAK-242 reversed the induction of HG on inflammatory responses and ECM accumulation, indicating that TLR4/MyD88/NF-κB pathway played an important role in HGinduced DN. However, the present study is restricted to in vitro studies, and an in vivo study is in progress. In summary, this study demonstrated that HG induced cell proliferation and the production of IL-6, TNF-α, Col IV, FN, TGF-β1, and CTGF in HBZY-1 cells. The treatment of astilbin reversed the induction of HG. The TLR4/MyD88/NF-κB pathway was activated under HG condition and the inhibitor of TLR4 exhibited the same effect with astilbin on reversing the induction of HG. The results provided new sight that astilbin might be served as a therapy agent for DN. AUTHOR CONTRIBUTIONS FC designed the study and drafted the manuscript. FC and XZ performed the experiments. XZ, ZS, and YM analyzed the data. XZ and YM edited the manuscript. All authors approved the manuscript to be submitted.
4,106.2
2018-10-18T00:00:00.000
[ "Biology", "Medicine", "Chemistry" ]
Option Pricing Model Combining Ensemble Learning Methods and Network Learning Structure Option pricing based on data-driven methods is a challenging task that has attracted much attention recently. There are mainly two types of methods that have been widely used, respectively, the neural network method and the ensemble learning method. The option pricing model based on the neural network has high complexity, and a large number of hyper-parameters will be generated during training, resulting in difficult model adjustment. Furthermore, a lot of training data are needed. The option pricing model based on ensemble learning is not ideal for data feature extraction, because each calculation of the ensemble learning method is mainly to reduce the final residual. Therefore, this paper adopts a learning framework that embeds the modular ensemble learning methods into the network learning structure, and an option pricing model based on deep ensemble learning is proposed. The model is mainly composed of two parts: features reorganization based on random forest, used to calculate the importance of features, combined with the original data as training input; the multilayer ensemble data training structure is based on network learning structure and embeds two ensemble learning methods as network modules, and it also designs a stop algorithm to automatically determine the number of layers. This enables the model to retain the effect of data feature extraction and adapt to small and medium data sets without generating many hyper-parameters. Moreover, in order to make the model fully absorb the advantages of the two ensemble learning methods, we adopt cross-training for data. From the experimental results, it can be concluded that compared with the current optimal method, the prediction performance of the proposed model is improved by 36% in the root mean square error (RMSE), which proves the superiority of the proposed model from the quantitative direction. Introduction Since the option as a stock derivative officially appeared in the market, options have always been a hot topic in financial markets. As a kind of stock derivative, options also carry risk similar to stocks, and in the process of trading one may also face huge losses. Thus, how to reasonably avoid this risk was a problem that long plagued the early option market. To solve this problem, the option pricing model was proposed. In 1973, Black and Scholes [1] first proposed an option pricing model, the Black-Scholes model. The Black-Scholes model is the first parametric model proposed to predict the price of options based on strict conditional assumptions in economics. Therefore, the model cannot perfectly fit the changing financial market, and there is also a certain error between the predicted value and the real value in the market. So, scholars have carried out further research by relaxing the strict economic assumptions in the model [2][3][4]. The parameter model attempts to find some specific laws from the option data obtained in the market for the prediction of future option prices and tries to convert these laws into formulas in economics and mathematics. Some information signals that can be directly obtained from the market, such as stock price, execution price, maturity time, and volatility, are taken as the input of the formula and the price forecast of the options as the output of the formula [5].
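For reference, the parametric baseline referred to above can be made concrete with the standard closed-form Black-Scholes formula for a European call. This is a minimal illustrative sketch, not part of the proposed model, and the input values are arbitrary.

    # Standard Black-Scholes price of a European call; illustrative baseline only.
    from math import log, sqrt, exp
    from scipy.stats import norm

    def bs_call(S, K, T, r, sigma):
        """S: spot price, K: strike, T: time to maturity (years), r: risk-free rate, sigma: volatility."""
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

    print(bs_call(S=100.0, K=95.0, T=0.5, r=0.02, sigma=0.25))   # ~10.2 for these inputs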
However, for the real market that is changing all the time, the sensitivity of the parameter model to the change in the market has not reached people's expectations; that is to say, the prediction error of the parameter model is not as small as expected compared with the actual market data [6]. In order to make the prediction results of the model more attached to the data in the real market, people began to introduce the data-driven model [7], compared with the parameter model, the data-driven model is generally constructed based on the machine learning method. ese models solve the prediction problem of option price in a data-driven way. Data-driven model is to find the connection between data from a large number of historical data, through which to predict the option price. Motivation e advantage of the data-driven model is that as long as the training data is enough, the regression model trained by the machine learning method can well summarize the situation outside the training data samples [5]. erefore, the datadriven model can well predict the price of options and even surpass the formula derived from economic principles. e earliest data-driven model introduced into option pricing is the neural network. Starting from the basic neural network method, with the deepening of research, deep neural network (DNN), artificial neural networks (ANN), hybrid neural network, and other methods with higher complexity and better effect are also applied to the field of option pricing. e advantage of these neural network methods is that they can well extract the feature of the original training data, yet neural networks usually require a lot of data to train. At the same time, the neural network model is generally a complex model, and there are many hidden layers and a large number of super parameters, which makes the model parameter adjustment difficult [8]. erefore, in recent years, the research on the option pricing model is not simply limited to the neural network and various optimization and improvement of the neural network. e traditional ensemble learning method has been introduced into the field of option pricing. In essence, ensemble learning is a method that combines the results of multiple basic learners to make decisions [9]. Compared with the neural network, ensemble learning can be trained on small data sets. At the same time, the ensemble learning method does not have a large number of super parameters, and the debugging of parameters is more convenient. Unfortunately, the ensemble learning method is not as good as the neural networks in feature extraction of original training data. If the advantages of these two methods can be combined and the shortcomings of each other can be complemented, the prediction effect of the model will be significantly improved. Some people [8] believe that the learning advantage of a neural network lies in their layer-by-layer processing of the original data characteristics. erefore, a deep forest method is proposed. e deep forest adopts a cascade hierarchical structure like the neural network. Moreover, random forest and completely random forest are used as the processing units of each layer. is not only retains the layer-by-layer processing of neural network but also takes advantages of random forest with fewer super parameters and trained on small data sets. e deep forest method has been applied in many fields such as medical image, image classification, and multilabel learning and has achieved ideal results [10][11][12]. 
Inspired by deep forest, we introduce this idea to the regression problem of option pricing, and a new option pricing model based on deep ensemble learning is proposed. e model is mainly composed of two parts: features reorganization is based on random forest, used to calculate the importance of features, combined with the original data as training input; the multilayer ensemble data training structure is based on network learning structure and embeds two ensemble learning methods as network modules and designs a stop algorithm to automatically determine the number of layers, which can produce a small number of hyper-parameters and adapt to small and medium data sets, and with good data feature extraction effect. For the input and output of data, the output of each layer is spliced with the original input data to form the input of the next layer. In the meantime, in order to make the model fully absorb the advantages of the two ensemble learning methods, influenced by the idea of mutation in genetic algorithm, we adopt cross-training for data, so that the data are trained in different methods in the adjacent two layers. To briefly summarize the contributions, we have the following: Contribution. (1) A new option pricing model based on deep ensemble learning is proposed. is model introduces the idea of deep ensemble learning into the field of option pricing, adopts a learning framework that embeds the modular ensemble learning into the network learning structure, and encompasses two subprocesses, namely, importance extraction and multilayer ensemble. (2) e multilayer ensemble structure is based on network learning structure and embeds two ensemble learning methods as network modules and designs a stop algorithm to automatically determine the number of layers, which can produce a small number of hyper-parameters and adapt to small and medium data sets, and with good data feature extraction effect. e structure also adopts cross-training for data in order to make the model fully absorb the advantages of the two ensemble learning methods. (3) A novel features reorganization module which can calculate the importance of features by using the processing results of random forests on the original data is designed. e feature importance matrix is taken as the weight matrix multiplied by the original data as the training data. 2.2. Organization. e rest of the paper is organized as follows: Section 2 explains the background of the parameter model and nonparametric model in the option pricing field. In Section 3, the principle of proposing a model is introduced. In Section 4, the experimental results of the proposed model and classical parametric and nonparametric models are discussed. Finally, Section 5 concludes the paper. Parametric Model. e earliest proposed option pricing model is the B-S model, which was proposed by Black and Schole [1] in 1973. ere are some unrealistic assumptions in the B-S model, such as the assumption of stock price returns and the assumption of market-implied volatility are not in line with the actual market [13]. erefore, the B-S model cannot perfectly fit the actual market, and there is a certain error between the predicted value and the real value. So, scholars have done further research by relaxing the assumption of the B-S model. Heston [14] assumes that volatility follows a random diffusion process and proposes the Heston model. Merton [15] believes that the price of options should be the sum of a continuous process and a traditional discrete jump process. 
So the Poisson distribution satisfying the random process is added to the model, and the jump-diffusion model is proposed. Kou [16] found that the prices of some options are different from the traditional discrete jump process, and the probabilities of bipolar jumps are different or show multilevel jump changes. erefore, a double-exponential jumpdiffusion model is proposed. Bollerslev [17] regards the volatility variable of options as a discrete random process of change, unlike the assumption that the volatility in the B-S model is a constant, which proposes the GARCH model. ere are also some models breaking the assumption of constant volatility, such as the random volatility model and the random volatility jump model [18]. Some scholars have broken the assumption that the risk-free interest rate is constantly replaced the risk-free interest rate with a variable short-term interest rate, proposed a stochastic interest rate model [19,20], or replaced the initial constant risk-free interest rate with a weighted average of multiple interest rates, and proposed an "interest rate affine" model, Duffee [21]. ese parameter models improve the prediction results of option prices to some extent, but they are still looking for some specific laws from the market and trying to convert them into formulas in economics and mathematics, so they are still subject to some economic and statistical assumptions. Different from the traditional parametric model, our proposed model is based on data-driven option pricing to avoid some assumptions that affect the effectiveness of the model. Data-Driven Model. Computer scientists have also used neural networks to solve the problem of option pricing for a long time [7,22,23]. Option pricing can be seen as a standard regression problem, and many methods in machine learning can be applied to the field of option pricing. Many people have been studying how to better apply neural network (NN) to option pricing over the years [24][25][26][27][28]. Bennell [29] used an artificial neural network (ANN) to predict the option price; PC Andreou [30] tried to build a new model by combining ANN and parameter model in addition to using ANN alone; Lajbcygier and Connor [31] proposed a hybrid neural network for trading by applying a guided method; Culkin [32] used deep neural networks to construct an option pricing model. ese network models make people pay more attention to the development of big data, and big data have many other aspects that are closely related to people, such as big data analysis of health care can better protect the health [33]. e standard neural network consists of many simple connection processors called neurons, each of which produces a series of real-valued activations. e input neurons are activated by the sensor of the sensing environment, and other neurons are activated by the weighted connection from the previous active neurons. ese neural networks are designed to mathematically simulate how the human brain works by receiving a wide range of stimuli and then parsing them by learning the neuron layer that associates input and output [34]. e application field of deep learning is very extensive, and the field of auxiliary diagnosis is also a recent research hotspot. Some scholars have constructed a detection model for sentiment analysis of mental disorder based on attentionbased deep learning and fuzzy classification [35]. ese neural network methods can well extract data features because they can process the original data layer by layer. 
Meanwhile, because the neural network methods are black boxes, the processing of each layer is invisible, resulting in a large number of super parameters, and the parameter adjustment of the model is very difficult. At the same time, because the neural network method requires a large number of data for training, the effect on small data sets may not be ideal. erefore, the traditional ensemble learning method is introduced into the field of option pricing. Similarly, ensemble learning methods have a wide range of applications, such as diabetic retinopathy classification model based on ensemble learning [36] and software cost analysis model based on multiobjective optimization [37]. Some ensemble learning methods also have good applications in financialrelated fields [38]. Codru [39] constructed multiple options pricing models by using the ensemble learning methods such as random forests, XGBoost, and LightGBM and conducted prediction experiments on the actual market data. Ensemble learning is a general term for the methods that combine multiple basic learners to make decisions, which is usually used to supervise machine learning tasks. A basic learner is an algorithm that takes a set of labelled examples as input and generates models that generalize these examples (such as classifiers or regressions). e main premise of ensemble learning is that by combining multiple models, the error of a single basic learner is likely to be compensated by other basic learners, so the overall prediction performance of the ensemble will be better than that of a single basic learner [40]. For each basic learner, it can complete training on small data sets as well, and there is no need for a large number of hyperparameters in training. erefore, the ensemble learning method has the advantages of fewer hyperparameters and adapting to small data sets. Although ensemble learning has many advantages, it is inferior to the neural networks in data feature extraction. Our proposed model processes data layer by layer to ensure the effect of data feature extraction, and each layer is a set of ensemble learning algorithms, that is, an ensemble of Mathematical Problems in Engineering the ensemble. In order to maintain the diversity of integration, we choose two ensemble learning methods at each level. As is well known, diversity is the key to overall construction [8]. In terms of the selection of specific methods, according to the actual test results of various ensemble learning algorithms in the field of option pricing [39], we choose two methods with the best performance, namely, XGBoost [41] and LightGBM [42], which retains the advantages of fewer hyperparameters of integrated learning and adapting to small data sets and obtains better results than single neural network or single ensemble learning methods. Proposed Method. As presented in Section 2.2, there are recent studies showing promising results in option pricing using neural network methods and ensemble learning methods. us, this paper aims to retain the advantages of the neural network method in feature extraction, and at the same time, it has the advantages of ensemble learning fewer hyperparameters and adapting to small data sets. Our methodology is presented in Figure 1. In this section, we first introduce the overall architecture of our proposed framework and then discuss details of the two main modules: (1) the features reorganization for obtaining feature weight and (2) the multilayer ensemble structure are used to training data. Overall Architecture. 
Input data are first prepared, which consist of stock prices, execution prices, maturity times, and implied volatility. First, the original input data are processed by random forests, and the output vector and the trained forest model can be obtained. Based on this, the importance of features can be calculated, and the weight matrix of features can be obtained. In the multilayer ensemble structure, each layer receives the feature information processed by the previous layer, and the processing results of this layer are spliced with the input vector and passed to the next layer. The last layer is the output layer; the input data of the output layer no longer splice in the original data, but rather the average of the prediction results obtained by each processing module of the preceding layer is output as the final prediction result. The calculation process is as follows: $y = \frac{1}{n}\sum_{i=1}^{n} y_i$, where y represents the final output prediction, $y_1, y_2, \ldots, y_n$ represent the output values obtained by each processing module of the preceding layer, and n represents the number of processing modules in each layer. Features Reorganization. We obtain the weight matrix of features through the process of features reorganization and combine it with the original data as the training input of the model. First, the original input data are processed by random forest, and the output vector and the trained forest are obtained. Then, based on this, the importance of each feature is calculated, and the weight matrix of the features is obtained. The weight matrix is multiplied by the original input features as the input of the multilayer ensemble training. The importance of a node is calculated as follows: $n_k = w_k G_k - w_{left} G_{left} - w_{right} G_{right}$, where $n_k$ is the importance of a node, $w_k$, $w_{left}$, and $w_{right}$ are the ratios of the number of training samples in the node and in its left and right subnodes to the total number of training samples, respectively, and $G_k$, $G_{left}$, and $G_{right}$ are the Gini impurities of the node and its left and right subnodes, respectively. Multilayer Ensemble Structure. In this section, we introduce the multilayer ensemble structure in three parts, namely, the layer-by-layer training strategy, the two ensemble learning algorithms, and the cross-training and stop algorithm. The Layer-by-Layer Training Strategy. The multilayered structure is based on the hierarchical structure of deep neural networks. The layers of a deep neural network can be divided into the input layer, hidden layers, and output layer according to their positions. Generally, the first layer is the input layer, the middle layers are hidden layers, and the last layer is the output layer. The layers are fully connected; that is, any neuron in layer i must be connected to any neuron in layer i + 1. Although a DNN looks complex, locally it behaves like a perceptron, i.e., a linear relationship $z = \sum_{i=1}^{m} w_i x_i + b$ plus an activation function $\sigma(z)$. Since there are many parameters and layers in a DNN, the definitions of the bias b and the linear coefficient w need certain rules. Definition of the bias b: the bias corresponding to the third neuron in the second layer is defined as $b^2_3$. Definition of the linear coefficient w: the linear coefficient from the fourth neuron in the second layer to the second neuron in the third layer is defined as $w^3_{24}$. Here, the superscript indicates the layer number, and the subscript indicates the index of the neuron (or pair of neurons) to which the parameter belongs. Note that the input layer has neither the linear coefficients w nor the bias parameter b.
Assuming that the activation function we choose is σ(z), the output values of the hidden layers and the output layer are denoted by a. For the outputs $a^2_1, a^2_2, a^2_3$ of the second layer, we have $a^2_j = \sigma(z^2_j) = \sigma\left(\sum_{k} w^2_{jk} x_k + b^2_j\right)$ for $j = 1, 2, 3$. For the output $a^3_1$ of the third layer, we have $a^3_1 = \sigma(z^3_1) = \sigma\left(\sum_{k} w^3_{1k} a^2_k + b^3_1\right)$. Generalizing the above example, assuming that there are m neurons in the l − 1 layer, for the output $a^l_j$ of the j-th neuron in the l layer, we have $a^l_j = \sigma(z^l_j) = \sigma\left(\sum_{k=1}^{m} w^l_{jk} a^{l-1}_k + b^l_j\right)$. If l = 2, then $a^1_k$ is the input layer value $x_k$. From the above, it can be seen that expressing the outputs one by one with the algebraic method is rather cumbersome, whereas the matrix form is relatively simple. Assuming that there are m neurons in the l − 1 layer and n neurons in the l layer, the linear coefficients w of the l layer constitute a matrix $W^l$ of size n × m, the biases b of the l layer constitute a vector $b^l$ of size n × 1, the outputs a of the l − 1 layer constitute a vector $a^{l-1}$ of size m × 1, the linear outputs z of the l layer before activation constitute a vector $z^l$ of size n × 1, and the outputs a of the l layer constitute a vector $a^l$ of size n × 1. Then, expressed in matrix form, the output of the l layer is $a^l = \sigma(z^l) = \sigma(W^l a^{l-1} + b^l)$. Two Ensemble Learning Algorithms. Each level is an ensemble of ensemble learning methods, i.e., an ensemble of ensembles. Here, we include two different types of ensemble learning methods to encourage diversity. The core idea of f(x) can be expressed in three steps. The first step is to continuously add trees, that is, to continuously split the features to generate new trees; each time a tree is added, a new function is actually learned, so as to fit the residual of the previous prediction. The second step is that we obtain k trees when training is complete and then need to compute a predicted score for a sample. Specifically, according to the characteristics of this sample, in each tree it will fall to a corresponding leaf node, and each leaf node corresponds to a score. The third step is to add up the scores corresponding to each tree, which gives our predicted value for the sample. Assuming that we use $\hat{y}$ to represent the predicted value: $\hat{y}_i = \sum_{t=1}^{k} f_t(x_i), \; f_t \in F$, with $f(x) = \omega_{q(x)}$, where $\omega_{q(x)}$ is the score of leaf node q(x), F corresponds to the set of all k regression trees, and f(x) represents one of the regression trees. Obviously, our goal is to make the current prediction value $\hat{y}_i$ as close as possible to the real value $y_i$, and to improve the adaptability of the algorithm to data outside the training sample as much as possible. Therefore, from a mathematical point of view, this is an optimization problem. We regard the objective function as the sum of the loss function and the regularization term; then, the objective function can be expressed as $Obj = \sum_{i} l(y_i, \hat{y}_i) + \sum_{t} \Omega(f_t)$. It can be seen from the formula that the objective function is divided into two parts: the left term $\sum_{i} l(y_i, \hat{y}_i)$ is the loss function, whose role is to reveal the training error, that is, the gap between the predicted score and the real score. The right term $\sum_{t} \Omega(f_t)$ is a regularization term that defines the complexity of the objective function. In this objective function, $\hat{y}_i$ is the output of the entire cumulative model, and the regularization term $\sum_{t} \Omega(f_t)$ is a function that represents the complexity of the trees in the model. The smaller the value of the regularization term, the lower the complexity of the trees, and the stronger the generalization ability of the model.
The specific formula is $\Omega(f) = cT + \frac{1}{2}\lambda \sum_{j=1}^{T} \omega_j^2$, where $T$ represents the number of leaf nodes, $c$ represents the parameter used to penalize leaf nodes so that the number of leaves $T$ stays as small as possible, $\omega$ represents the leaf-node scores, and $\lambda$ is used to control the size of the leaf scores. The role of $c$ and $\lambda$ is to minimize the prediction error while preventing overfitting. Specifically, $l(\hat{y}_i, y_i)$ in the first part of the objective function represents the prediction error of the $i$-th sample, and our goal is, of course, to make this error as small as possible. As noted previously, the first method accumulates the scores of multiple trees to obtain the final prediction score. In the iterative implementation, each iteration adds, on top of the existing trees, one tree that fits the residual between the previous prediction and the true value. We need to select an $f_t$ in each iteration that minimizes our objective function, so the objective for step $t$ of the first method can be expressed as $Obj^{(t)} = \sum_i l(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)) + \Omega(f_t)$. Similarly, $l$ in formula (10) represents the loss function, and $\Omega(f_t)$ represents the regularization term. g(x) is a method proposed to remedy some defects of traditional boosting. A traditional boosting algorithm needs to scan all samples for every feature to select the best split point, so it is time-consuming; as data have grown, traditional boosting can no longer meet current needs in efficiency and scalability. To handle the high-dimensional, massive sample data that must be processed today, g(x) uses two techniques. One is the GOSS (gradient-based one-side sampling) method, which does not use all sample points to compute the gradient statistics but keeps the samples with large gradients and randomly samples those with small gradients. The other is the EFB (exclusive feature bundling) method, which does not scan and evaluate all features when searching for the best split point but bundles some mutually exclusive features together to reduce the feature dimension before finding the best split; this greatly reduces the consumption in the search for the best split point. These two techniques reduce the time complexity of processing samples without losing accuracy. GOSS method: in the calculation of information gain, the sample points with large gradients generally play the major role; that is to say, these sample points contribute more information gain. The main idea of GOSS is therefore to retain the large-gradient sample points when downsampling, ignore part of the remaining small-gradient points, and randomly sample those points in proportion, which not only saves processing time but also preserves the accuracy of the information-gain estimate as far as possible. EFB method: traditional boosting performs not only data sampling but also feature sampling, mainly to further reduce the model's training time. The second method also performs feature sampling, but not in the traditional way: it bundles mutually exclusive features to reduce the feature dimension, so that feature sampling costs less.
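The description of g(x) matches the design of LightGBM, so a hedged usage sketch is given below; the parameter values are illustrative assumptions, not the authors' settings (in LightGBM, GOSS is selected via the boosting type, and exclusive feature bundling is enabled by default).

import lightgbm as lgb

model = lgb.LGBMRegressor(
    boosting_type="goss",  # gradient-based one-side sampling
    top_rate=0.2,          # fraction of large-gradient samples always kept
    other_rate=0.1,        # fraction of small-gradient samples drawn at random
    max_bin=255,           # histogram algorithm: discretize features into bins
    n_estimators=500,
)
# model.fit(X_weighted, y_train); EFB (feature bundling) is on by default.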
Usually, the high-dimensional data in our applications are sparse, which makes it possible to reduce the number of effective features by an almost lossless method; in particular, in a sparse feature space with many mutually exclusive features, we can stably bundle mutually exclusive features together to form a new feature and so reduce the feature dimension. The bundling of mutually exclusive features in g(x) uses the histogram algorithm. Its basic idea is to discretize the continuous feature values into $k$ integers and construct a histogram of width $k$. With the discrete values as indices, the required statistics are accumulated in the histogram during one traversal of the data, after which the optimal split point is found by traversing the discrete values of the histogram. The Cross-Training and Stop Algorithm. Both f(x) and g(x) are very mature ensemble learning algorithms and are widely used in various scenarios. By comparison, f(x) adopts a greedy procedure that evaluates every feature to find the optimal split point, with good accuracy, while g(x) adopts a leaf-wise tree-growth strategy that in each layer of leaf nodes splits the leaf with the largest splitting gain, with higher accuracy. To fully absorb the different advantages of the two methods during training, and inspired by the idea of mutation in genetic algorithms, we exchange the output data of each layer: the results of the two methods are spliced with the original data and input to the next layer, so that adjacent layers are trained by different methods. At the same time, the number of layers of our model is adaptive: after a new layer is added, the performance of the whole model is estimated on the validation set, and if there is no clear improvement, the number of layers no longer increases and the training process terminates. As the performance criterion we choose the root mean square error of prediction, which is better suited to the options field than the accuracy commonly used in classification. The multilayered structure of the whole model can adaptively determine the number of layers via $z(n)$, which also makes the module applicable to training data of different scales. $z(n)$ is computed as $z(n) = r(n) - r(n-1)$, where $r(n)$ represents the root mean square error of prediction with $n$ layers, and $r(n-1)$ that with $n - 1$ layers. We stop adding layers when $z(n) \geq 0$. Data. The data used in our experiments are daily data from the KOSPI200 option market for the period from 2 June 2009 to 7 November 2019. KOSPI200 options are based on the KOSPI200 index, a weighted total market price index of 200 blue-chip stocks listed on the Korean stock market. To avoid the synchronization caused by trading effects, we use the closing price of KOSPI call options as price data. We use 2064 (80%) samples as the training data set, and the remaining 516 (20%) samples as the test data set. The input vector X consists of the stock price S, strike price K, time to maturity T − t, and volatility σ. Here, we use the σ available in the market, which is given according to the general implied-volatility formula. We exclude the interest rate c from the input vector because it changes very little over the period of the KOSPI200 data. Pricing results are evaluated using the root mean square error (RMSE).
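A minimal sketch of the stop rule, assuming $r(n)$ is the validation RMSE of the $n$-layer model returned by some training routine (the trainer below is a stand-in, not the authors' code):

import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def grow_layers(train_layer, y_val, max_layers=20):
    # train_layer(n) is assumed to fit layer n and return the n-layer model's
    # predictions on the validation set.
    r_prev = float("inf")
    for n in range(1, max_layers + 1):
        r_n = rmse(y_val, train_layer(n))
        if r_n - r_prev >= 0:   # z(n) = r(n) - r(n-1) >= 0: stop growing
            return n - 1
        r_prev = r_n
    return max_layers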
The errors were calculated according to moneyness and the duration of the contract. By duration, contracts are divided into short term (<1 month), medium term (1–3 months), and long term (>3 months); by moneyness, into Deep In The Money (S/K < 0.8), In The Money (S/K ∈ (0.8, 0.96)), At The Money (S/K ∈ (0.96, 1.04)), Out of The Money (S/K ∈ (1.04, 1.2)), and Deep Out of The Money (S/K > 1.2). Results. This section discusses the experimental results of the model proposed in Section 3. We selected several models for comparative experiments, including two parametric models, the Black-Scholes model (1973) [1] and the Heston model (1993) [14], and three machine learning models, namely DNN [43], XGBoost [41], and LightGBM [42]. To emphasize the predictive power of our model, we compare its errors with those of these parametric and machine learning models. To find the most suitable feature space, three model configurations are compared, each with different inputs and outputs. The experimental results are shown in Table 1. Here, S represents the KOSPI200 stock price, K is the strike price, t is the expiration time of the option contract, and σ is the implied volatility available in the market. Boldface numbers indicate the minimum pricing error of each model. The input and output of Model 1 are the same as those proposed by Hutchinson (1994) [7]: to reduce dimensionality, he assumes that the valuation function is homogeneous of degree one in S and K. This approach has been used intensively in the literature [44,45] with good reported results. In this case, the nonparametric models do outperform the parametric models. Model 2 uses the same input and output as Model 1, but without the homogeneity assumption. For the subsequent models, implied volatility is also considered. As with Model 1, the parametric models perform worse than the nonparametric models for Model 2 and Model 3. Among the nonparametric models, the proposed model is better than the separate DNN, XGBoost, or LightGBM models. It can also be seen that the prediction error is smallest when using the input and output of Model 3. After fixing the input and output, we performed ablation experiments on the proposed model: Ma is the model without cross-training, Mb is the model with only the f(x) ensemble learning method, Mc is the model with only the g(x) ensemble learning method, and Md is our model. Table 2 shows the results of the ablation experiments, with root mean square error and mean absolute error as the evaluation indices. As Table 2 shows, the prediction accuracy of a model trained with only the f(x) method or only the g(x) method is lower than that of the proposed model; likewise, the prediction accuracy of the model without cross-training is lower than that of the proposed model. This demonstrates experimentally that the cross-training and the two training methods used in the proposed model are effective. Based on the input and output of Model 3, we carried out more detailed experiments in terms of moneyness and time until maturity. The pricing errors are shown in Table 3, where the prediction results of the various methods under different moneyness and times until maturity can be seen in detail.
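To make the bucketed evaluation concrete, the following is a hedged sketch (not the authors' code) of the RMSE breakdown using the moneyness boundaries quoted above; the column names and bucket labels are ours.

import numpy as np
import pandas as pd

def rmse_by_moneyness(S, K, y_true, y_pred):
    # Buckets follow the boundaries quoted above (S/K at 0.8, 0.96, 1.04, 1.2).
    m = np.asarray(S) / np.asarray(K)
    labels = ["DITM", "ITM", "ATM", "OTM", "DOTM"]
    bucket = pd.cut(m, bins=[0.0, 0.8, 0.96, 1.04, 1.2, np.inf], labels=labels)
    sq = (np.asarray(y_true) - np.asarray(y_pred)) ** 2
    return pd.DataFrame({"bucket": bucket, "sq": sq}).groupby("bucket")["sq"].mean() ** 0.5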
From the results, even across different moneyness and times until maturity, the prediction performance of all nonparametric models is better than that of the parametric models. Among the nonparametric models, the prediction accuracy of the two ensemble learning methods is higher than that of the DNN. At the same time, the prediction accuracy of our method is better than that of the existing methods in most cases. Figure 2 presents a boxplot of the pricing errors. The body of each box represents the error distribution between the first and third quartiles, and the whiskers represent the maximum and minimum recorded errors. As can be seen, the error distribution centers of all nonparametric models lie essentially near 0. The error distribution center of the proposed model is closest to 0, while that of the DNN model is farther from 0; the error distribution centers of the parametric models lie between 2 and 3, farthest from 0. This also shows that the overall error of all nonparametric models is relatively small and that the proposed model is the most accurate. (Figure 2: Boxplot of pricing error for the out-of-sample period. The body of the candle represents the distribution of errors between 25% and 75% of the data, and the wick represents the maximum and minimum recorded error. Results are shown for Model 3; blue dots represent outliers.) To test the robustness of the model, another prediction scheme was used: one year of data serves as the training set, and the following month's data as the test set. Taking one month as the sliding window, seven non-overlapping periods were tested. The results can be seen in Table 4. Similarly, the nonparametric models show excellent predictive ability, and in most cases the proposed model has higher prediction accuracy than the other nonparametric models. However, the differences between nonparametric and parametric models are diminished. One explanation could be the smaller training set compared with the analysis in Table 3, because for most nonparametric models, more training data yields more accurate predictions. Figure 3 shows the visualization results for the seven months of test data; the prediction error of all nonparametric models is much smaller than that of the parametric models. Figure 4 compares the errors of the nonparametric models more precisely, which also proves that the proposed model has better prediction accuracy. Conclusion. In this study, a new model based on deep ensemble learning was developed for option pricing. The model applies the idea of the deep ensemble to the regression problem of option pricing and encompasses two subprocesses, namely, importance extraction and multilayer ensembling. The performance of the model was experimentally verified, and the results were evaluated from many aspects. A comprehensive comparative study ensures that the model is superior to the other models under different measures. The model based on deep ensemble learning can therefore serve as a capable tool for option pricing. The limitation of the current work is that, although the model has achieved good results on option data with a fixed exercise time, the pricing of option data with an uncertain exercise time is still a challenging problem. In terms of future work, we will consider how to improve the pricing power of the model for different types of option data. Data Availability. The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest. The authors declare that they have no conflicts of interest.
8,943.2
2022-10-25T00:00:00.000
[ "Computer Science" ]
Monte-Carlo dosimetry and real-time imaging of targeted irradiation consequences in 2-cell stage Caenorhabditis elegans embryo Charged-particle microbeams (CPMs) provide a unique opportunity to investigate the effects of ionizing radiation on living biological specimens with precise control of the delivered dose, i.e. the number of particles per cell. We describe a methodology to manipulate and micro-irradiate early-stage C. elegans embryos at a specific phase of the cell division and with a controlled dose using a CPM. To validate this approach, we observe the radiation-induced damage, such as reduced cell mobility, incomplete cell division, and the appearance of chromatin bridges during embryo development, in different strains expressing GFP-tagged proteins in situ after irradiation. In addition, as the dosimetry of such experiments cannot be extrapolated from random irradiations of cell populations, realistic three-dimensional models of the 2-cell stage embryo were imported into the Geant4 Monte-Carlo simulation toolkit. Using this method, we investigate the energy deposit in various chromatin condensation states during the cell division phases. The experimental approach coupled to Monte-Carlo simulations provides a way to selectively irradiate a single cell in a rapidly dividing multicellular model with a reproducible dose. This method opens the way to dose-effect investigations following targeted irradiation. Several studies of radiation-induced responses have been carried out using X- and γ-ray irradiation of C. elegans. Deng et al. have established a relation between the ceramide biogenesis pathway and radiation-induced apoptosis in the germ line of C. elegans 16. Using X-rays, Sendoel et al. have found that HIF-1 can regulate p53-mediated apoptotic cell death in response to radiation-induced damage through the secretion of a neuronal tyrosinase 17. C. elegans has also been used as a biological model for space flight research, including testing the biological effects of cosmic radiation 18. From a practical point of view, C. elegans is small enough to be compatible with microbeam irradiation, since the adult body is 50 µm in diameter and 1 mm in length. Its transparent body, allowing direct visualization of specific tissues, makes it a unique model for studying the production and transfer of intra- and inter-tissue damage signalling in the whole organism. A few studies have described the use of C. elegans at CPM end-stations 14,15,19. Guo et al. have reported that irradiation of the somatic pharynx results in a significant induction of bystander germ cell apoptosis 20. Li Q et al. have performed local irradiation of either the posterior pharynx or the vulva of C. elegans to compare the intra- and inter-system bystander effects, and investigated the spatial function of the oxidative DNA damage response by tissue-specific RNA interference 21. We report here the selective irradiation of 2-cell stage C. elegans embryos using the CPM at the AIFIRA facility (Applications Interdisciplinaires de Faisceaux d'Ions en Région Aquitaine) 22. The aim of this study was to develop a methodology for micro-irradiating 2-cell stage C. elegans embryos with protons in a reproducible way. Such micro-irradiation of a dynamic 3D biological model with fast cell division and rapidly evolving target volumes raises experimental challenges. The validation of this procedure includes an experimental confirmation of our ability to reproducibly induce radiation damage in embryos.
In addition, Monte-Carlo simulations on realistic 2-cell stage C. elegans phantoms were performed to characterize the specific energy absorbed in different biological compartments (chromatin, nucleus, cytoplasm, and whole embryo), while considering different condensation states of chromatin throughout the cell division. Results. Embryo phantom generation for Monte-Carlo dosimetry simulation. When targeting sub-cellular structures with a microbeam, the radiation dose is delivered in micrometric areas. Standard dosimetry methods, based on the calculation of the absorbed dose at the macroscopic scale, are not fully relevant in this configuration. The spatial inhomogeneity of the irradiation leads to difficulties in defining the volume of interest, and the concept of absorbed dose is of limited use. In such situations, the ICRU (International Commission on Radiation Units and Measurements) has introduced the concept of specific energy, defined as the ratio of the energy imparted to the mass of the volume of interest 23. As the energy deposited by charged particles depends significantly on the geometry of the target, we developed a three-dimensional realistic voxelized model of a 2-cell stage embryo. The energy deposits in three compartments were determined in our dosimetric study: (i) the chromatin, (ii) the nuclear volume, and (iii) the whole embryo. The geometries of these compartments were built from confocal microscopy acquisitions of paraformaldehyde-fixed embryos. Phalloidin staining reveals the actin cytoskeleton of the 2-cell stage embryo in red (Fig. 1a), and Hoechst 33342 staining reveals the chromatin in blue (Fig. 1b). As phalloidin stains the actin cytoskeleton, which is absent in the nucleus, the nuclear volume is indirectly outlined by manual selection based on contrast rendering and cropping the area outside the region of interest (ROI). This ROI was then set as the green channel, and the outside region was masked and left out of further processing (Fig. 1c). The resulting numeric phantom of a 2-cell stage embryo was then implemented in Geant4 as a parametrized volume, as shown in Fig. 1d. The microbeam spot size was set to 1.5 µm at the target (AB cell nucleus) position. The methodology described previously by Barberet et al. 24 was used to determine the energy imparted in each compartment as well as its distribution in space (Fig. 1e). The data obtained using this approach are summarized in Table 1. We calculated the corresponding specific energy as the ratio of the total energy deposit to the sum of the masses of the voxels constituting the target (chromatin, nucleus, or embryo). The mass of each compartment was extrapolated by multiplying the sum of the volumes of the voxels constituting it by the density of liquid water. We noted that the energy is deposited exclusively in the irradiated nucleus AB; the non-irradiated nucleus P1 does not receive any energy deposit (Fig. 1e). The statistical uncertainties related to the Monte-Carlo simulation are below 2%. Since cell division is a rather fast process in C. elegans embryos, as illustrated in Fig. 2a, we investigated the impact of the DNA/chromatin condensation level on the energy imparted to the chromatin (Fig. 2a). For this purpose, 40 single cells within 2-cell stage embryos were simulated after irradiation with 10^3 3 MeV protons. Based on the chromatin condensation status throughout the cell cycle progression of the AB cell (t = 0 to 4 min), 5 distinct chromatin distributions could be discriminated from confocal imaging (Fig. 2b).
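A minimal sketch of the specific-energy bookkeeping described above, assuming per-voxel energy deposits and water-equivalent voxels as stated in the text; the names are ours, not the authors' code.

import numpy as np

WATER_DENSITY = 1000.0  # kg/m^3; all voxels are assumed water-equivalent

def specific_energy(edep_joules, mask, voxel_volume_m3):
    # ICRU specific energy z = (energy imparted to the compartment) / (its
    # mass), in gray (J/kg); mask selects the voxels of one compartment
    # (chromatin, nucleus, or whole embryo).
    energy = float(edep_joules[mask].sum())
    mass = float(mask.sum()) * voxel_volume_m3 * WATER_DENSITY
    return energy / mass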
They are representative of distinct mitosis progression states. The calculated energy imparted to the chromatin differs depending on the chromatin distribution observed in prophase or metaphase (Fig. 2b). Nevertheless, as illustrated by these results, even if some variation of the chromatin distribution is found in prophase, it does not drastically affect the energy deposit in the chromatin at the irradiation time (0.5 ± 0.15 pJ). This point is crucial in the context of the dose-response assessment of biological radiation-induced effects. In addition, as illustrated in Fig. 2c, the calculated energy deposit in the 40 embryonic cell nuclei is around 10.5 ± 2.8 pJ per nucleus (±3.8 as 95% confidence interval (CI)). This 25% fluctuation is related to the variation of the nuclear volume from one embryo to the next. Indeed, the mean nuclear thickness traversed by the incoming protons was evaluated at 6 µm, with a variation of 1.5 µm. Despite the additional uncertainties due to the manual segmentation of the nuclear volume, this is confirmed by the 2D fluorescence images of the GZ264 strain. We observed that the nucleus diameter can vary from 6.5 to 8.5 µm depending on the cell cycle stage. Assuming a spherical shape, this corresponds to a 25% variation of the nucleus thickness. Note that if we consider the total energy imparted to the nuclear volume, only 4% of this energy is absorbed by the chromatin in prophase (Table 1). This ratio can increase up to 60% in metaphase. Selective and targeted irradiation reveals HUS-1::GFP nuclear foci. We adapted well-established CPM irradiation techniques previously developed for in vitro culture studies 2,4. Embryos, prepared as described in Fig. 3a, were kept in M9 medium between two stretched polypropylene foils. The chosen foils are thin enough (4 µm) to allow the charged particles to traverse the sample and be detected downstream (Fig. 3b). This procedure resulted in the reproducible placement of the embryos in the focal plane of the objective lens. Under such conditions, we found that the slight compression by the polypropylene foils has no significant effect on the first cellular divisions. C. elegans embryos undergo fast, cleavage-type divisions in which the cell volume decreases after every cell division. The cell cycle in the early embryo stages consists only of consecutive rounds of DNA synthesis (S phase) and cell division during mitosis (M phase), with no gap phase until the 28-cell stage 25,26. The 2-cell stage embryo is formed of a larger anterior blastomere, AB, and a smaller posterior blastomere, P1, which have different fates and cell division timings; AB divides 2 min before P1 21. AB and P1 are oriented along the anteroposterior axis (Fig. 3c). In our experimental conditions, the first cellular divisions were found to be similar to those observed in conventional experimental conditions, with the C. elegans zygote exhibiting rotational holoblastic cleavage. We chose to irradiate selectively and specifically the nucleus of the AB cell. This corresponds to time 0 of our time-lapse recording (t = 0 min). In order to obtain a direct visualization of the radiation-induced DNA damage in the C. elegans embryo, we used the WS1433 (opIs34[hus-1::GFP]) strain. In C. elegans, HUS-1 is a conserved nuclear protein that is expressed in early embryos and the adult germline.
In proliferating embryo nuclei, HUS-1::GFP expression is low, homogeneously distributed, and limited to the nucleus, as shown in Fig. 4 (Control). HUS-1 is part of the 9-1-1 complex, a DNA damage checkpoint complex acting as a DNA damage sensor. Thus, it is required for DNA damage-induced cell cycle arrest and apoptosis 27 and is essential for genome stability, as demonstrated by an increased frequency of spontaneous mutations, chromosome non-disjunction, and telomere shortening 28. When DNA damage occurs, HUS-1 relocates within the nucleus into distinct foci that overlap chromatin. These foci are likely sites of double-strand breaks (DSBs). Following targeted micro-irradiation with 10^4 protons, HUS-1::GFP relocated swiftly and formed a distinct and bright focus at the site of irradiation (Fig. 4, Supplementary Videos S1 and S2). However, we observed some variability between nuclei in terms of the diameter and number of foci per nucleus. HUS-1::GFP foci were maintained during the mitotic division and found in the daughter cell nuclei ABa and ABp. HUS-1::GFP foci were only detected at the highest irradiation specific energy (180 Gy in the chromatin), with 8 of the 9 irradiated embryos harbouring nuclear HUS-1::GFP foci (additional data are shown in Fig. S2). By contrast, we never observed the occurrence of HUS-1::GFP foci in non-irradiated nuclei and their vicinity (P1 and related daughter cells), nor in control embryos or embryos irradiated at the lowest specific energy. In addition, no HUS-1::GFP foci were detected when the embryos were irradiated with 10^3 protons (data not shown). Consequences of the micro-irradiation on chromosomes in the 2-cell stage embryo. The cellular consequences of targeted irradiation were followed using a strain (MG152) expressing the histone H2B::GFP fusion protein. This protein allows the imaging of both mitotic chromosomes and interphase chromatin in intact, living embryos. Embryos expressing H2B::GFP were observed using time-lapse imaging to determine the pattern of chromatin staining in interphase and mitosis after irradiation of the AB nucleus. As observed in control embryos, H2B::GFP enables highly sensitive chromatin detection in all phases of mitosis. Figure 5a shows control embryos with chromosome condensation, the formation of a regular metaphase plate, and a sudden and complete separation of anaphase chromosomes without the presence of chromatin bridges between the daughter cells (ABa, ABp, EMS, and P2). In contrast, in the irradiated embryos, abnormalities in chromosomal morphology became apparent during the first mitotic division of the AB cells, with chromosomes more poorly condensed than in non-irradiated embryos during anaphase (Fig. 5a, irradiation). In addition, H2B::GFP provided a remarkable level of sensitivity which allowed the detection of chromatin bridges. These are lagging strands of DNA between chromosomes that are separating during the mitotic phase, and they became obvious between the two daughter cells upon reaching the end of anaphase. These chromatin bridges remained until cytokinesis occurred, when the cell division broke them to produce a "cut" phenotype, resulting in the uneven distribution of genomic DNA to the daughter cells. Chromatin bridges were clearly observed in all the embryos irradiated at the highest specific energy (5/5, additional data are shown in Fig. S3).
The P1 blastomere division, which occurs 2–3 minutes later, was normal; the chromosomes appeared well condensed both in controls and in irradiated embryos. No chromatin bridges were identified between the P1 daughter cells, and their nuclear morphology under epifluorescence was normal (Fig. 5, Supplementary Videos S3–S6). Epifluorescence images of H2B::GFP embryos show the presence of one polar body at the anterior part of the embryo. Polar bodies are visible as small membrane-bound oval structures between the eggshell and the embryo, and are clearly visible with histone H2B::GFP in living embryos. The first polar body is 2N, born during eggshell secretion and trapped between eggshell layers. The second polar body, 1N, is born after eggshell formation and is in contact with the embryo. At the 2-cell stage, the second polar body is on the surface of the anterior AB cell. During AB cell division, the second polar body is drawn between the cells by the ingression furrow. At the 4-cell stage, the second polar body is inside a membrane within one of the AB daughter cells. In irradiated embryos, the presence of the chromatin bridges between the two daughter cells ABa and ABp affected and stopped the migration of one of the polar bodies until cytokinesis occurred (Fig. 5a). Despite risking aneuploidy, C. elegans embryonic cells internalize the polar body and degrade polar body chromosomes inside a phagolysosome 29. Here, we showed that the fate of the polar bodies seems to be modified in the presence of the DNA bridges. (Figure 4: Before irradiation, HUS-1::GFP is homogeneously distributed in nuclei (t = 0 min). The AB cell nucleus is targeted with protons at t = 0 min. In the micro-irradiated AB nucleus of a HUS-1::GFP embryo, a focus, indicated with a white arrow (→), appears just before the first cell division of AB (t = 2 min) and reappears in the daughter cells ABa and ABp, indicated with stars (*, right). We never observed foci in non-irradiated embryos (control) or in neighbouring non-irradiated nuclei (P1, EMS, and P2). Irradiated embryos (opIs34, 10^4 protons) were observed in real time following irradiation. Scale bar: 10 µm.) Real-time analysis of micro-irradiated nuclei reveals disruption of the synchronization of cell divisions within the 4-cell stage C. elegans embryo. The dynamics of DNA replication in the C. elegans 2-cell stage embryo were estimated with a transgenic strain expressing PCN-1, the C. elegans orthologue of PCNA (proliferating cell nuclear antigen) 30, fused to GFP. PCN-1 acts as a processivity factor for DNA replication and is one of the last components incorporated into the replisome upon replication initiation 31. We used the GFP::PCN-1 chimera to visualize S-phase in early embryos and during subsequent cell or nuclear cycles. In other organisms, its accumulation in nuclear foci has previously been demonstrated to report on the active sites of DNA replication 32,33. In time-lapse epifluorescence images of developing 2-cell stage embryos, GFP::PCN-1 is bright and reveals the AB and P1 nuclei. It is restricted to the nucleus when an entire nuclear membrane is formed; nuclear membrane breakdown releases GFP::PCN-1 into the cytoplasm (Fig. 5b, Supplementary Videos S5 and S6). The nucleus of the AB cell was targeted with 10^4 3 MeV protons (t = 0 min). We did not observe GFP::PCN-1 foci but, similarly to what was observed in the H2B::GFP strain, we noted the formation of chromatin bridges during the first division (Fig. 5b, t = 10 min; additional data are shown in Fig. S4).
The formation of such chromosomal abnormalities was never observed in non-irradiated nuclei (Fig. 5b, EMS and P2 cells, n = 9) or in non-irradiated embryos (Fig. 5b, control, n = 5). Cytokinesis induced the breakage of the chromosomal bridge and the formation of an extra-nuclear DNA fragment (suggesting micronucleus formation) excluded from the nucleus. The formation of nuclei around mis-segregated chromosomes and chromosomal fragments apparently contributes to the formation of micronuclei, some of which contain very little DNA (Fig. 5b). The disruption of cell division synchronization within the 4-cell stage C. elegans embryo is shown in Fig. 5b and Supplementary Fig. S1 (t = 30 min). In control embryos, the loss of the nuclear localization of GFP::PCN-1 is observed in the ABa and ABp cells at t = 20 min. In irradiated embryos, the GFP::PCN-1 signal is still present at this time stamp and restricted to the nucleus in these cells. This is also underlined by comparing the division times of ABa and ABp with those of EMS and P2: in irradiated embryos, EMS and P2 entered mitosis before ABa and ABp, which is never observed in controls. By extending the time-lapse imaging until the 8-cell stage, we could observe that chromosomal bridges appear during the mitotic division of ABa and ABp as well, suggesting the maintenance of genetic instability (Supplementary Fig. S1). Discussion. The aim of targeted irradiation is to induce damage localized in specific cellular compartments. Laser micro-irradiation is the most commonly used method to achieve this goal, mainly because the same microscope can be used to perform irradiation and observation. In addition, easy access to different irradiation wavelengths allows the induction of specific cellular damage such as SSBs, DSBs, or base lesions with a varying range of doses 34. In contrast, targeted irradiation using a CPM requires distinct equipment for observation and irradiation, thus increasing the number of key parameters to control, such as the alignment of the irradiation beam with the microscope, specific sample preparation, and the synchronization of time-lapse imaging with irradiation. Despite this complexity, a CPM offers a unique capability of irradiation dose control, down to a single particle 22,35. In addition, with direct access to charged-particle beams instead of photons, CPMs play a major role in the study of ionizing radiation in biology. In the present study, we report the possibility of extending this approach to specific cells in a developing multicellular organism. Single cell nuclei can be targeted with a controlled number of charged particles in living 2-cell stage C. elegans embryos and followed up by time-lapse imaging. We could visualize radiation-induced DNA damage, and undoubtedly DSBs, a few minutes after irradiation in live and intact early C. elegans embryos with 10^4 protons (equivalent to 180 Gy in the chromatin). Likewise, HUS-1::GFP radiation-induced foci were also observed in the daughter cells, indicating the persistence of altered DNA through the division cycle, in agreement with the absence of active cell cycle checkpoints during the DNA damage response in the early C. elegans embryo 36. The production and maintenance of DNA bridges through successive cell divisions have been shown in several C. elegans strains. We also observed that these DNA bridges alter the cell division timing, inducing cell cycle asynchrony and modifying the movement of the cells within the irradiated embryos (Fig. 5b and Supplementary Fig. S1).
Our experimental approach is complementary to the well-established laser micro-irradiation 34,37 and can complement it by producing damage more representative of ionizing radiation effects. CPM irradiation, commonly used on adherent mammalian cells, can be adapted to target specific cells in a developing multicellular organism to address questions about the in vivo radiation-induced consequences for a specific cell lineage, cell-to-cell communication, and low-dose effects. Data from the voxelized model indicate that the generation of the DNA bridges and genomic instability observed experimentally requires an energy deposit in the chromatin above 4 pJ (corresponding to a specific energy of about 180 Gy). This could be due to the limited sensitivity of our fluorescent markers or to a lack of specificity in the detection of radiation-induced damage, but it could also suggest the existence of a dose threshold below which no sufficient radiation-induced effect is produced on genomic DNA during the first cell division of AB. The simulation and mapping of the energy deposited in three-dimensional phantoms allow us to correlate the energy deposited in each compartment (chromatin and nucleus) with the radiation-induced damage. First, we clearly observed that no energy deposit is found in the non-targeted nucleus and cell (P1). Second, Monte-Carlo simulations indicate that the biological effects described here (DNA damage foci, DNA bridges, and cell cycle delay) appear at rather high specific energies, i.e. 180 Gy in the chromatin of the targeted nucleus (AB). Third, the energy deposits at various stages of the cell division show a variation of 25%, correlated with the variation of the nuclear volume. Fourth, the fraction of energy imparted to the chromatin shows large fluctuations across the various phases of the division, ranging from only 4% in prophase to about 60% in metaphase. Fifth, as the specific energy imparted to the chromatin depends mainly on its condensation state, we verified that only limited variations in the energy imparted to the chromatin are expected between samples at the time of irradiation. Indeed, even if the precision of the time at which a cell nucleus can be targeted is limited by the visual recognition of specific cell division patterns, a delay in irradiation timing below 2 minutes (chromatin in prophase, Fig. 2b) has no significant influence on the specific energy deposited in the chromatin. (Figure 5 caption fragment: "…stage C. elegans embryo is also seen (*). The PCN-1::GFP signal helps to distinguish a clear shift of cellular division between irradiated and non-irradiated embryos (*). The formation of DNA bridges was never observed in non-irradiated nuclei (P1, EMS, P2) or in non-irradiated embryos.") The ability to assess energy deposits and specific energy in various cell compartments relies on the possibility of handling parametrized volumes obtained from fluorescence microscopy in the Geant4 simulation toolkit at the sub-micrometre scale. We believe that new geometrical models developed in the frame of the Geant4-DNA extension will strengthen biological damage prediction at the DNA molecule scale in cell-based phantoms 38, particularly by including the production of DNA single- and double-strand breaks induced by the direct and indirect effects of oxidative radical species on the cell's DNA content 39. Materials and Methods. Worm strains and culture. C.
elegans worm strains were maintained on nematode growth medium (NGM) agar plates and fed ad libitum with Escherichia coli strain OP50 at 20 °C, according to standard protocols (Brenner, 1974). We used the following transgenic C. elegans strains carrying appropriate fluorescent markers. Preparation of large populations of C. elegans embryos. Embryos were isolated from synchronized populations of young gravid hermaphrodites treated with hypochlorite solution ("bleaching") (Fig. 3a). The bleaching technique was used for synchronizing C. elegans cultures at the first larval stage (L1). The principle of the method lies in the fact that worms are sensitive to bleach while the eggshell protects the embryos from it. After treatment with an alkaline hypochlorite solution and rinsing, embryos were maintained on NGM agar plates without food, which allows hatching but prevents further development. Once hatched, the L1 larvae were transferred onto NGM agar plates seeded with OP50 E. coli as a food source until the worms reached the adult stage. A second and final bleaching step was performed just before irradiation. These synchronized populations of young gravid hermaphrodites from standard, well-fed culture stocks were collected with M9 buffer (3 g/l KH2PO4, 6 g/l Na2HPO4, 5 g/l NaCl, 1 mM MgSO4) and washed three times with sterile water to remove bacteria. Then, worms pelleted via centrifugation (2 min, 2,000 rpm, room temperature) were treated with a freshly prepared alkaline hypochlorite solution (1.5% (v/v) NaOCl, 1 M NaOH). The suspension was swirled every 2 minutes with vortex-mixing (~6 min). The released embryos were pelleted via centrifugation at 2,000 rpm for 2 minutes. The supernatant was carefully discarded, and the embryos were washed three times with M9 buffer, each wash followed by centrifugation at 2,000 rpm for 2 min. The pelleted embryos were suspended in 50 µL of fresh M9, and an aliquot of ~2 µL was directly positioned and used in experimental conditions for micro-irradiation and live imaging using a custom-made support dish described previously by Muggiolu et al. 35. This sample holder provides a stable long-term environment for microscopic analysis and micro-irradiation experiments. The elapsed time between bleaching and irradiation was kept short in order to favour the presence of both one-cell and 2-cell stage embryos. Micro-irradiation. 3 MeV protons (H+, LET = 12 keV·µm−1 in liquid water) were accelerated by a 3.5 MV electrostatic accelerator (Singletron, High Voltage Engineering Europa, The Netherlands) at the AIFIRA facility 40. In order to target a single cell of a 2-cell stage embryo, the beam was strongly collimated to reduce the particle flux to a few thousand ions per second on target and focused using a triplet of magnetic quadrupoles to achieve sub-micron resolution under vacuum. After extraction into air, the beam spot size is 1.5 µm. The delivered dose was controlled by counting the particles with a PIN diode installed downstream of the sample on the microscope objective wheel. Embryos were irradiated with two doses: 10^3 and 10^4 protons per AB cell nucleus. Imaging. Embryos were positioned for micro-irradiation and live imaging using a custom-made support dish described by Muggiolu et al. 35.
A drop of 3 µL of M9 medium containing a suspension of freshly extracted embryos was deposited and maintained between two 4-µm thick polypropylene foils (Goodfellow) stretched on a rigid frame made of PEEK (polyether ether ketone) by means of a clip, thus avoiding the use of any glue. These two foils were slightly stretched to close the dish and maintained the C. elegans embryos within a thin layer of M9 that still allowed the traversal of the incoming protons. The M9 medium was kept at +19 °C and the mounted sample at 20 °C (±2 °C). Throughout the experiments, the temperature was monitored and recorded using a thermal probe (PicoLog TC-08). AB nuclei were targeted at their centre based on the fluorescence of either GFP-tagged HUS-1, histone H2B, or PCN-1. The AB cell was identified by its larger volume relative to the P1 cell. Embryos were imaged using an inverted fluorescence microscope (AxioObserver Z1, Carl Zeiss Micro-Imaging GmbH) with a ×63 objective (LD Plan-Neofluar 63x/0.75) positioned horizontally at the end of the beamline. Images were captured at 10 s intervals using an AxioCam CCD camera and directly transferred to a personal computer through a FireWire 400 connection. The irradiation was triggered when the nucleus reached a central position in the AB cell. Time series of micro-irradiated embryos were created using the ImageJ software (http://rsbweb.nih.gov/ij/). A minimum of five embryos were analysed for each experimental group. Immunofluorescence staining. Freshly extracted embryo populations were immediately fixed in cold 4% (w/v) paraformaldehyde, and their eggshells were freeze-cracked by placing them at −20 °C for 15 min. Then, embryos were pelleted via centrifugation (2 min, 2,000 rpm, RT), and the paraformaldehyde was replaced by cold acetone for permeabilization (2 min, −20 °C); finally, the embryos were washed twice in M9. After fixation and washing, the M9 was removed and replaced by a freshly prepared solution of phalloidin-AF 594 (10:1000 (v/v), Molecular Probes) and Hoechst 33342 (2:5000 (v/v), Molecular Probes). Embryos were incubated overnight at RT under gentle agitation and then washed via two series of centrifugation (2 min, 2,000 rpm) with M9. The supernatant was discarded, and the pelleted embryos were suspended in 2–3 drops of ProLong Gold Antifade reagent (Molecular Probes) and transferred by pipetting for mounting between glass slides using ProLong Gold Antifade reagent (Invitrogen). Three-dimensional images were acquired with a Leica DMRE TCS SP2 AOBS confocal microscope (oil-immersion objective ×63, 1.4 NA), assembled, and reconstructed using the ImageJ software. Monte-Carlo simulation. Geant4-DNA [41–44], the open-source very low energy extension of the Geant4 Monte-Carlo simulation toolkit [45–47], was used for this work (release Geant4.10.3.p01). We used a physics list based on the recommended "G4EmDNAPhysics_option4" constructor 44. The Geant4-DNA processes are all step-by-step processes; as such, they simulate explicitly all the physical interactions of ionizing particles in the irradiated medium and do not use any production cut-off. In the MeV range, the dominant physical processes affecting protons are nuclear scattering, electronic excitation, ionization, and charge exchange. Further details on the physical process classes can be found in the Geant4 documentation (http://geant4-dna.org/) 48. The primary beam was modelled as a mono-energetic 3 MeV proton beam with a 1.5 µm FWHM Gaussian distribution. The 2-cell stage embryo of C.
elegans was modelled as a parametrized geometry, designated as "phantom" in the following. The phantom was simulated in the form of a voxel arrangement determined from images acquired with a confocal microscope. The methodology for converting confocal image data into a 3D phantom has been described previously in Barberet et al. 22. The set of images of the 2-cell stage embryos acquired from the confocal microscope was transferred into the public-domain ImageJ software (http://rsbweb.nih.gov/ij/) for 3D reconstruction (Fig. 1a,b). An intensity threshold was then applied to each colour channel to separate fluorescent objects from the background in each slice of the stack. The nuclear volume was defined as the green channel by manually selecting a ROI around the chromatin on the red channel based on contrast selection, and cropping the region outside the ROI (Fig. 1c). The chromatin, nuclear volume, and embryo volume could be extracted into an individual text file containing: (i) the total number of voxels for the three channels (red, green, and blue); (ii) the voxel size along the three dimensions; (iii) a position shift to centre the embryo in the simulated irradiation dish; and (iv) the list of each voxel's position and material composition, as well as the fluorescence intensity content of the voxel. The phantom could then be imported directly into the Geant4 simulation using the method described by Incerti et al. 11. The implemented phantom was formed of multiple identical parallelepiped voxels of the size indicated by the confocal images. The simulations were performed with 10^3 and 10^4 incoming protons to reproduce the experimental conditions.
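As an illustration of this image-to-phantom conversion, the following hedged sketch thresholds each channel of a confocal stack and writes a voxel list in the four-part layout described above; the exact file format and the function names are our assumptions, not the authors' code.

import numpy as np

def write_phantom(stack_rgb, voxel_size_um, shift_um, path):
    # stack_rgb: (z, y, x, 3) float array; channel 0 = actin (red),
    # 1 = nucleus ROI (green), 2 = chromatin (blue).
    # A simple per-channel mean threshold separates objects from background.
    thresh = stack_rgb > stack_rgb.mean(axis=(0, 1, 2))
    with open(path, "w") as f:
        # (i) total voxel count per channel
        f.write(" ".join(str(int(thresh[..., c].sum())) for c in range(3)) + "\n")
        f.write(" ".join(map(str, voxel_size_um)) + "\n")   # (ii) voxel size
        f.write(" ".join(map(str, shift_um)) + "\n")        # (iii) position shift
        # (iv) per-voxel position, material label, and fluorescence intensity
        for z, y, x in np.argwhere(thresh.any(axis=-1)):
            c = int(np.argmax(thresh[z, y, x]))
            f.write(f"{x} {y} {z} {c} {stack_rgb[z, y, x, c]:.3f}\n")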
7,173.6
2019-07-22T00:00:00.000
[ "Biology", "Physics" ]
An Agent-Based Model of Sustainable Corporate Social Responsibility Activities An agent-based model of firms' and their stakeholders' economic actions was used to test the theoretical feasibility of sustainable corporate social responsibility (CSR) activities. Corporate social responsibility has become important to many firms, but CSR activities tend to get less attention during busts than during boom times. The hypothesis tested is that the CSR activities of a firm are more economically rational if the economic actions of its stakeholders reflect the firm's level of CSR. Our model focuses on three types of stakeholders: workers, consumers, and shareholders. First, we construct a uniform framework based on a microeconomic foundation that includes these stakeholders and the corresponding firms. Then, we formulate parameters for CSR in this framework. Our aim is to identify the conditions under which every type of stakeholder derives benefits from a firm's CSR activities. We simulated our model with heterogeneous agents by computer using several scenarios. For each one, the simulation was run 100 times with different random seeds. We first simulated the homogeneous version to verify the concept of our model. Next, we simulated the case in which workers had heterogeneous abilities, the firms had costs for CSR activities, and the workers, consumers, and shareholders had zero CSR awareness. We tested the robustness of our simulation results using sensitivity analysis. Specifically, we investigated the conditions for the pecuniary advantage of CSR activities and the effects offsetting the benefits of CSR activities. Finally, we developed and simulated a new model incorporating bounded rationality. The results show that the economic actions of stakeholders during boom periods greatly affect the sustainability of CSR activities during slow periods. This insight should lead to a feasible and effective prescription for sustainable CSR activities. Introduction. Corporate social responsibility (CSR) has become a primary concern in business. Many companies have undertaken CSR activities and make regular reports on them to their various stakeholders: employees, consumers, localities, and shareholders. However, CSR activities generally get less attention during recessions than during boom periods, meaning that they are difficult to sustain. Before describing our study, we explore the meaning of CSR. The concept of CSR is ambiguous and thus very confusing (Morsing 2006). It can be regarded as philanthropy, cause-related marketing, employee volunteerism, or another innovative program. CSR activity is narrowly defined here as certain social action programs, such as philanthropic programs, that do not directly benefit the firm. If a model can show that CSR activities under our narrow definition can be sustained, then CSR activities under a wider definition will be more readily sustainable. There have been a number of studies on CSR activities. Empirical studies have found evidence that CSR activities produce non-economic benefits in the form of consumer donations (Lichtenstein et al. 2004); however, the variables used for measuring the benefits were difficult to control. Other empirical studies have focused on the economic benefits. Baron (2004) developed a model of a firm's CSR activities and the risks related to activist campaigns such as boycotts. Testing using this model revealed the importance of strategic CSR activities; however, the societal effects of CSR activities were not analysed.
Does the economic climate affect CSR activities? In a special issue of the Journal of Productivity Analysis on CSR and economic performance, Paul and Siegel (2006) pointed out that the vast majority of studies on CSR and its effect on financial measures had been done from an economic perspective. They suggested that a more fundamental issue is the relationship between economic performance and CSR behaviour, where economic performance entails technological and economic interactions between output production and input demand, with respect to the opportunity costs of inputs and capital formation. They concluded that the costs of CSR activities must be balanced by the benefits motivating firms to carry them out. Selvi et al. (2010) studied Turkey's financial crisis in 2007 and mentioned that proponents of CSR claim that it has many benefits for the company, such as enhancing its reputation, while opponents claim that it cannot protect a firm from financial harm in times of crisis. Reinhardt et al. (2008) addressed the question of how one should think of the notion of firms sacrificing profit in the interest of society. They considered cases of "voluntary" CSR, "reluctant" CSR, and "unsustainable" CSR. As an example of reluctant CSR, they pointed out that investors might "be forced to accept profit-sacrificing activities that are the result of external constraints", like having to use equipment designed for higher than required pollution limits. If CSR activities are unsustainable, firms might have to "raise prices, reduce wages, accept smaller profits, or pay smaller dividends". In the short term, this could lead to a loss of market share, an increase in insurance costs, an increase in borrowing costs, and a loss of reputation. Possible long-term consequences include shareholder litigation, corporate takeover, and closure. So why would a firm choose to continue CSR activities that are unsustainable? One explanation they present is the principal/agent problem: managers (the "agents") may make decisions that commit the firm to short-term CSR activities that are not to the benefit of the owners (the "principals"), meaning that those activities should not be continued in the long run (see Baron 2006, for example). We can survey some theoretical approaches for describing CSR. Fehr (1999; 2002) introduced psychological insights into a theoretical economic model in the form of a function for the pecuniary payoff to both oneself and others; i.e., he assumed a kind of altruism. Since awareness of CSR includes altruistic motivation, he used a more general framework; however, it is difficult to apply his model to CSR activities. Brekke (2004) developed a microeconomic model in which social welfare is added to the pecuniary payoff function. Testing with this model showed that "green firms", i.e. firms with a high level of CSR activity, may be better able to survive in the long run and that the CSR profile of a firm may affect both wages and worker productivity. The key point of his mechanism for survival is that morally motivated workers work harder when they are employed by a green firm. That is, firms with a higher level of CSR activity can potentially survive longer without public intervention. However, his model has an unrealistic assumption: each worker calculates his or her social welfare utility in a society in which all workers act like the worker. Moreover, the other stakeholders are not included. Shinohara et al.
(2009) modelled CSR activity using an agent-based approach. In their model, dissemination of CSR activities requires that consumers socially learn to value such activities and that firms identify economic incentives for embracing them. The model, however, is not based on microeconomic principles; for example, it lacks a function for maximising an agent's benefit. Moreover, the model has a fixed interaction topology and no interaction among multiple stakeholders. Therefore, it is insufficient for addressing Friedman's critique. A model with a stronger economic framework is needed. Aim and overview. To understand CSR activities from the economic viewpoint, a model is needed that takes into account the various types of stakeholders. Moreover, it should reflect the observation that firms' costs for CSR activities and stakeholders' CSR awareness differ. We therefore analyse how a firm with a certain level of CSR activities behaves economically in a society of stakeholders with differences in CSR awareness. This means that a static approach to solving the equilibrium condition is insufficient. To overcome this insufficiency, we developed an agent-based model. In accordance with Axtell's classification (2000), our approach corresponds to "third use, models ostensibly intractable or provably insoluble: agent computing as a substitute for analysis". Our model focuses on three types of stakeholders: workers, consumers, and shareholders. First, we construct a uniform framework based on a microeconomic foundation that includes these stakeholders and the corresponding firms. Then, we formulate parameters for CSR in this framework. Our aim is to identify the conditions under which every type of stakeholder derives benefits from a firm's CSR activities. After describing our model in Section 2, we describe our computer simulation using it and present key results in Section 3. The results are discussed in Section 4. We conclude with a brief summary and a mention of future work in Section 5. Model. We first explain the uniform framework in which economic agents operate and describe the concepts for modelling it. Then we describe the agents' awareness and the firm's CSR activities. The notation and structure equations are then defined. Finally, we formulate an agent-based model for simulation. Framework. Our model has four types of economic agents: firms, workers, consumers, and shareholders. The product made by a firm depends on its capital stock and the sum total of the workers' efforts (Eq. 1). We use the constant elasticity of substitution (CES) production function (Arrow 1961). Product quantity affects both the economic attractiveness of the firm's products (Eq. 6) and worker wages (Eq. 4). Here we assume that the higher the quantity of a product, the higher its price, and that the higher its economic attractiveness, the higher the worker wages. For simplicity, all workers at a firm are assumed to earn the same wage (Eq. 4). The assumption for capital stock is simple: it depreciates at a constant rate, and new stock is added each month (simulation period) (Eq. 2). Investors invest one unit in any firm of their choice each month.
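As a minimal illustration of the capital-stock assumption, the sketch below assumes the common form K_{t+1} = (1 − s_3) K_t + I_t for Eq. 2, with the depreciation rate s_3 = 0.1 used later in the parameter settings; the exact form of Eq. 2 is not reproduced in this text, so this is an assumption.

def update_capital(K, investment, s3=0.1):
    # One simulation month: existing stock depreciates at the constant rate s3,
    # and the month's new investment is added (form of Eq. 2 assumed).
    return (1.0 - s3) * K + investment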
A worker hired by a firm consumes production goods as a consumer, so the number of workers equals the number of consumers. This assumption is important because the effect of a firm's CSR activities redounds through a mechanism in which consumption quantities change as wages change. It may be easier to understand if one regards each worker-consumer pair as sharing a family budget. Firm size is based on the calculated aggregate demand. The demand quantity of firm F is the total wages of the consumers who choose firm F. Shareholder behaviour is modelled on investment behaviour, and shareholders tend to focus on firm size, which is based on the aggregate demand (Eq. 7). Shareholder investment in a company contributes to advances in product quantity (Eq. 1).

In every simulation period, described below, a firm is chosen by workers (Eq. 5), consumers (Eq. 6), and shareholders (Eq. 7) through their job hunting activities, consumption behaviours, and investment destination selections, respectively. The economic actors in each group use a function to calculate the value to them of each firm. Each actor defines a probability vector consisting of all the firms' values and selects one firm. Therefore, all stakeholders have the ability to interact with all firms. Figure 1 shows a conceptual diagram of this framework.

Concept

The greater the level of a firm's CSR activities, the more the firm degrades its product quantity (Eq. 1), because CSR activity is defined here as a cost. The three types of stakeholders bias their evaluations of the firms according to their observations of the firms' CSR activities (Eqs. 3, 5, 6, 7).

An essential point in modelling the job market is a firm's need to find a small number of candidate workers with both high task ability and high CSR awareness. Due to asymmetric information, firms cannot distinguish such workers ex ante. In contrast, workers can distinguish firms with a high level of CSR activity. When a firm with a high level of CSR activity hires workers who have both high task ability and high CSR awareness, it is likely to earn higher profits (Eq. 1), because such workers implicitly approve of their new employer's support of CSR activities and thus exert greater effort in their work.

Workers generally favour firms that pay higher wages; moreover, those who highly value CSR pay more attention to the CSR activities of firms. In our model, every worker calculates the value of every firm and decides where to accept employment on the basis of these valuations (Eq. 5). Since we assume that there is a shortage of workers, each worker can work at whichever firm he or she prefers (Burdett 1998). Workers decide their levels of effort on the basis of their own abilities. The higher the wage they receive, the greater their effort, and the higher a firm's cost for CSR, the greater the effort of the workers who highly value CSR (Eq. 3).

A small number of consumers with high environmental awareness are a key factor in our model. Firms with higher support of CSR set higher prices for their products, while consumers with high environmental awareness favour those products. Consumers also choose firms and buy their goods on the basis of their incomes (Eq. 6). Which firms do consumers choose? Each consumer calculates the value of each firm on the basis of its economic attractiveness and level of CSR activity. Strictly speaking, consumers base their buying decisions on each firm's level of CSR in our model; that is, we assume that a firm's level of CSR activity affects consumer behaviour directly.
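A minimal sketch of the value-weighted choice mechanism described above: each stakeholder scores every firm and then samples one firm with probability proportional to the (non-negative) scores. The scoring expression here is only a placeholder for the paper's value functions VW, VC, and VH, and the proportional-sampling rule is an assumption for illustration rather than the model's exact rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_firm(values, rng=rng):
    """Pick one firm index with probability proportional to its value.

    `values` stands in for the outputs of a value function such as
    VW (workers), VC (consumers), or VH (shareholders).
    """
    v = np.clip(np.asarray(values, dtype=float), 0.0, None)  # keep weights non-negative
    probs = v / v.sum() if v.sum() > 0 else np.full(len(v), 1.0 / len(v))
    return rng.choice(len(v), p=probs)

# Hypothetical scores of four firms as seen by one CSR-aware worker:
# economic attractiveness plus a bonus proportional to CSR activity.
attractiveness = np.array([1.0, 0.8, 1.2, 0.9])
csr_level = np.array([0.3, 0.0, 0.1, 0.4])
awareness = 0.5
scores = attractiveness + awareness * csr_level
print(choose_firm(scores))
```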
Shareholders (investors) who highly value CSR bias their investments toward firms with more CSR activities. They have a function for calculating the value of firms and use it to decide their investment allocations (Eq. 7). For simplicity, we consider neither dividends nor shareholder consumption behaviours.

A conceptual diagram of the parameters related to CSR is shown in Figure 2. In the notation, w_i, q_i, and x_i are defined relative to W_i, Q_i, and X_i, respectively, and #W, #F, and #H are the number of workers, firms, and shareholders, respectively. For example, w_i is the relative wage level for workers at firm i. Parameter s_i is defined as shown in Table 3.

Agent-based model

On the basis of the notation and structure described above, we constructed an agent-based model of firms, workers, consumers, and shareholders to investigate the effect of CSR activities. The agents do not have adaptive processes, so the level of CSR awareness and activities does not change throughout the simulation except where otherwise noted. While Shinohara (2009) used agents with adaptive processes, those processes were modelled without a function for maximising an agent's benefit, and that model cannot be used to analyse variations in CSR awareness in society after the dissemination of CSR awareness.

The parameters for the simulation were set as shown in Table 4. The pseudocode for the model is shown in Appendix A, and source code in the C language is available online. Some parameters were calibrated. Equation 8 (Table 4) shows that s_2 and s_3 must be close to each other to balance the labour factor with the capital stock factor in the production function (Eq. 1); if they are not balanced, a change in either has little effect on product quantity. Our aim is to observe the effect of both factors on the cost of CSR activities, so we set s_2 = s_3, which is a limitation of our study. It is common in economics for the depreciation rate, s_3, to be set to 0.1 (Alti 2003). Parameters s_1 and s_5 and the range of A affect model stability, so they are set for easily observing the behaviour of the simulation model.

Simulation

We simulated our model with heterogeneous agents by computer using several scenarios. For each one, the simulation was run 100 times with different random seeds. The metric was the average quantity of capital stock over the 100 runs. Since capital stock reflects firm size, it is an appropriate metric for firm survivability.

Next, we simulated the case in which workers had heterogeneous abilities, the firms had costs for CSR activities, and the workers, consumers, and shareholders had zero CSR awareness. Figure 3 shows the average quantity of capital stock (K), wage (W), and demand (X). It shows that the greater a firm's cost for CSR, the worse its performance. This supports the well-known insight that CSR activities are not economically advantageous to firms.

Figure 3. Values of W, X, and K for different degrees of firm cost for CSR activities when the other entities had zero CSR awareness. As formulated in Appendix A, each firm has a different CF_i.

Results

To determine the effect of a firm's CSR activities on its survival, we tested various cases in our agent-based simulation. First, we investigated which type of stakeholder is most effective for sustaining CSR.
Figure 4 shows that the CSR effect strongly depends on the social tolerance ratio of a firm's performance to its CSR activities. If the ratio is low (s_1 = 0.1), all three types of stakeholders substantially affect the firm's CSR activities. If the ratio is much higher (s_1 = 0.3), firms with much CSR activity have trouble surviving. Moreover, as shown in Figure 4, for any type of stakeholder, a firm's cost for CSR activities has the effect of making the firm larger. Although the advantage of CSR activities for a firm subsequently declined, it did have inertia.

We then extended this case by assuming that stakeholders of one type retained their CSR awareness while those of the other types lost it. Figure 6 shows the trend in the quantity of investment for the same two firms. The advantage of CSR activities was retained despite all but one stakeholder type losing its CSR awareness.

Figure 6. Trend in quantity of investment for the same two firms as in Figure 5 when only one type of stakeholder retains its CSR awareness after simulation period ten (the curve for shareholders is omitted because it is almost the same as that for consumers).

Finally, we tested the effect of CSR activities for different rates of capital stock depreciation. Figure 7 shows the results for various values of the depreciation rate s_3. The greater the inertia of stock depreciation (i.e. the lower s_3), the greater the advantage of CSR activities.

Figure 7. Effect of CSR activities for depreciation rates s_3 = 0.001, 0.1, and 1.0. Note that s_2 was set equal to s_3, and the values are normalized.

Sensitivity analysis

We tested the robustness of our simulation results by using sensitivity analysis. Specifically, we investigated the conditions for the pecuniary advantage of CSR activities and the effects offsetting the benefits of CSR activities.

First, we analyzed whether there is a threshold for CSR awareness. As shown in Figure 6, CSR awareness by only one stakeholder type is sufficient to maintain the advantage of CSR-aware companies. This phenomenon is implicit in the model, so we investigated it in detail by conducting additional experiments. As shown in Figure 8, surprisingly, one stakeholder type with CSR awareness is sufficient to maintain the overall advantage of CSR activities even if the remaining stakeholders have no CSR awareness initially.

Figure 8. Difference in quantity of capital stock between the firms with maximum and minimum expense for CSR activities when the CSR awareness of two of the three stakeholder types disappears (the curve for shareholders is omitted).

How does the strength of CSR awareness weaken? Figure 9 shows that the strength of CSR awareness weakens when the simulation period reaches ten. We investigated whether the degree of the advantage of CSR awareness increases with the number of stakeholders with CSR awareness. We found that the advantage of CSR activities is retained if at least 30% of one stakeholder type retain their awareness.

Figure 9. Difference in quantity of capital stock between the firms with maximum and minimum expense for CSR activities when CSR awareness shows an x% drop at simulation period ten (the curve for shareholders is omitted).

Next, we tested two effects offsetting the benefits of CSR activities. The first effect is an increased marginal cost of production.
We represent this by adding a multiplier m to the production function (Eq. 1), giving Eq. (1'). Figure 10 shows the change in capital stock with an increase in m. The change in the marginal cost of production clearly had no effect on the benefits of CSR activities, although there were some slight differences.

Figure 10. Difference in quantity of capital stock between the firms with maximum and minimum expense for CSR activities for different values of the multiplier m.

The second effect is a change in the distribution of workers' ability. We set the range of their abilities, r, to 0.1 (i.e. the range of their abilities was 0.9 to 1.1) in the basic model. The simulation results for various values of r are shown in Figure 11. As in the case of a multiplier added to the production function, a change in the distribution of workers' ability had no effect on the benefits of CSR activities.

Figure 11. Difference in quantity of capital stock between the firms with maximum and minimum expense for CSR activities for different ability ranges r (horizontal axis).

Installing bounded rationality

Some might feel that our model is too restrictive with regard to several points. Generally speaking, agents cannot have perfect information or rationality. We thus revised our model to incorporate this imperfection. The new bounded rational model differs from the model described above in four ways (see the sketch after this list).

1. Separation between workers and consumers. Shareholders as well as workers consume, so a family budget does not always consist of exactly one worker and one consumer. The new model allows the number of these stakeholders to be adjusted freely. In this model, #S represents the number of consumers. The consumption unit is defined as Q / #S because the total income for the economy equals the total production, and consumers are assumed to have equal consumption power.

2. Cognitive limitation. Since a stakeholder usually does not know all of the firms, the new model restricts stakeholder memory capacity. Initially, an agent knows a list of M firms selected randomly. It calculates certain values, such as the average wage, for the firms in its memory list. For simplicity, the new model does not use the value functions VW, VC, and VH in this phase. A stakeholder chooses a firm in its memory list at random; that is, the firms in the list have the same probability of being selected.

3. Informative limitation. Since a stakeholder also usually does not know the actual evaluated values of firms, the new model incorporates a cognitive probability, p, into the value function. Each stakeholder knows the correct value of the value function (VW, VC, or VH) only with probability p; otherwise, the new model uses a random value drawn from a uniform distribution instead of the correct value. These values are calculated in a revision phase newly added at the end of each simulation period. In this phase, the half of the firms in the memory list with lower values are replaced.

4. New values for some parameters. The extensions described above resulted in the use of new values for the parameters listed in Table 5. The other parameters had the values shown in Table 4.
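A minimal sketch of the cognitive and informative limitations in items 2 and 3 above: an agent holds a memory list of M firm indices, scores each remembered firm either with its true value (with probability p) or with a uniform random value, and then replaces the lower-valued half of the list with randomly drawn firms in the revision phase. Function and variable names are illustrative, and this Python sketch is separate from the paper's own C implementation referenced below.

```python
import numpy as np

rng = np.random.default_rng(1)

def perceived_values(true_values, memory, p, rng=rng):
    """Score remembered firms: the true value with probability p, else uniform noise."""
    known = rng.random(len(memory)) < p
    noise = rng.uniform(0.0, 1.0, size=len(memory))
    return np.where(known, true_values[memory], noise)

def revise_memory(memory, scores, n_firms, rng=rng):
    """Revision phase: drop the lower-valued half of the list, refill at random."""
    keep = np.array(memory)[np.argsort(scores)[len(memory) // 2:]]
    candidates = np.setdiff1d(np.arange(n_firms), keep)
    new = rng.choice(candidates, size=len(memory) - len(keep), replace=False)
    return np.concatenate([keep, new])

# Illustrative run: 20 firms, memory size M = 6, cognitive probability p = 0.5.
n_firms, M, p = 20, 6, 0.5
true_values = rng.random(n_firms)            # stand-in for VW/VC/VH outputs
memory = rng.choice(n_firms, size=M, replace=False)
scores = perceived_values(true_values, memory, p)
memory = revise_memory(memory, scores, n_firms)
choice = rng.choice(memory)                  # equal probability within the memory list
print(memory, choice)
```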
Feasible source code for the bounded rational model, written in C, is available online. Figure 12 shows the effect of stakeholders' cognitive error (p) on the advantage of CSR activities in the case where every stakeholder's CSR awareness is assigned randomly. As was shown in Figure 4, CSR-aware firms have an advantage if all stakeholders have CSR awareness. Moreover, the CSR effect strongly depends on the level of the stakeholders' cognitive error. If they have perfect information on the level of firms' CSR activities (p = 1.0), they can correctly choose the firms they prefer. The firms are therefore clearly divided into groups of winners and losers, with the division depending on the size of the stakeholders' memory, M. On the other hand, if they have no information (p = 0.0), performance looks random due to cognitive error. The results for p = 0.1 suggest that CSR activities can keep their advantage even when stakeholders have only slight information about those activities.

We tested the robustness of the CSR effect in the same manner as in Figures 5 and 6. The results are shown in Figures 13 and 14, respectively. As shown in Figure 13, if stakeholders have little information on the level of firms' CSR activities (p = 0.1), the inertia of the advantage of CSR activities quickly loses its power. Even if p = 0.5, the advantage cannot be maintained. In contrast to the heterogeneity model (Figure 5), however, when the level of information known by stakeholders exceeds a certain level (p = 0.7), the inertia maintains its power. This is because the degree of the advantage is sufficient for stakeholders to keep choosing the same firm given their cognitive limitation. The case in which one stakeholder type retains its CSR awareness keeps the advantage even if p = 0.1, as shown in Figure 14. This means that there is strong inertia maintaining the advantage of CSR activities in spite of a large cognitive error.

Discussion

As mentioned in the introduction, it is generally believed that CSR activities are economically inefficient. However, our testing showed that, if a firm's various stakeholders make decisions that take into consideration the firm's CSR activities, its CSR activities have a pecuniary payoff. The stakeholders considered here are workers, consumers, and shareholders. Moreover, simulation using our model showed that the pecuniary advantage of CSR activities has inertia. That is, CSR-aware firms continue to have a pecuniary advantage for a while after the CSR awareness of their stakeholders disappears.

Why does the pecuniary advantage of CSR have inertia? It has inertia because a firm with a high cost for CSR activities tends to establish a high-profit structure. Product quantity is not affected positively by CSR awareness directly but by the cumulative investment in capital stock and by the high wages paid to labour for quality work. As illustrated in Figure 7, investment in depreciable capital stock results in the inertia of CSR activities. The fact that almost all capital stock is depreciable ensures the sustainability of CSR activities. This benefit, however, does not arise unless a uniform framework consisting of various stakeholders is developed.
The most important finding here is that, even if only one type of stakeholder retains CSR awareness, firms retain a pecuniary advantage. The realization by stakeholders of the importance of CSR apparently has an irreversible effect. Once enhanced awareness of CSR, driven by a few stakeholders who highly value CSR, has been attained, its effect may continue even if this awareness diminishes over time. CSR activities thereby become irreversible, and firms with a higher level of CSR activity can better survive. Therefore, a promising approach is to formulate a policy in which at least one type of stakeholder retains its awareness of CSR even in recessions.

The workers in particular greatly affect a firm's CSR activities (Figures 4 and 6). This is because only workers affect CSR-aware firms in two ways: choice of firm and level of effort. In contrast, consumers have only one effect (product choice) and shareholders have only one effect (investment choice). Although shareholders could affect the decisions a company makes about CSR activities by, for example, speaking up at the shareholders' meeting, the effects among shareholders would not be equal. In any case, in our model, the case in which a small group of workers has high CSR awareness has the largest effect among all the cases considered.

These insights provide a clue for how to sustain CSR activity when the economy weakens. Typically during a recession, firms cut their expenses by reducing or eliminating CSR activities. However, if CSR-aware firms could keep the pecuniary advantage they enjoy during boom periods, they would be less inclined to reduce or eliminate their CSR activities. Our findings show that the economic actions of stakeholders during boom periods greatly affect the sustainability of CSR activities during slow periods. These insights should lead to a feasible and effective prescription for attaining sustainable CSR activities.

Summary and Future Work

We simulated an agent-based model of firms and stakeholder economic behaviours in order to test the theoretical feasibility of attaining sustainable corporate social responsibility activity. The results provide significant insights into the pecuniary advantage of CSR activity and the inertia of its effect. The key insights are that investment in capital stock brings inertia to CSR activities and that, even if only one type of stakeholder retains its CSR awareness during a recession, a firm will be less likely to reduce or eliminate its CSR activities. These insights should lead to a feasible and effective prescription for attaining sustainable CSR activities.

Various other possible approaches to attaining sustainable CSR activities remain for future work. One idea is to use more specific case studies; however, our aim was to construct a theoretical and operational model, so we instead need to extend the model itself. For example, the economic entities should be able to learn so that they can imitate the entity achieving the highest performance. Moreover, our model does not directly connect the consumption quantity to a firm's performance, so there is no feedback related to consumption. This means that consumer behaviour, such as boycotting a firm's products, is not considered.
There are other limitations as well. For example, there is only a one-dimensional perspective on CSR activity, so it is hard to distinguish between eco-friendly production and improvement in the work environment. In the current model, the CSR awareness of an employer is independent of that of its employees. The model should be extended to allow dependence between them. Moreover, it should be extended to incorporate a learning process. Although we simplified the model, we can still make it multidimensional without changing the framework once the base model is constructed and verified. Finally, our model does not consider firm loyalty, a major factor in economic sustainability.

Figure 2. Conceptual diagram of CSR.

Parameters:
s_1: upper limit of the social tolerance ratio of a firm's performance to CSR activities.
s_2: ratio of investment to workers' productivity in the production function.
s_3: depreciation rate.
s_4: constant degree of worker effort.
s_5: term for tuning the value functions for firms.

If all the economic entities (firms, workers, consumers, shareholders) have the same level of CSR awareness and activities (= C), all workers have the same ability (= A), and each agent is both a worker and a consumer, the model has a robust equilibrium state. We call this model a homogeneous model, as it is based on the assumption of homogeneous economic agents. Moreover, as mentioned in Section 1.2, the model should reflect the observation that a firm's CSR activities and its stakeholders' CSR awareness are different. Therefore, parameter values should differ between agents and be dynamic. If we allow entity heterogeneity, it is difficult to analyse the model, because Q_i cannot be calculated analytically: the choice functions (VW, VC, and VH) use probability vectors, and a firm's level of CSR activities, CF_i, is heterogeneous. Therefore, we need agent-based simulation.

Figure 4. Effect of degree of CSR activity on firm size (= K) for s_1 = 0.1, 0.2, and 0.3: (a) only workers have CSR awareness; (b) only consumers have CSR awareness (the panel for shareholders is omitted because it is almost the same as that for consumers).

Figure 5. Next, to test the stability and robustness of the CSR effect, we assumed that the CSR awareness of stakeholders suddenly disappears. Parameters (I, K, X, W, Q) were initialised as (#H/#F, I/s_3, Q, Q #F/#W, (1 - s_1 CF_i){(s_4 + A) #W/#F + s_2 K}), where A is the average of A_i. Figure 5 shows the trend in the quantity of investment for the two firms with the largest and smallest degrees of cost for CSR activities. The CSR awareness levels for all stakeholders (workers, consumers, and shareholders) were set randomly in the initial period; they were switched to zero when the number of simulation periods reached ten.

Figure 12. Values of K for different degrees of a firm's expense for CSR activities for p = 0.0, 0.1, and 1.0.

Figures 13 and 14. Trend in capital stock for the firms with maximum and minimum expense for CSR activities, where the CSR awareness of all stakeholders disappears, in the bounded rational model with p = (a) 0.1, (b) 0.5, and (c) 0.7.

Table 1: Total expenditures on CSR activities in 2009 (data: Toyo Keizai 2011).
Table 4: Parameter settings.
Table 5: New parameter settings.
7,285.6
2011-06-30T00:00:00.000
[ "Business", "Economics", "Environmental Science" ]
Distinguishing among technicolor/warped scenarios in dileptons

Models of dynamical electroweak symmetry breaking usually include new spin-1 resonances, whose couplings and masses have to satisfy electroweak precision tests. We propose to use dilepton searches to probe the underlying structure responsible for satisfying these. Using the invariant mass spectrum and charge asymmetry, we can determine the number, parity, and isospin of these resonances. We pick three models of strong/warped symmetry breaking, and show that each model produces specific features that reflect this underlying structure of electroweak symmetry breaking and cancellations.

Introduction

The idea that the electroweak phase transition is driven by new strong dynamics is not a new one [1,2]. Dynamics are responsible for other phase transitions, such as confinement in the theory of color interactions, or superconductivity at low temperature. Unfortunately, with strong interactions, one is faced with intractable computations, and predictions thus entail large errors unless they rely on symmetries. The fact that the idea of dynamical electroweak symmetry breaking is still under investigation after over 30 years attests to its attractiveness: it is a physical mechanism that already occurs in Nature and is devoid of unstable hierarchies. Ironically, large uncertainties do not save strong EWSB from facing very serious experimental constraints. One can estimate the effect of the new sector on the electroweak gauge boson parameters as measured at LEP by considering a reduced set of parameters, S, T and U [3-6]. The inevitable conclusion is that strong EWSB would have shown up at LEP as a deviation from Standard Model predictions, unless symmetries or specific dynamics are in place. Model-builders are accustomed to implementing symmetries in order to suppress deviations, and they have achieved this for T and U. The third parameter, S, has not been tamed by similar approaches: small S seems to require special dynamics, our understanding of which is more tenuous. Our arsenal is essentially limited to two main dynamical assumptions: walking and warping.

In the picture of walking [7-13], a large anomalous dimension could lead to a parametric suppression of S. In the walking scenario, a nearly marginal and slightly relevant operator runs slowly, becomes strong, and breaks electroweak symmetry. This picture often assumes the theory is near an interacting fixed point. It is unclear whether a large anomalous dimension is possible in the context of electroweak symmetry breaking, but this idea is currently under study using lattice [14-19] and analytical [20] methods.

Warping relies on a holographic approach to strong dynamics [21-24]. The holographic correspondence is set between the four-dimensional (4D) strong dynamics of interest and a five-dimensional (5D) perturbative theory. Holography provides insight into suppression mechanisms which are not described in terms of symmetries; 5D suppressions are thus dynamical ones. Indeed, in holography the localization of fields in an extra dimension is interpreted in 4D as an effect of the non-perturbative dynamics, with the renormalization group evolution encoded in the wave-functions along the extra dimension. Interestingly, warping can be implemented in a way that would describe the effect of walking.
While warping does not address the origin of the walking behaviour as a lattice study could, it can predict consequences for other observables that the lattice study cannot, such as production cross sections and lifetimes, and it is computationally less demanding than lattice studies. The two techniques are thus complementary. The reason 5D models can be predictive is as follows. If it were possible to find a localization of fields in the extra dimension which suppresses some undesirable operators, such as the S parameter, one could correlate the localization assumption with other effects, such as the spectrum and decay rates. In other words, while this technique does not offer insight into the mechanism or dynamics underlying a suppressed operator, it does allow us to predict observable consequences. The literature describes many implementations of this idea: solutions to the gauge [25] and mass hierarchies [26,27] and to the flavor problem [28] have all been addressed in the holographic picture as a consequence of localization inside the bulk, and not as a consequence of symmetries.

Warped or walking, strong dynamics leads to scenarios where new resonances show up as composite objects of the strong dynamics. The common prediction of all these models is that the resonances would couple strongly to W, Z bosons and help in the unitarization of WW scattering. Unfortunately, experimental access to this prediction is very limited [29]. In this paper, we propose a different approach: scenarios of strong dynamics may be distinguished using simple channels such as dileptons. In fact, even if the resonance couplings to light fermions are suppressed, the s-channel production may turn out to be the discovery channel: this channel is very clean and provides charge correlations. Indeed, many scenarios predict sizable s-channel production. In this paper, we focus on three distinct scenarios based on warped extra dimensions and technicolor, and use dileptons as the discovery and also the characterization channel. The models considered here are Cured Higgsless (CHL), Holographic Technicolor (HTC) and Low-Scale Technicolor (LSTC), and we outline their main characteristics in the text. The main point to take away is that each of these models addresses electroweak precision tests in a specific way, and that this information is encoded in the spectrum of resonances and in their parity.

The paper is organized as follows: In section 2, we relate the warping and walking scenarios to the mass reconstruction in the dilepton final state. In section 3 we recast the current LHC bounds on dilepton resonances in terms of lepton-resonance couplings for each of the three models. Next, in section 4 we perform a simulation of the di-electron mass reconstruction for each model. As an accurate characterization of the lepton resolution is crucial to determining how well nearby resonances can be distinguished, we model the detector response using the fast simulator ATLFAST++ [30]. Once the masses of new resonances have been determined, their coupling structure is the next question to answer. We discuss a simple, low-statistics method to address the coupling structure in section 5, then conclude with a discussion.

Figure 1. The spectrum of dilepton resonances in the Cured Higgsless (CHL), Holographic Technicolor (HTC) and Low-Scale Technicolor (TCSM) cases. In TCSM, the splitting between the first and second tier of resonances is not specified.
Technicolor/warped in dileptons

In this paper we focus on three different models of dynamical electroweak symmetry breaking; two are five-dimensional and therefore 'warped', while the third is a 'walking', purely four-dimensional scenario. The holographic approach can be used to achieve S ≃ 0, and there the suppression has a definite bearing on the spectrum of the theory. Indeed, possible ways of obtaining a small S in 5D models involve either a direct modification of the spectrum of spin-1 fields (Holographic Technicolor [31-33], or HTC) or a balance of spin-1 versus spin-1/2 particles (Cured Higgsless [34-37], or CHL). In this paper, we are going to focus on characterizing strong EWSB using the dilepton final state. In HTC, the cancellation of the spin-one resonance contributions to the S parameter requires two close-by resonances. Those two resonances would show up in dileptons, whereas in CHL there is only one low-lying resonance, and another resonance waits at a larger mass, about 1.6 times the mass of the first resonance. CHL is a model based on an Anti-de Sitter (AdS) geometry in 5D [35-40], and the ratio 1.6 is just a ratio between zeroes of Bessel functions. HTC is also a model based on 5D warped space-time, but the geometry of HTC is no longer pure AdS; it is AdS with large deviations from conformality [41-43]. Those deviations in the geometry are mapped to the presence of condensates breaking the conformal symmetry of a strongly coupled sector.

Our third scenario, Low-Scale Technicolor (LSTC) (also called the technicolor straw man, TCSM) [44-50], takes a different approach. It gives up calculability, assumes walking, and is more phenomenologically driven; see refs. [51-53] for a different popular four-dimensional scenario. Rather than model the strong dynamics, LSTC introduces a small parameter in a sector of the theory. Some quantities can then be modeled and related to each other as an expansion in this small parameter. Given these model dependencies, one still expects a resonance to lie at low energies playing the role of a techni-rho, rho_TC. The scale of the next resonance, the techni-axial (a_TC), is not fixed in this model, although one could take QCD as guidance, where the ratio between the rho and a_1 is about 1.7, very close to the warped scenario. The spectrum of LSTC also differs from the holographic models in that it contains technipions pi_T, uneaten pseudo-Goldstone bosons that are typically the lightest composite states in the spectrum (other than the W/Z) and which couple strongly to the spin-1 resonances. Technipions couple to SM quarks and leptons proportionally to the fermion's mass, so they rarely decay to leptons. The only impact the technipions have on our study is indirect; if allowed, the rho_T and a_T prefer to decay to technipions, thereby changing the branching fraction of the rho_T and a_T to leptons.

In figure 1, we show the vector spectrum for CHL, HTC and TCSM. In CHL, the splitting between the first and second tier of resonances is set by the AdS geometry. In HTC, the splitting is set by the requirement that the first and second tier of resonances conspire with each other to cancel contributions to the S parameter. As a consequence, in HTC the two tiers are close to each other, although the degeneracy can be resolved experimentally (see section 4). Finally, TCSM makes no assumptions about the spectrum besides the presence of a vector resonance at low energies.
Notice that the above differences between models are not accidental but rather reflect the deeper structure of each model, and the way electroweak cancellations are built into it.

Current bounds on dilepton resonances

Dilepton resonances are a clean search channel, and these searches keep improving as colliders analyze more data. Obviously, the bounds coming from these searches depend on the resonance mass and its couplings to light quarks and leptons. In this paper we consider resonances with masses around a TeV, and the best limits for this mass range come from the LHC. The ATLAS and CMS collaborations are looking for new resonances in dileptons, and results with an integrated luminosity of about 1 fb^-1 are available [54-56]. Both collaborations obtain similar results, but we focus on the ATLAS limits because their results include a comparison with different models, including a sequential Z' (a heavy spin-one resonance with couplings equal to those of the SM Z boson). ATLAS quotes a bound on the cross section times branching ratio (eq. 3.1) for a resonance mass of about 700 GeV. Assuming that the acceptance of our Z' for the cuts used in these searches is the same as quoted for a sequential Z' of the same mass, one can recast the limit in eq. 3.1 as a limit on the coupling and branching ratio to leptons, where B_mod (B_SM) is the branching ratio of the new resonance (of the sequential Z') to dileptons (lepton = e+-, mu+-). For example, if B_mod = B_SM, the bound on the coupling of the Z' to light fermions can be rewritten as g_Z'ff < 0.12 g_SM.

Among the three scenarios considered in this paper, the only model predicting a specific range of couplings to light fermions is CHL. The light fermion-resonance couplings in CHL are tied to the cancellation required in the S parameter. Even within the context of the CHL model, the allowed range of couplings is fairly large for a resonance in the TeV range, g_Z'ff of order 0.15; see tables 3 and 4 in ref. [34]. At the mass point we use in this paper, the branching fraction of the first resonance is B_CHL of about 4 x 10^-3, which is about a factor 8 below the SM branching ratio to electrons. In our second holographic model, HTC, electroweak precision measurements do not set strong constraints on the coupling of resonances to light fermions, since the mechanism canceling the contributions to the S parameter involves only the vector sector. Couplings in HTC are therefore a free parameter and can accommodate the LHC bounds. Our walking model does not address the problem of large contributions to the electroweak precision tests, and therefore indirect measurements impose no constraint on the couplings of SM fermions to the new resonances. Nevertheless, the technirho (rho_T) branching ratio to electrons depends on the assumptions about the technipion (pi_T) mass. Assuming rho_T -> pi_T pi_T is not allowed (as is usually done), the branching fraction for rho_T -> e+ e- varies from 0.002 (m_piT ~ m_rhoT) to 0.009 (m_piT + m_W/Z > m_rhoT) [44-50]. In summary, all models are constrained by LHC searches in dileptons, but these searches do not yet impinge on the parameter space we are interested in for this paper.

Mass reconstruction

For all three models, CHL, HTC and LSTC, we reconstruct the mass of the resonances in the setting of the ATLAS detector at the Large Hadron Collider running at 7 TeV centre-of-mass (COM) energy. For detector effects and reconstruction we use the ATLAS fast simulation program ATLFAST++ [30], which is a ROOT-based standalone C++ program.
Before showing our results for the three models, in figure 2 we compare the lepton resolution in ATLFAST++ with full simulation in ATLAS [57] and another simplified detector simulator, DELPHES [58]. In the left panel, we plot the dielectron mass reconstruction of a Z boson in ATLFAST++. The resolution in this channel agrees with the results from a full-simulation analysis reported by ATLAS [57], and with the mass reconstruction using 2010 data; see figure 3 in [59]. While the agreement with [57] is encouraging, our signal is not the SM Z boson but a heavy resonance which decays into high-p_T leptons, so a further sanity check is needed. Thorough studies have been performed on the resolution of electrons and muons coming from dilepton resonances. As seen in table II of the ATLAS note, ref. [54], the electron channel usually gives the best resolution. Therefore in this paper we concentrate on results from the dielectron channel alone. We expect similar, though slightly weaker, conclusions in the dimuon channel.

The signal is generated using the MadGraph event generator [60] for each of the models, with the lowest-mass resonance set to 700 GeV. These generator events are then passed through Pythia [61] for hadronization and parton showering. The events from Pythia are then passed through ATLFAST++ to simulate the detector effects. We then order the electrons in p_T and select the two highest-p_T electrons for the invariant mass reconstruction. We apply a p_T cut of 25 GeV or higher to the two electrons to reduce the background from QCD fakes. The left side of figure 3 shows the mass of the lightest resonance for all three models in the dielectron channel. The two nearby mass peaks for HTC are well separated within experimental resolution. Note that the mass resolution for TCSM and HTC is dominated by experimental effects, while for CHL it is dominated by the theoretical prediction. In HTC, the resonances are comparatively broad due to enhanced decays to tops: the coupling of the resonances to top quarks is large due to the partial compositeness of the top. The nearby presence of two narrow resonances in HTC is a prediction tied to electroweak precision measurements [31-33]. This prediction is easily tested in a clean dilepton channel. In CHL, a second tier of resonances should show up at m_2nd of about 1.6 m_1st. In the right part of figure 3 we show a larger mass range in dileptons, where the second tier is visible. Obviously, the discovery of this second resonance would require a larger luminosity.

Regarding backgrounds, the main contribution comes from SM Drell-Yan processes with an intermediate off-shell photon or Z boson. Smaller contributions come from ttbar, dibosons and cosmic rays. These have been studied in [54-56] and taken into account when setting the limit on the total cross section in eq. 3.1. To reduce the backgrounds from DY processes, one would apply cuts on the invariant mass and lepton p_T. In this paper we do not show the DY backgrounds in the signal region (m_e+e- around 700 GeV) because we are not setting the overall normalization of the signal, but rather assume that the total signal production lies below the current limit; see section 3. We have simulated SM Z bosons decaying leptonically with ALPGEN [62]. With a cut on lepton p_T > 25 GeV and a cut on the invariant mass of 650 GeV < m < 750 GeV, this background at the LHC at 7 TeV is about 4 fb.
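To make the selection just described concrete, a minimal sketch of the dielectron invariant-mass reconstruction is given below, assuming massless electron four-vectors specified as (p_T, eta, phi): the two highest-p_T electrons passing p_T > 25 GeV are combined and their invariant mass computed. The numerical values in the example are hypothetical, and this is an illustration of the procedure rather than the ATLFAST++ analysis code itself.

```python
import math

def invariant_mass(leptons):
    """Invariant mass of the two leading electrons (pT, eta, phi), massless approximation."""
    selected = sorted((l for l in leptons if l[0] > 25.0), key=lambda l: l[0], reverse=True)[:2]
    if len(selected) < 2:
        return None
    px = sum(pt * math.cos(phi) for pt, eta, phi in selected)
    py = sum(pt * math.sin(phi) for pt, eta, phi in selected)
    pz = sum(pt * math.sinh(eta) for pt, eta, phi in selected)
    e  = sum(pt * math.cosh(eta) for pt, eta, phi in selected)
    m2 = e * e - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))

# Hypothetical event with three reconstructed electrons (GeV, -, rad).
electrons = [(350.0, 0.4, 0.1), (320.0, -0.6, 3.0), (18.0, 1.2, 1.5)]
print(invariant_mass(electrons))  # lands near the 650-750 GeV signal window for these values
```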
Assuming an integrated luminosity (with both experiments) of about 50 fb^-1 at the end of 2012, a 5-sigma significance could be obtained with a signal of order a few fb (after cuts).

Using the charge asymmetry to probe vector meson dominance

Dilepton final states provide excellent energy-momentum resolution, but also precise charge identification. Therefore, besides the mass spectrum, one can obtain a rather accurate measurement of the lepton charge asymmetry in the events. As we will detail below, this asymmetry provides a measurement of the chirality of the resonance coupling to light fermions. The measurement of the chirality of the couplings to fermions is a test of the assumption of vector meson dominance (VMD) [63], often used in models of strong electroweak symmetry breaking. In a vector meson dominance scenario the first resonance: 1) has vector couplings to fermions (before electroweak symmetry breaking), and 2) is well separated from the next tiers. CHL and TCSM are models of VMD, whereas HTC addresses the electroweak precision tests problem by departing largely from VMD. Also note that after electroweak symmetry breaking, the resonances mix with the electroweak gauge bosons, which further modifies the chirality of the heavy mass eigenstate. Therefore, for all the models here, measuring the chirality of the coupling is a combined measurement of the VMD assumption and of the mixing with the SM electroweak gauge bosons.

To obtain an expression for the asymmetry, let us write the couplings of the resonance to fermions in terms of vector and axial components, where Z' is the new resonance, V = (L + R)/sqrt(2) and A = (L - R)/sqrt(2) are the vector and axial couplings, and L, R are the couplings to left-handed (LH) and right-handed (RH) fermions. As we mentioned before, the chiral structure of this coupling is especially important in HTC, where one expects the two nearby resonances to be an admixture of vector and axial interaction states. In CHL and TCSM, one also expects an admixture of V, A couplings of the Z' to fermions, but the admixture is purely due to mixing with the SM Z boson, and is therefore suppressed as (m_Z/m_Z')^2 [64].

To gain information on the chirality of the couplings, we first focus on the parton-level process q qbar -> Z' -> l+ l-. At the Z' pole, one can write the parton-level forward-backward asymmetry in terms of the vector and axial couplings (eq. 5.3). First, note that the Z' production is q qbar initiated, and there is no contamination from gluon-initiated processes, as discussed in [65]. But the LHC is a p p collider, and identifying the direction of the incoming q or qbar is not straightforward. Fortunately, q is a valence parton at the LHC, whereas qbar is a sea parton. When one convolutes the parton-level asymmetry in eq. 5.3 with the distribution functions, one realizes that the quarks tend to have higher momentum than the antiquarks. Therefore, the whole q qbar system is boosted in the direction of the incoming quark. We use this fact to obtain a charge asymmetry which is also proportional to the vector and axial couplings. In this paper we propose a related but different measurement. Instead of measuring the forward-backward asymmetry, we define a charge asymmetry in terms of the variable Delta_eta = |eta_+| - |eta_-| (5.8), the difference between the absolute pseudorapidities of the positively and negatively charged leptons. Note that this asymmetry has been used before by the authors of ref. [68] to characterize s-channel resonances. The charge asymmetry is proportional to the asymmetry defined in eq. 5.3, and therefore provides information on the vector and axial couplings, as we will show below. The simulation in figure 4 confirms this expectation.
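A small sketch of how such a charge asymmetry can be estimated from simulated events, assuming each event supplies the pseudorapidities of the positive and negative lepton. The event-counting definition used here, A = [N(Delta_eta > 0) - N(Delta_eta < 0)] / N, is an assumption made for illustration; the paper's own definition is its eq. 5.7.

```python
import numpy as np

def charge_asymmetry(eta_plus, eta_minus):
    """Event-counting charge asymmetry from Delta_eta = |eta+| - |eta-|.

    Assumed definition: (N_forward - N_backward) / N_total, where an event is
    'forward' when the positive lepton is more forward (Delta_eta > 0).
    """
    delta_eta = np.abs(eta_plus) - np.abs(eta_minus)
    n_fwd = np.count_nonzero(delta_eta > 0)
    n_bwd = np.count_nonzero(delta_eta < 0)
    return (n_fwd - n_bwd) / len(delta_eta)

# Toy example: for purely vector or axial couplings the Delta_eta distribution
# is symmetric (asymmetry near zero); a chiral coupling would shift it.
rng = np.random.default_rng(2)
eta_p = rng.normal(0.0, 1.2, size=10_000)
eta_m = rng.normal(0.0, 1.2, size=10_000)
print(round(charge_asymmetry(eta_p, eta_m), 3))
```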
In figure 4 we plot the distribution of Delta_eta in the laboratory frame at 7 TeV for the decay of a 700 GeV resonance. The simulation of the asymmetry is done at parton level because the measured charge and eta values are close to the true values for the electrons we have simulated, with cuts of p_T > 25 GeV and |eta| < 2.5. Indeed, the electron charge misidentification rate is very low, of order 6 x 10^-3, and the eta resolution is of order 3 x 10^-2 ([69]; see figures 6.21 and 7.2 for the charge misidentification rates). Hence a full simulation is not necessary to get an accurate estimate of the charge asymmetry. The solid distribution corresponds to a vector or axial coupling (L = R), and the blue and black distributions correspond to a chiral case (L = 0 or R = 0) and an intermediate case (R = L/3). In the pure vector or axial case, the distribution is symmetric. We can also compute the total asymmetry and compare it with our theoretical expectation for the dependence on the couplings from eq. 5.6. The agreement is excellent and leads to a fit which accounts for all the parton distribution functions and the effect of cuts, which are not encoded in the parton-level eq. 5.3. For a 700 GeV resonance at 7 TeV COM energy, the fit leads to the relation in eq. (5.9).

Once the mass of the resonance is obtained from the dilepton invariant mass spectrum, one can use a Monte Carlo simulation, the experimental value of the charge asymmetry and the dilepton rate (which is proportional to V^2 + A^2) to obtain the couplings of the resonance to light fermions. Although the results in this section are model-independent, let us mention the expectations for the three models considered in the previous section. CHL is the most predictive model in terms of couplings to fermions, as they have to be adjusted to suppress the S parameter. In CHL, the measurement of the mass and rate in the dilepton channel can easily be inverted into a prediction for the couplings, and hence checked against the measurement of the asymmetry. In HTC, both resonances are an admixture of pure vector and axial states, even before electroweak symmetry breaking, hence we expect V ~ A. In TCSM, the lightest resonance before electroweak symmetry breaking is a pure vector, but ends up deviating from this expectation by 30% after the mixing with the SM gauge bosons.

The charge asymmetry is a measurement which could be done before the forward-backward asymmetry, as it requires smaller statistics than a full differential angular distribution. The charge asymmetry and the forward-backward asymmetry are simply related because of angular momentum conservation. For two incoming RH (LH) particles, the initial state has +1 (-1) unit of angular momentum, while, as we are dealing with massless fermions, all amplitudes with an initial or final state with zero angular momentum vanish. For a LH initial state, a LH final state will be produced with amplitude proportional to (1 + cos theta), where theta denotes the angle between the positively charged lepton and the beam axis; the amplitude is zero when the final angular momentum vector points opposite the initial one. A RH initial state and RH final state gives the same distribution, while a LH (RH) initial and RH (LH) final state produces (1 - cos theta). When a purely vector or purely axial particle is produced, all four combinations of helicity amplitudes contribute and sum to a distribution proportional to (1 + cos^2 theta).
When a chiral (RH = 0 or LH = 0) resonance is produced, only one sub-amplitude contributes (LH -> LH or RH -> RH) and the distribution in the differential cross section is (1 + cos theta)^2. Therefore, the positively charged lepton from a chiral resonance sits preferentially at smaller scattering angle, or higher rapidity, while a vector/axial resonance produces forward and backward leptons (high and low rapidity) symmetrically. The preference for forward leptons in the chiral resonance case is what leads to the shift in |eta_+| - |eta_-|. If the resonance couplings are purely vector or purely axial, then there is no information to be extracted from the charge asymmetry, and instead one has to resort to interference effects with the photon and Z boson [70]. The interference effect leads to a distribution in theta, the angle of the electron with the beam; see ref. [70] for details. But this measurement would require large statistics. Indeed, in s-channel production of a resonance the dilepton invariant mass sits at the resonance mass, far above m_Z, leading to a suppressed interference effect.

Conclusions

Scenarios of strong electroweak symmetry breaking are an attractive alternative to the fundamental Higgs mechanism. In these scenarios new resonances show up as composite objects of the strong dynamics. The common prediction of all these models is that the resonances would couple strongly to W, Z bosons and help in the unitarization of WW scattering. Unfortunately, a direct measurement of these couplings is difficult since it relies on the vector boson fusion channel, requiring a large luminosity and the capability of forward jet tagging. We take a different approach in this paper. Even if the resonance couplings to light fermions are suppressed, the s-channel production may turn out to be the discovery channel. Resonances can be produced through quarks in the proton, which opens the possibility of dilepton final states, which are very clean and provide charge correlations. Indeed, many scenarios predict sizable s-channel production. In this paper, we focus on three distinct scenarios based on warped extra dimensions and technicolor, and use dileptons as the discovery and also the characterization channel. The models considered here are Cured Higgsless (CHL), Holographic Technicolor (HTC) and Low-Scale Technicolor (LSTC), and we outlined their main characteristics in the text.

We first look at the dilepton invariant mass distribution. HTC has a very characteristic spectrum, with two nearby resonances. This degeneracy can be resolved experimentally, and it is a consequence of the viability of the scenario when confronted with precision tests. CHL displays two separated resonances with a mass ratio fixed by the AdS geometry. LSTC just assumes a low-lying resonance, but makes no further assumptions. We have shown in this paper that one can distinguish between these spectra, which themselves imply very definite ways of addressing the electroweak precision constraints. We then turn our attention to the charge information provided in the dilepton final state. We use this charge to construct a charge asymmetry which, combined with the rate in the dilepton channel, can be used to extract the chirality of the couplings of the resonance to the light fermions. This measurement would further help in setting apart different models, as it is related to the mixing of the resonance with the Z boson and, in HTC, to the cancellation in the S parameter.
Moreover, the charge asymmetry is a measurement which can be done before the forward-backward asymmetry, as it requires smaller statistics. Again, this measurement yields information about the underlying structure of EWSB and the way the model fulfills the requirements of electroweak precision measurements.
6,486.2
2012-01-01T00:00:00.000
[ "Physics" ]
Data Amplification for Bearing Remaining Useful Life Prediction Based on Generative Adversarial Network

To deal with the difficulty in bearing remaining useful life prediction caused by the lack of history data, a data amplification method based on the generative adversarial network (GAN) is proposed in this paper, and the parameters of the generator and discriminator in the GAN are determined by a grid search algorithm. The proposed method is verified on the XJTU-SY bearing data sets from Xi'an Jiaotong University. First, 15 time-domain features related to bearing life are extracted as the training data of the GAN to generate virtual data that can be used to build bearing life prediction models. Then, support vector regression and the radial basis function neural network are used to construct the bearing prognostic model based on real data, generated data, and mixed data. The results show that the proposed method can make up for the deficiency of data and improve the accuracy of bearing remaining useful life prediction.

Introduction

Bearings are extremely important components in rotating machinery, and precisely predicting their remaining life is of vital significance for improving the reliability and safety of mechanical systems. It can assist engineers in taking reasonable measures and reducing economic losses, and it has therefore been attracting the attention of more and more researchers. The acquisition of bearing vibration signals requires a large amount of money and time, so full life cycle data for bearing life prediction are limited. This has greatly restricted the development and application of bearing life prediction methods. Generative adversarial networks (GAN), trained with an unsupervised learning method, have powerful data generation capabilities. They can be widely used in both semisupervised and unsupervised learning without complex Markov chains. Compared with other models, GAN can produce clearer and more realistic samples, and they have been successfully applied in many fields. Ledig et al. used GAN for image super-resolution and implemented the first framework capable of inferring realistic natural images from original ones for an upscaling factor of 4 [1]. Moreover, Bai et al. used GAN to directly generate high-resolution faces from blurred small ones, to solve the problems of insufficient information and ambiguous features caused by small sizes in face detection technology [2]. GAN were originally created to solve image problems, yet image model training requires large data sets, which is quite costly if collection and labeling are performed entirely by human beings, whereas GAN are capable of generating data sets by themselves and can thus provide low-cost training data. According to the literature, GAN have been applied to tricky problems in stock market forecasting, order processing, image generation, semantic segmentation, health care, privacy protection, and so on. Zhang et al. proposed a novel adversarial network architecture for stock market forecasting, using multilayer perceptrons as discriminators and long short-term memory networks as generators for predicting stock closing prices [3]. Kumar et al. proposed a kind of GAN for orders on e-commerce websites to explore and process all ambiguous orders [4]. Tirupattur et al.
took advantage of the malleability of adversarial learning by designing a conditional GAN that takes the encoded EEG signal as input and generates the corresponding image [5]. Gecer et al. rebuilt facial texture and shape from a single image with GAN and deep convolutional neural networks (CNN), in which the GAN is used to train a very powerful generator of facial texture in UV space [6]. Souly et al. used GAN for semisupervised semantic segmentation and proposed a semisupervised framework that forces real samples to be close in feature space by adding a large amount of fake visual data [7]. Goel et al. realized automatic screening for coronavirus using an optimized GAN able to generate additional CT images [8]. Liu et al. applied GAN to privacy protection, adding designed noise during model learning to differentiate privacy, and improving model stability and compatibility by controlling the privacy loss [9]. Pascual et al. mainly used GAN to learn complex functions from a large number of data sets for speech enhancement [10]. This paper presents a model that extracts 15 time-domain features from a large amount of bearing vibration data to expand the dimensionality of the data and optimize life prediction, exploiting GAN, regarded as superior to most other approaches in many cases, to accomplish the data extension task.

For the life prediction of bearings, experts at home and abroad have carried out a great deal of research and achieved certain results. Lu et al., who conducted research on the relationship between bearing clearance and load distribution under interference fit, studied the effect of bearing installation dimensional accuracy and the surface machining accuracy of surrounding structural components on fatigue life and bearing load-carrying characteristics by establishing a model of the low-speed spindle drive system of a fan [11]. Shen et al. proposed a new method for predicting remaining life based on relative features and multivariable support vector machines (SVMs) [12]. This method evaluates the degradation law of bearing performance, which is not affected by differences between individual bearings. Correlation analysis is used to select sensitive features as input to construct a model that combines the dual advantages of multivariable regression and small-sample prediction to predict the remaining life of bearings. Although the features and methods are different, the idea is very similar to the general idea of this paper. Aiming at the shortcomings of traditional life prediction methods, that is, the inability to predict the life of space rolling bearings, Dong studied and used the support vector machine method. The phase space reconstruction method is used to select the input parameters of the support vector machine, the particle swarm algorithm is used to optimize the internal parameters of the support vector machine, and a degradation trend prediction model based on the optimized parameters is established to predict the degradation trend and remaining life of space rolling bearings [13]. Our research instead establishes a prediction model based on an extended dimension and quantity of data, which serves the same purpose. In the prediction of the remaining life of bearings, neural networks have been widely used, of which one typical method is long short-term memory (LSTM) [14-16].
However, because LSTM is computationally complex and time-consuming, this paper uses two other methods: support vector regression (SVR) and the radial basis function neural network (RBFNN). In order to solve the problems of difficulty in establishing a bearing life prediction model and low accuracy caused by insufficient historical data at the input of the prediction model, a bearing life prediction method based on condition monitoring data is proposed in this paper. First, multiple time-domain features of vibration signals related to bearing life are extracted and used as training data for the GAN; then, the training data is fed into the GAN and adversarial optimization training is performed; the generated virtual time-domain feature data is then used for life prediction; finally, the remaining life is predicted with two methods, SVR and RBFNN. Generative Adversarial Network 2.1. Generative Adversarial Network Theory. The generative adversarial network is a typical generative model. The idea of the GAN is inspired by the two-player zero-sum game in game theory. It has two modules, called the generator and the discriminator, which learn from each other to produce better and better output [17]. The basic framework is shown in Figure 1. The generator receives random noise, from which new samples are generated. The discriminator is a binary classifier network: training samples and generated samples are taken as inputs, and it distinguishes whether the current input comes from the training samples or from the generated samples, thereby judging the generation quality of the current generator. When a training sample is entered, the expected output of the discriminator is true; when a generated sample is entered, the expected output is false. The generator, in turn, tries to make the discriminator output true for its samples, i.e., to make them consistent with the behavior of the training samples, which creates an adversarial relationship. The optimization process of alternately training the two models can be regarded as a minimax game. Through the adversarial learning mechanism, the performance of the discriminator and the generator is continuously improved. After sufficient training, the discriminator and generator reach a balance, known as the Nash equilibrium. After GAN training is completed, the generator can estimate the distribution of the training samples well and generate new data consistent with that distribution, so as to achieve the purpose of expanding the data [18]. Its objective function is shown in the following formula:
min_G max_D V(D, G) = E_{x ~ P_data(x)}[log D(x)] + E_{z ~ P_noise(z)}[log(1 - D(G(z)))]. (1)
In the formula, E(·) represents the expected value over the given distribution, P_data(x) represents the distribution of the real samples, and P_noise(z) is the low-dimensional noise distribution. By mapping the generator with parameters θ_g into the high-dimensional data space, we obtain P_g = G(z, θ_g). Optimization of Generative Adversarial Networks. To train both networks so that the generated samples are consistent with the training-sample distribution, optimizing the generator and the discriminator simultaneously is complicated to implement and is unlikely to give the desired effect.
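To make the adversarial objective and the alternating optimization scheme described in the next paragraph concrete, the following is a minimal PyTorch-style sketch of one training step. The network sizes, optimizer settings, and the use of a 15-dimensional feature vector are illustrative assumptions, not the exact configuration used in the paper.

import torch
import torch.nn as nn

feat_dim, noise_dim = 15, 8   # assumed dimensions, for illustration only

generator = nn.Sequential(
    nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, feat_dim))
discriminator = nn.Sequential(
    nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def train_step(real_batch):
    n = real_batch.size(0)
    # Step 1: fix the generator, update the discriminator so that real
    # samples score close to 1 and generated samples score close to 0.
    fake = generator(torch.randn(n, noise_dim)).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(n, 1)) + \
             bce(discriminator(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Step 2: fix the discriminator, update the generator so that its
    # samples are scored as real (close to 1) by the discriminator.
    g_loss = bce(discriminator(generator(torch.randn(n, noise_dim))),
                 torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

Repeating this step over many iterations corresponds to the alternating procedure explained below, and training stops when neither network can improve against the other, i.e., at the Nash equilibrium discussed above.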
Therefore, the discriminator and generator are optimized alternately: first, the generator is fixed and the discriminator is trained until its discrimination accuracy is maximal, i.e., the discriminator judges as accurately as possible; then, the discriminator is fixed and the generator is trained so that the discriminator's accuracy on generated samples becomes minimal, i.e., the quality of the generated samples is maximal. During the training process, both are constantly optimized to improve their respective discrimination and generation abilities, until the discriminator and generator reach the Nash equilibrium and GAN training is complete. The grid search method is an exhaustive search over specified parameter values. By optimizing the parameters of the estimation function with cross-validation, the optimal learning algorithm is obtained. The grid search method is used here to find a better setting for the numbers of generator and discriminator nodes, which not only optimizes the network parameters as far as possible but also keeps the number of training runs and the training time small. While improving the quality of the generated data, it also improves the training efficiency of the GAN. Feature Extraction. We used the data collected in the Xi'an Jiaotong University experiment, the XJTU-SY Rolling Bearing Accelerated Life Test Data Set (data set 1: 35 Hz, 12 kN, bearing 1_1; data set 2: 37.5 Hz, 11 kN, bearing 2_1). It collects 32,769 data points per minute; over the full life cycle, data set 1 was collected for 123 min and data set 2 for 491 min. The vibration signal in the vertical direction was selected. In addition, because each bearing is trained separately, the accuracy of the bearing life prediction results has no direct relationship with the vibration frequency and load of the bearing, nor with whether it is the same bearing or not. For a segment of the vibration signal x = [x_1, x_2, ..., x_N], 15 time-domain features are calculated, where x_min is the minimum, x_max is the maximum, x̄ is the mean, the absolute average is the mean of |x_n|, δ is the variance, σ_x is the standard deviation, S is the skewness, K is the kurtosis, W is the waveform index, x_r is the root amplitude, x_rms is the root mean square, C is the peak (crest) indicator, I is the impulse index, and L is the margin index. Bearing Life Prediction Method 3.1. Support Vector Regression. In traditional regression models, such as the simplest linear regression, the loss is calculated from the difference between the model output f(x) and the real output value y. Support vector regression (SVR) [19] assumes that the model can tolerate a deviation of size eps between the output f(x) and the real y-value. This means that as long as the predicted value of a sample falls within the band around f(x) in which the absolute difference in the y-direction is less than eps, the prediction is regarded as correct. Samples that fall into this band incur no loss; that is, only the support vectors have an impact on the function model. By minimizing the total loss and maximizing the margin, we can get the optimized model, as shown in Figure 2.
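As an illustration of the eps-insensitive regression just described and of the sliding-window prediction used later, the following is a minimal scikit-learn sketch. The kernel, the C and epsilon values, the window length, and the synthetic feature series are assumptions for demonstration only, not the paper's settings.

import numpy as np
from sklearn.svm import SVR

def make_windows(series, window=5):
    # Turn a 1-D degradation-feature series into (window -> next value) pairs.
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

feature = np.linspace(0.1, 1.0, 200) + 0.02 * np.random.randn(200)  # placeholder series
X, y = make_windows(feature, window=5)
split = int(0.8 * len(X))            # 80% of the samples for training, as in the paper

model = SVR(kernel="rbf", C=10.0, epsilon=0.01)   # epsilon sets the tolerance band
model.fit(X[:split], y[:split])
predicted = model.predict(X[split:])

Predicting one step ahead and sliding the window forward in this way is how the remaining-life trend can be extrapolated from the extracted time-domain features.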
Compared to traditional regression, SVR has the advantages of low generalization error, low computational complexity, and ease of interpretation, and it can handle high-dimensional problems. Therefore, this paper uses the SVR method for life prediction. In the SVR algorithm, the kernel function implicitly adds new features through a feature transformation, turning a problem that is not linearly tractable in the low-dimensional space into a linear one in a high-dimensional space. Thus, the choice of an appropriate kernel function has a very large impact on the regression performance of the SVM and on the final results [20]. Life Span Prediction Method Based on Support Vector Regression. First, the sample data is divided into training sample data and test sample data. Then, the SVR model is trained using the training sample data, so that the model obtains good parameters. Finally, the test sample data is fed into the trained SVR model to obtain the predicted life. During the implementation, most of the sample data is used as training data. The SVR model is trained with the training data first; after training is completed, the test data is imported, and a sliding window is adopted to predict the value at the next moment, so as to achieve the purpose of life prediction. Radial Basis Function Neural Network. The radial basis function neural network (RBFNN) [21] has three layers: the first layer is the input layer, the second layer is the hidden layer, and the third layer is the output layer. The weights between the input layer and the hidden layer are all 1. The activation functions of the hidden-layer neurons are radial basis functions. A radial basis function is a real-valued function whose value depends only on the distance from a center point; the Gaussian radial basis function is the one most commonly used [22]. The connection between the hidden layer and the output layer is the same as in ordinary neural networks, and the weights between them can be changed through training. The linear output layer weights the outputs of the hidden-layer nodes, and the number of neurons in the linear output layer equals the dimension of the output vector [21]. The radial basis function neural network is an efficient feedforward neural network with global approximation properties and the best approximation performance. Its training speed is fast and its structure is simple, so this method is also used to predict the life of bearings. The role of the RBFNN hidden layer is to transform the input vector nonlinearly, mapping the sample points from the input space to a high-dimensional feature space. A linear model in the feature space is then used to model the training samples, or equivalently, the training samples are made linearly separable in the high-dimensional feature space. Figure 3 shows the topology of the radial basis function neural network. Experts in related fields have demonstrated that the radial basis function of the RBFNN has the best approximation performance: as long as there are enough hidden-layer nodes, a multivariate nonlinear continuous function can be approximated with arbitrary precision. At present, the RBFNN is widely used in the fields of information processing, fault diagnosis, physical modeling, judgment and recognition, and image processing. Life Span Prediction Methods Based on Radial Basis Function Neural Networks. The key to the RBFNN lies in the determination of the radial basis functions.
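Before turning to how the centers and widths are determined, the following is a minimal numpy sketch of a Gaussian RBFNN whose linear output weights are fitted by least squares. The random selection of centers (rather than the orthogonal least squares method used in the paper) is a simplifying assumption; the width rule σ = d_max / √(2m) is the one given in the next paragraph.

import numpy as np

def gaussian_rbf(X, centers, sigma):
    # Pairwise squared distances between samples and centers, then Gaussian kernel.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_rbfnn(X, y, m=10, rng=np.random.default_rng(0)):
    # Simplification: pick m centers at random instead of orthogonal least squares.
    centers = X[rng.choice(len(X), size=m, replace=False)]
    d_max = np.max(np.linalg.norm(centers[:, None] - centers[None, :], axis=2))
    sigma = d_max / np.sqrt(2 * m)               # width rule sigma = d_max / sqrt(2m)
    H = gaussian_rbf(X, centers, sigma)          # hidden-layer outputs
    w, *_ = np.linalg.lstsq(H, y, rcond=None)    # linear output-layer weights
    return centers, sigma, w

def predict_rbfnn(X, centers, sigma, w):
    return gaussian_rbf(X, centers, sigma) @ w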
The function value of a point is related only to the distance of that point from the center point, so the position of the center points, the choice of the radial basis width, and the number of radial basis functions all affect the performance of the neural network. This paper determines the centers of the radial basis functions by the orthogonal least squares method. The training procedure is the same as for ordinary neural networks; the difference is that an ordinary neural network only trains the weights between neurons, while the RBFNN also trains the activation functions. The hidden layer uses radial basis functions as the activation functions of its neurons, and the connection between the hidden layer and the output layer is identical to that of ordinary neural networks; that is, the weights between them can be changed by training. The width vector affects the range of action of a neuron on the input information: the smaller the width, the narrower the shape of the action function of the corresponding hidden-layer neuron. Determination of the width σ: it is stipulated that σ = d_max / √(2m), where d_max is the maximum distance between the centers, and m is the number of hidden-layer nodes, that is, the number of basis functions. Turning to the data amplification experiment: the input of the GAN is the 15-dimensional raw feature data, and the output is 5 * 15-dimensional generated data, which is used for data expansion to improve life prediction. First, the generator and discriminator are initialized. During each iteration, the generator is fixed first, and only the parameters of the discriminator are updated. 90% of the data from the original data set and 90% of the output of the generator are selected, which means that the discriminator is presented with two sorts of inputs. The discriminator's learning goal is that if the input comes from the real data set, the output is 1; if it is data generated by the generator, the output is 0, which can be regarded as a regression problem. Next, the parameters of the discriminator are fixed and the generator is updated: the original data is fed into the generator to obtain an output, this output is passed to the discriminator, and the discriminator returns a score. The generator adjusts its parameters to make this score as large as possible, since a larger score means the generated sample is judged to be more like the real data. Life Prediction Using SVR. Three groups of prediction experiments were carried out: using the real raw data to predict life, using the generated data to predict life, and using mixed data to predict life. In experiment 1, 80% of the real original data (1 * 15 dimensions), extracted at even intervals, were used as training sample data. In experiment 2, 80% of the generated data (5 * 15 dimensions), extracted at even intervals, were used as training sample data. In experiment 3, 80% of the real original data and generated data together (6 * 15 dimensions), extracted at even intervals, were used as training data [23]. The SVR model was trained with the above three sets of training data. After training was completed, the test sample data was imported and the value at the next moment was predicted using the sliding window over the extracted features, so that life prediction could be carried out. In order to quantitatively measure the effect of the proposed method, the mean absolute error (MAE) and root mean square error (RMSE) of the three predictions were calculated and compared.
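A minimal numpy sketch of how these two error metrics can be computed from the observed and predicted remaining-life values (the array names are illustrative):

import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error: every individual deviation is weighted equally.
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def rmse(y_true, y_pred):
    # Root mean square error: squaring emphasizes the larger deviations.
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))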
MAE represents the mean of the absolute errors between the predicted values and the observed values; all individual differences are weighted equally in the mean. RMSE measures the deviation between the observed values and the predicted values. MAE is the simplest and most easily interpreted evaluation index and reflects the actual error. RMSE is on the same scale as MAE, but in practice it turns out somewhat larger than MAE, because the errors are squared before averaging, which amplifies the contribution of the larger errors. The two metrics are defined in formulas (3) and (4), respectively:
MAE = (1/n) Σ_{i=1}^{n} |ŷ_i − y_i|, (3)
RMSE = √((1/n) Σ_{i=1}^{n} (ŷ_i − y_i)²). (4)
The smaller the RMSE, the better the fitting effect; similarly, the smaller the MAE, the better the fitting effect. Because data set 1 (Table 1) contains relatively little data, life prediction based on it alone lacks accuracy. Using the generative adversarial network to amplify the data according to the distribution of the raw data greatly saves experimental cost and time cost. Moreover, the prediction results obtained with the generated data and the mixed data are significantly better than those obtained with the real data alone, which verifies the effectiveness of the method. Life prediction using the SVR method on data set 2 is reported in Table 2, and the results are shown in Figures 7-9. Data set 2 contains much more data, and the SVR life prediction results show that the prediction quality of the generated data and the mixed data is not much different from that of the real data, which again shows that the generative adversarial network has a strong ability to generate data. Radial Basis Function Neural Network Lifetime Prediction. In the experiment, the raw sample data were normalized, and the 15 vibration time-domain signal characteristic parameters were divided into two groups. The first 80% of the data were taken as training samples of the RBFNN prediction model and used to determine the model structure and to train the network parameters, while real data were used as test samples to test the model's prediction accuracy; three groups of experiments were again performed. Experiment 1: 80% of the real raw data (1 * 15 dimensions), extracted at even intervals, were used as training sample data. Experiment 2: 80% of the generated data (5 * 15 dimensions), extracted at even intervals, were used as training sample data. Experiment 3: 80% of the real raw data and generated data together (6 * 15 dimensions), extracted at even intervals, were used as training data [23]. The dimension of the input array of the RBFNN prediction model is set to M = 15, and the dimension of the output array to N = 1. The RMSE (root mean square error) and MAE (mean absolute error) of the three predictions are calculated for quantitative comparison. Life prediction is performed on data set 1 using the RBFNN method, and the results are shown in Figures 10-12. Data set 1 (Table 3) lacks data, so the data can be expanded by using the generative adversarial network, which greatly improves the accuracy of the life prediction. When life prediction is performed on data set 2 with the RBFNN method, the results are found in the following table.
Table 4 contains much more data, and the prediction quality of the generated data and the mixed data is not much different from that of the real data, which shows that the generative adversarial network can produce generated data close to the original distribution and that it has a strong ability to generate data. To sum up, the life prediction of SVR is better than that of RBFNN. Conclusion (1) When a lack of raw data leads to inaccurate life prediction results, using the generative adversarial network for data amplification clearly improves the results of model life prediction, so that the life predicted by models built on mixed data and generated data is more accurate than that of models built on real data alone. This result shows that the proposed method can compensate for the data deficiency and improve the accuracy of bearing remaining useful life prediction. (2) Using the vertical vibration signals of these two data sets, 15 time-domain features are extracted from the vibration signal as the GAN training data, and support vector regression and the radial basis function neural network are used to predict the bearing life; the support vector regression method performs better than the radial basis function neural network. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare no conflicts of interest.
5,751.2
2022-08-08T00:00:00.000
[ "Engineering", "Computer Science" ]
OnTARi: an ontology for factors influencing therapy adherence to rehabilitation Background Adherence and motivation are key factors for successful treatment of patients with chronic diseases, especially in long-term care processes like rehabilitation. However, only a few patients achieve good treatment adherence. The causes are manifold. Adherence-influencing factors vary depending on indications, therapies, and individuals. Positive and negative effects are rarely confirmed or even contradictory. An ontology seems to be convenient to represent existing knowledge in this domain and to make it available for information retrieval. Methods First, a manual data extraction of current knowledge in the domain of treatment adherence in rehabilitation was conducted. Data was retrieved from various sources, including basic literature, scientific publications, and health behavior models. Second, all adherence and motivation factors identified were formalized according to the ontology development methodology METHONTOLOGY. This comprises the specification, conceptualization, formalization, and implementation of the ontology "Ontology for factors influencing therapy adherence to rehabilitation" (OnTARi) in Protégé. A taxonomy-oriented evaluation was conducted by two domain experts. Results OnTARi includes 281 classes implemented in the web ontology language, ten object properties, 22 data properties, 1440 logical axioms, 244 individuals, and 1023 annotations. Six higher-level classes are differentiated: (1) Adherence, (2) AdherenceFactors, (3) AdherenceFactorCategory, (4) Rehabilitation, (5) RehabilitationForm, and (6) RehabilitationType. By means of the class AdherenceFactors, 227 adherence factors, 49 of them hard factors, are represented. Each factor involves a proper description, synonyms, possibly existing acronyms, and a German translation. OnTARi illustrates links between adherence factors through 160 influences-relations. Description logic queries implemented in Protégé allow multiple targeted requests, e.g., for the extraction of adherence factors in a specific rehabilitation area. Conclusions With OnTARi, a generic reference model was built to represent potential adherence and motivation factors and their interrelations in the rehabilitation of patients with chronic diseases. In terms of information retrieval, this formalization can serve as a basis for the implementation and adaptation of conventional rehabilitative measures, taking into account (patient-specific) adherence factors. OnTARi also enables the development of medical assistance systems to increase motivation and adherence in rehabilitation processes. Supplementary Information The online version contains supplementary material available at 10.1186/s12911-021-01512-y. Typical indications for rehabilitation are chronic neurological, cardiological, musculoskeletal, and psychiatric diseases [2]. The therapeutic measures applied are manifold. They range from movement therapy, physical therapy, and pain management to psychological treatment and social counseling to complementary medicine, which includes both naturopathic and alternative medicine treatments [3,4]. Such therapeutic measures are usually deployed initially for a limited period of time, e.g., over four to six weeks during a full-day outpatient or inpatient medical or vocational rehabilitation [3].
For a successful and long-term achievement of therapeutic objectives, however, both a sustainable implementation of the behavioral and lifestyle changes practiced in medical or vocational rehabilitation and a long-term provision of subsequent rehabilitation services are crucial [5]. On average, though, only 50% of patients with chronic diseases achieve good treatment adherence [6]. Generally, adherence can be described as "[…] the extent to which a person's behavior - taking medication, following a diet, and/or executing lifestyle changes, corresponds with agreed recommendations from a health care provider" [6]. In rehabilitation, which is not only characterized by individual measures but passes through various phases, it seems advisable to extend this definition with regard to the recommended measures. Accordingly, not only the extent to which a physician's or therapist's recommendations are followed should be classified as therapy adherence, but rather the adherence to general measures for the implementation of an effective therapy, "[…] regardless of who recommended it in a specific case" [7]. A multitude of factors exist that influence the adherence of patients with chronic diseases, either positively or negatively [8]. In 2003, the World Health Organization (WHO) was able to identify 173 different predictors of adherence in nine different indications [6]. Pursuant to their analyses, five categories need to be distinguished: (1) patient-related, (2) social- and economic-related, (3) therapy-related, (4) condition-related, and (5) health system/healthcare team-related factors [6]. It should be noted that the evidence for individual adherence factors varies depending on indications and therapies [8]. Furthermore, concrete effects are rarely proven [9]. Consequently, it is quite difficult to systematically address and overcome such adherence predictors for a specific patient. Altogether, it becomes clear that there is an enormous number of adherence factors important in rehabilitation processes. These show a high heterogeneity, especially with regard to individual patients. Positive or negative effects on adherence are thus rarely confirmed or are even contradictory. Ontologies have been proven in related areas to be convenient for presenting the existing knowledge on adherence factors in rehabilitation processes from different sources and for making it available for information retrieval, without having to explicitly address the effects of individual adherence factors. For example, the domain ontology OPTImAL successfully formalizes predictors that may influence adherence to physical activity and exercise training in the context of rehabilitative treatment of cardiovascular diseases [10]. Ontologies are also frequently used to directly support rehabilitative processes, such as the standard care for rehabilitation of knee conditions [11] or for planning and adapting physiotherapeutic exercises in rehabilitation of musculoskeletal shoulder disorders [12]. However, no ontology seems to exist that provides an overview of possible adherence factors in rehabilitation processes in general, independent of the underlying disease. Therefore, the objective of this research is, firstly, to extract the existing knowledge in this domain and, secondly, to formalize it in an ontology. Knowledge extraction Knowledge extraction was done manually by retrieving data from various sources, including textbooks and scientific publications (non-ontology knowledge). Initially, a MEDLINE search via PubMed was carried out.
The aim was to identify reviews dealing with the collection or analysis of adherence and motivation factors in rehabilitation processes. A restriction was made via the PubMed interface to the publication types 'Scoping Review', 'Systematic Review', and 'Meta-Analysis'. Titles and abstracts were screened by two independent reviewers. Full texts were analyzed using a qualitative content analysis based on five inductive categories: (1) Indication/population, (2) area of application, (3) rehabilitation phase, (4) adherence factors, and (5) relations. Single phrases were assigned to these categories and documented in tabular form. Adherence factors and relations clearly recognizable as redundant were removed afterwards. For a more detailed description of patient-related adherence factors, models and theories of health and motivation research were analyzed. In principle, these models serve to understand, explain, and predict the health behavior of individuals by investigating influencing variables and effectiveness mechanisms [13,14]. By means of health behavior models, not only could further patient-related adherence factors be collected, but their interactions could also be described. Thereby, motivational models seemed to be of particular relevance [14]. The aim of such models is to identify individual factors, so-called predictors, from which it can be deduced whether a person is willing to change his or her behavior or not [14]. Therefore, the most familiar motivational models were analyzed and modelled as concept maps for formalizing the contained knowledge later on. Given the variability and low evidence regarding the effects of individual factors on patient adherence and motivation, a pure collection of possible factors was made without considering their actual effects [8,9]. Ontology specification The ontology "Ontology for factors influencing therapy adherence to rehabilitation" (OnTARi) was developed following the ontology development methodology METHONTOLOGY [15]. This method is particularly well suited to construct new ontologies without building upon existing ones. Apart from activities and techniques for ontology management, METHONTOLOGY summarizes separate steps for the development of an ontology, like "[…] the specification, the conceptualization, the formalization, the implementation and the maintenance […]" [15]. A theoretical perspective on ontologies was also considered [16,17]. OnTARi is intended to represent potential adherence factors and their interaction in rehabilitation processes, to address them in conventional interventions or by medical assistance systems. This makes OnTARi beneficial not only for healthcare professionals, e.g., in the preparation of treatment programs, but also for (medical) computer scientists and software developers. For the purposes of information retrieval, OnTARi should be able to answer the following questions: 1. Which adherence factors exist in a certain adherence dimension? 2. Which adherence factors are particularly relevant in rehabilitation processes, that is, hard adherence factors? 3. Which hard adherence factors exist in a certain adherence dimension? 4. Which (hard) adherence factors are particularly relevant in a certain rehabilitation field? 5. Which adherence factors are influenced by a certain other adherence factor? Further details on the ontology specification are summarized in Table 1.
Ontology conceptualization and formalization In accordance with METHONTOLOGY, the conceptualization of OnTARi was based on four successive steps. First, a simple glossary of terms was formed, which contains concepts, individuals, and relations as well as their descriptions. Classes and taxonomies were created in a second step to systematize the identified concepts. Step three dealt with the definition of binary relations. Finally, the dictionary of concepts was created. Glossary of terms The glossary of terms was derived from the preceding knowledge and term extraction. It consists of the identified adherence and motivation factors as well as associated relations. Single statements were summarized into a term or a short phrase and documented with a proper description, synonyms, type of concept, possibly existing acronyms, and German translations. Classes and taxonomies To create OnTARi's taxonomy, a middle-out approach based on the preceding theoretical analyses for term extraction was chosen. Classifications from medicine, psychology, and socioeconomics as well as taxonomies of other domain ontologies were used to enable reusability and interoperability of OnTARi (see Table 2). First, a division into three dimensions as main classes was made: Adherence, AdherenceFactors, and Rehabilitation. Thereby, the dimension Adherence was defined according to the domain ontology OPTImAL [10]. The taxonomy for AdherenceFactors was created successively, in a three-level hierarchy: top-, middle-, and bottom-level. The top-level classification served to represent the five adherence dimensions defined by the WHO [6]. For the standardized documentation of patient-related adherence factors, the 'International Classification of Functioning, Disability and Health' (ICF) in combination with the 'Dimensions of Treatment Motivation' (DTM) was applied [18]. Socioeconomic adherence factors were characterized using the classification of [19] and the specification of functional capacity by Patterson et al. [20]. From an economic perspective, healthcare team/system-related adherence factors can be described as resources. For this reason, the so-called 'Resource-Based View', a theory recognized among economists, was used for the categorization of such factors [21]. As a supplement, the perspective of the social sciences, i.e. the 'Taxonomy of Resources', was included to depict interpersonal resources in addition to economic ones [22]. The dimension Rehabilitation was intended to assign individual adherence factors to a rehabilitation area. This way it should be possible to indicate which adherence factors are of special relevance in a certain rehabilitation area. Thereby, a subdivision into typical areas of rehabilitation, e.g., neurological, internistic, and orthopedic rehabilitation, was made. Definition of binary relations For the definition of binary relations, object and data properties must be differentiated. While object properties represent relations between two classes, data properties can be seen as attributes of a class. Object properties were defined following the principles of object-oriented programming. This means that as many standardized relations as possible should be implemented, e.g., inheritances and object compositions. Only a few relations should be created specifically for OnTARi. Also, data properties should be defined as generically as possible to be applicable in multiple classes. Dictionary of concepts The dictionary of concepts was mainly built on the adherence predictors identified in the literature.
Adherence predictors categorized in line with the bottom-up approach used were defined as individuals in OnTARi. For example, the adherence predictor forgetfulness is an instance of the class MemoryAbility, and the factor lack of clear instructions from the healthcare professionals is an instance of the class TrainingAndGuidanceOfPatients. The dictionary was extended by typical expressions of a class by using existing classifications and taxonomies. LOINC codes, for instance, were used to add the individuals married, living in a partnership, separated, unmarried, divorced, and widowed to the class MaritalStatus. Individuals initially not defined must be supplemented later on with patient profiles stored in an associated database. The dictionary also specifies attributes. This includes unique names of attributes, names of concepts to which an attribute is assigned, types, value ranges, and cardinalities. For example, the attribute has_severity of the class Comorbidity was defined as a string with the value set extremely mild, mild, moderate, severe, extremely severe and cardinality 1. Evaluation and re-design A taxonomy-oriented evaluation by two domain experts - a medical informatician and a physical therapist - was used to verify OnTARi's conciseness, consistency, and completeness in terms of classes, object properties, and instances before implementation [23]. Each expert received OnTARi's taxonomy as an Excel spreadsheet and a web ontology language (OWL) file, the associated ontology specification, and an individual evaluation form to document conceptualization errors. Here, eight types of errors in four categories were documented: (1) inconsistency (circularity errors, semantic inconsistency, and overlaps), (2) incompleteness (incomplete concepts and partitioning errors), (3) redundancy (redundant concepts and identical definitions), and (4) expression errors (incorrect/ambiguous formulations of concepts). In total, 42 inconsistencies, 43 incompletions, four redundancies and six expression errors could be detected and adjusted accordingly (re-design). More details on the expert evaluation can be found in Additional file 1. Implementation The implementation of OnTARi was realized in OWL 2 by using the ontology editor Protégé in version 5.5.0 [24]. Interclass relations were implemented as 'object restrictions' using the predefined 'object properties'. Acronyms, synonyms, definitions, and German translations were embedded via 'annotation properties' as a subclass of 'rdfs:comment' with the datatype 'rdfs:Literal'. Knowledge base There is a variety of different health and motivation theories dealing with the analysis and description of behavior and the facilitation of behavioral change. Also in the specialist literature, both textbooks and scientific publications, more and more work on therapy adherence and patient motivation can be found, focusing on a wide range of indications and thus care processes. Adherence factors in textbooks Treatment adherence in rehabilitation research is a fairly new discipline with little concrete research to date [25]. Probably the best known and most comprehensive work stems from the WHO project "Adherence to Long-term Therapies", launched in 2001. Their report "Adherence to Long-term Therapies: Evidence for action" [6] from 2003 provides a collection of adherence factors as well as a list of possible interventions for individual indications, patients, and settings to increase treatment adherence.
Indication-specific adherence factors were identified through individual reviews and assigned to five adherence dimensions: (1) patient-related (n = 51), (2) social- and economic-related (n = 41), (3) therapy-related (n = 27), (4) condition-related (n = 23), and (5) health system/healthcare team-related adherence factors (n = 30). Apart from asthma, cancer, depression, and diabetes, the report also includes reviews on epilepsy, HIV/AIDS, hypertension, tuberculosis, and tobacco control. Altogether, 173 different adherence predictors could be determined. Frequently mentioned and therefore easy-to-generalize factors having a negative impact on adherence include complex treatments, side effects, a poor working alliance between healthcare professionals and patients, a high frequency of treatments or therapeutic sessions, mental comorbidities, lack of social support and family problems, forgetfulness, and poor understanding of the disease and symptoms. Adherence factors in scientific publications Based on the MEDLINE search described above, 12 reviews on adherence in rehabilitation could be identified [9, 26-36]. The majority deals with the analysis of treatment adherence in cardiovascular diseases (n = 6), especially in acute myocardial infarction [26-31]. Two additional reviews focus on adherence and patient compliance in neurodegenerative diseases [32,33]. A single one addresses the identification of adherence factors in the outpatient care of cancer [34]. The other three reviews do not have a specific target group [9,35,36]. Hall et al. [36] and Essery et al. [9] examine cardiological, neurodegenerative, and musculoskeletal diseases together. According to Essery et al. [9], many influencing adherence factors are transferable to other indications or to rehabilitation in general (generalizability). This applies in particular to adherence factors that could be found in different indications and rehabilitation processes, such as intention, intrinsic motivation, self-efficacy, previous adherence behavior, and social support. In total, the analysis revealed 205 different adherence factors. Thereby, the focus is on patient-related factors. Healthcare team- and system-related adherence factors are considered only marginally, with recommendations from healthcare professionals (n = 3) and referrals from physicians (n = 4) being repeated aspects. The most commonly mentioned socio-economic adherence factors are social support from family and friends (n = 8), access to treatment, i.e. distance, location, and accessibility of treatment (n = 7), as well as employment status (n = 5). Concerning the (general) health status of an individual, adherence factors such as depression (n = 7), smoking (n = 6), body mass index (n = 4), and physical activity and fitness (n = 4) can be identified as influences. However, it should be emphasized that individuals with basically good physical, mental, and emotional health are more likely to be adherent than individuals who have comorbidities or feel too ill to participate in treatment. Alongside age (n = 6) and gender (n = 6), anxiety and fear (n = 7) as well as self-efficacy (n = 6) and motivation (n = 5) are among the most frequently mentioned patient-related adherence factors.
Even if the extent of influences on adherence is difficult to determine and varies from individual to individual, it can be stated that social support by family or friends, the intention to carry out therapeutic measures, intrinsic motivation, and adherence (history) to date are among the strongest predictors. Relations, i.e. dependencies and correlations between adherence factors, are rarely analyzed in the literature. Most of the 103 relations determined are based on general statements or assumptions without evidence-based proof. Adherence factors in health and motivation theories Motivational models of health behavior assume that positive behavioral changes are all the more likely the more influencing factors are present [14]. One of the first health behavior models is the Health Belief Model (HBM) shown in Fig. 1 [13]. It proceeds from the basic assumption that healthy behavior by an individual becomes more likely the higher that person estimates the perceived health threat. The level of personal health threat is estimated to depend on various demographic and psychological variables. Likewise, a cost-benefit balance that the individual perceives as positive increases the probability of a behavior change. Other factors mentioned in the HBM are health motivation and incentives to act, such as the opinion of relatives or the severity of self-perceived symptoms. An extension of the HBM is the Protection Motivation Theory (PMT) [13,37]. Here, fear appeals play a central role. Although they do not have a direct effect on a person's behavior, they address the so-called protection motivation, which is better known as the intention to change a behavior. Intention depends essentially on two parallel processes, perceived health threat and coping appraisal. In contrast to the HBM, the perceived health threat is based not only on the perceived severity of the disease and perceived vulnerability, but also on intrinsic and extrinsic rewards. While the perceived health threat increases with perceived vulnerability and severity, it may decrease with higher intrinsic rewards for unhealthy behavior or an already manifested positive experience with such behavior - 'I feel better when I am not on a diet'. Another widely used theory of health behavior is the Theory of Planned Behavior (TPB) [13,37]. The TPB focuses on the analysis of competence awareness. This includes the self-efficacy already known from the PMT, here called perceived behavioral control. An essential assumption is that self-efficacy no longer only has an effect on the intention to act but also directly influences the behavior of an individual. The TPB adds factors influencing self-efficacy, such as control beliefs and a person's subjective strength. Attitudes can have a reinforcing or mitigating effect on intention. For example, attitudes describe either positive or negative ratings of target behaviors - 'Healthy, vegetarian nutrition is in vogue, […] is only for ecologicals, […] is fun'. A theory very similar to the TPB, but with a stronger focus on the social components of behavior, is the Social-Cognitive Theory (SCT) [10]. It assumes that every person who has problems receives, to a greater or lesser extent, help from outside. Thus, socio-cultural factors, such as social support, also influence the objectives (intention) an individual wants to achieve. However, the strongest predictor remains self-efficacy. It depends on one's own experiences (strongest predictor), observational learning, and verbal persuasion.
Description of OnTARi OnTARi includes 281 classes implemented in OWL 2, ten object properties, 22 data properties, 1440 logical axioms, 244 individuals, and 1023 annotations. Thus, 227 different adherence factors are described and assigned to an AdherenceFactorCategory (see Fig. 2). Even if the effects of adherence factors differ from individual to individual, a differentiation between HardFactors and SoftFactors can be made to indicate tendencies. Adherence factors that are particularly likely to have an influence on a person's adherence are classified as hard factors (n_HardFactor = 49), all others as soft factors (n_SoftFactor = 178). To represent relations and dependencies among adherence factors, 160 influences- and 15 associated_with-properties are modeled. Classes and class hierarchy As seen in OnTARi's metamodel (Fig. 2), a differentiation between six higher-level classes is made: (1) Adherence, (2) AdherenceFactors, (3) AdherenceFactorCategory, (4) Rehabilitation, (5) RehabilitationForm, and (6) RehabilitationType. According to the analyzed domain ontology OPTImAL, the class Adherence is composed of the level and quality of adherence (attributes) [10]. While a person's level of adherence can be measured comparatively clearly, for example by the frequency of performing a therapeutic measure, the quality of adherence refers to an abstract multidimensional construct which is hard to determine. At the top level of the class AdherenceFactors there is a distinction between the five adherence dimensions defined by the WHO (see Fig. 3). Patient-related adherence factors are classified according to the ICF as DemographicCharacteristics, HealthAttitudesAndBeliefs, BehaviouralFactors, and PsychologicalFactors. Psychological factors include many aspects known from health behavior models, such as intention, intrinsic and extrinsic motivation, self-efficacy, self-esteem, and coping appraisal. Socioeconomic adherence factors are categorized using individual aspects of a patient's Socio-economicStatus, including Education, EmploymentStatus, WorkSituation, FinancialSituation, FamilyStructure, and MaritalStatus. Likewise, SocialSupport from family, friends, co-workers, and networks, as well as the TypeOfSupport - informational, emotional, instrumental - play an essential role here. Therapy-related adherence factors can be classified according to specific therapeutic measures, i.e. exercises, medication, and surgery. Thereby, not only the taste of medications, the dosage of medication, and co-prescriptions are among the influencing factors, but also the components of exercises, the number of exercises to be done, and the extent of surgery. In addition, there are also generic therapy-related factors describing the PlanningAndImplementationOfTherapy, such as the target of treatment, required lifestyle changes, complexity of treatment, duration of treatment, and the format of therapy sessions. Condition-related adherence factors essentially include the mental and physical health status of patients as well as a variety of FunctionalFactors describing the physical, psychological, and cognitive functioning of patients. But also signs- and symptom-related, comorbidity-related, and primary disease-related factors are relevant. Healthcare team- and system-related factors consist of interpersonal, human, sociocultural, and financial factors.
Especially important are the TherapeuticRelationship, the TrustInHealthCareProfessionals, the TrainingOfHealthCareProfessionals, and the ReferralByPhysicians. The class Rehabilitation is composed of the is_part_of classes RehabilitationType, RehabilitationForm, and RehabilitationPhase. The class RehabilitationType contains various subclasses covering typical areas of rehabilitation. This way, it is possible to indicate which factors are of special relevance in a certain rehabilitation area. A subdivision into neurological (n = 84), orthopedic (n = 45), psychosomatic and psychological (n = 9) incl. addiction (n = 15), pediatric, geriatric, gynecological, and internistic rehabilitation is provided. Internistic rehabilitation is once again divided into single sub-classes: cardiological (n = 103), gastroenterological, metabolic (n = 18), oncological (n = 45), and pulmonary (n = 17) rehabilitation. Psychological distress, for example, is a particularly relevant factor in metabolic and oncological rehabilitation. An excerpt of the implemented class hierarchy as well as annotations and relations are shown in Fig. 4. Object properties As seen in Fig. 5, OnTARi defines two standard object properties: is_a and is_part_of. The is_a-relation represents a conventional inheritance between a basic class and the corresponding subclass in the sense of object-oriented programming and knowledge representation. The is_part_of-relation also originates from object-oriented programming, namely from object compositions. In OnTARi, such relations are used to model attributes of a class as independent classes without losing the logical structure of the information, i.e. the attribute remains an integral part of the state of a class. This procedure is necessary if an attribute represents an adherence factor itself. Five object properties specially defined for OnTARi are influences, is_associated_with, affects, has_factor_category and is_particular_relevant_in. The influences-relation is used to express that one adherence factor can influence another factor, either positively or negatively. In addition, the is_associated_with-relation defines unspecific relationships between adherence factors. For example, there is a correlation between age and the occurrence of comorbidities. However, this does not mean that age influences comorbidities, only that the probability of occurrence increases with older age. Data properties OnTARi specifies four generic data properties reusable in multiple classes: has_status, has_type, has_quality, and has_level. However, such reuse is not always possible. For this reason, there are a number of other individual data properties, such as has_job_class for a more detailed description of occupational situations or children_in_household for defining family structures. An overview of the implemented data properties is shown in Fig. 5. Use of OnTARi According to the objectives defined in advance, OnTARi is intended, among other things, to serve as a basis for the development of medical assistance systems that can address patient-specific adherence factors as precisely as possible. Requests are made directly via Protégé using simple predicate logic, the so-called description logic queries (DL queries). Using the ELK 0.4.3 reasoner, OnTARi answers questions like those listed in Table 3. The query 'PatientRelatedFactors and has_factor_category some HardFactor', for example, returns all 32 hard factors of the adherence dimension patient-related factors - 19 direct subclasses and 13 indirect subclasses. A part of this query, including its results, is shown in Fig. 6.
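Outside Protégé, the same kind of request could in principle be posed programmatically against the OWL file, for instance with SPARQL via rdflib in Python. The following is a hypothetical sketch only: the file name, the namespace IRI, and the way the has_factor_category restriction is asserted are assumptions based on the names reported above, and without a reasoner such a plain query finds only directly asserted subclasses, unlike the DL query run over the classified ontology.

from rdflib import Graph

g = Graph()
g.parse("OnTARi.owl", format="xml")   # assumed local copy of the ontology file

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX onta: <http://example.org/ontari#>   # placeholder namespace IRI
SELECT ?factor WHERE {
    ?factor rdfs:subClassOf* onta:PatientRelatedFactors .
    ?factor rdfs:subClassOf ?r .
    ?r owl:onProperty onta:has_factor_category ;
       owl:someValuesFrom onta:HardFactor .
}
"""
for row in g.query(query):
    print(row.factor)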
Discussion Based on a broad data extraction from textbooks, established models of health behavior, and systematic reviews, the ontology OnTARi was successfully developed according to the METHONTOLOGY method. Implemented in OWL 2 via Protégé, OnTARi includes a total of 281 classes representing 227 different adherence factors. Thereby, a differentiation between hard and soft factors can be made to indicate tendencies of effects. With OnTARi, potential adherence factors of individual patients or patient groups can be easily identified via DL queries and finally targeted. For example, it can be observed that the adherence factor perceived health threat is influenced, among other things, by the personality of an individual. This makes clear that sensitive individuals are more likely to perceive a situation as threatening than individuals with a more self-aware and down-to-earth nature. In addition to textbooks and theories of health behavior, systematic reviews and meta-analyses were included in the identification of potential adherence factors. There was no separate analysis of individual studies on adherence and motivation in rehabilitation. During the title and abstract screening in PubMed, it became apparent that there were further isolated studies not covered by the identified reviews. However, it can be assumed that these studies provide few new insights, i.e. additional previously unidentified adherence factors, due to the large database of reviews already available (data saturation). In line with El-Sappagh et al. [38], multiple quality criteria have to be taken into account when developing ontologies. Thus, OnTARi was also systematically developed based on standard knowledge using existing terminologies. However, like many other ontologies, OnTARi is neither based on a consolidated top-level ontology, nor does it take into account inter-ontology interoperability [38,39]. This hinders the reuse of OnTARi in combination with other ontologies. Also in terms of completeness, OnTARi has weaknesses, especially with regard to the implemented individuals. Whereas completeness was verified for the implemented classes by two independent reviewers, no such evaluation took place for the individuals. From the outset, only individuals and typical characteristics of a class identified in the literature were modeled, e.g., for age, gender, and nationality. However, no claim to completeness was made, as initially only potential adherence factors were to be represented. Actual patient profiles are to be added accordingly in future work. With regard to the relations between adherence factors implemented in OnTARi, it can be stated that only relations and dependencies explicitly mentioned in the identified literature were modeled. To preserve the evidence base, no relations of our own, however obvious, were added between the influencing factors; at the same time, this reduces the informative value. For example, it is not clear whether an implemented influences-relation constitutes a positive or a negative effect on therapy adherence. Therefore, it would be conceivable to extend this relation in future work by the sub-relations increases and decreases, even though they are harder to model. If the effects of an adherence factor are known, e.g., can be derived from scientific theories and studies, a differentiation of predictors and promoting factors in the rehabilitation process at the level of individuals would be possible.
Given its generic nature, OnTARi currently only allows an implicit mapping of patient profiles. Obviously, the more information is available about a patient, especially about patient-related adherence factors, the more specific the patient profile can be elaborated and the more precisely the identified adherence factors can be addressed. Conversely, this also implies that a minimal set of information about a patient must be available to enable patient-specific queries. This includes demographic information, such as age and gender, indication-specific information, such as diagnosis, duration of therapy and previously received measures, as well as the current adherence and motivation levels. Information retrieval is currently only possible via DL queries. The use of these simple predicate-logic expressions makes it possible to quickly and easily obtain an overview of potential adherence factors, especially hard adherence factors in specific rehabilitation areas. More powerful query languages, such as the SPARQL Protocol and RDF Query Language (SPARQL), also offer the possibility of making queries that take the individual patient profile into account. However, using such queries in Protégé is exceedingly complex, particularly for non-computer scientists. Hence, in future work, a suitable graphical user interface should be implemented to allow easier access to OnTARi. By means of this user interface, patient data describing the patient profile could be documented step by step and easily queried. Conclusion A multitude of factors may influence the treatment adherence of patients with chronic diseases in rehabilitation, either positively or negatively. The effects, if any, of such factors always depend on the individual patient and are therefore rarely evidenced. The developed ontology OnTARi serves as a generic reference model providing a comprehensive overview of potential adherence and motivation factors and their interrelations in the rehabilitation of patients with chronic diseases. Based on the literature review conducted, single adherence factors can be assigned to typical rehabilitation areas, such as cardiological, neurological, or orthopedic rehabilitation. This formalization can serve as a basis for the implementation and adaptation of conventional rehabilitative measures, taking into account (patient-specific) adherence factors. In addition to direct use as a knowledge base in Protégé, OnTARi can also be used as an information retrieval system or even as a knowledge manager in medical assistance systems to increase motivation and adherence in rehabilitation processes.
7,212.8
2021-05-11T00:00:00.000
[ "Medicine", "Computer Science" ]
Bond Strength Properties of a Dental Adhesive with a Novel Dendrimer—G-IEMA: The objectives of this study were to characterize the microtensile bond strength to enamel of two experimental adhesive systems, one containing a novel monomer and the other having the same composition as commercial adhesive systems, and to compare them with commercial materials. Two experimental adhesive systems were developed in the lab, one with Bisphenol A diglycidyl methacrylate (Bis-GMA) and the other with G(2)-isocyanatoethyl methacrylate (G-IEMA) as a substitute for Bis-GMA. Twenty healthy human permanent molars were cut into halves and randomly divided into eight groups based on the application mode. The experimental universal adhesive system without Bis-GMA demonstrated adhesive strength to enamel comparable to that of the other universal adhesive systems containing Bis-GMA. Introduction The major paradigm shift in adhesion to tooth structure occurred in 1955 with the introduction of phosphoric acid etching of enamel by Buonocore. Since then, the evolution of adhesive techniques has allowed dentists to adopt a minimally invasive philosophy in clinical practice [1,2]. Regarding material choice, there is currently a wide offering of adhesive systems available for bonding to tooth tissues, using different adhesive strategies [3]. To simplify the use of these materials, universal adhesive (UA) systems were introduced, allowing clinicians to choose the best adhesive strategy according to different clinical scenarios, whether it is an etch-and-rinse, self-etch or selective enamel etching approach [4-6]. Replacing Bis-GMA while still improving the physicochemical and mechanical properties of adhesives has been researched in recent studies [7,8]. Specifically, some authors have examined the introduction of dendrimers as base constituents of UAs. A second-generation dendrimer derived from isocyanatoethyl methacrylate, G-IEMA, has been investigated as a candidate for the potential replacement of Bis-GMA-based systems [9,10]. Traditional linear crosslinking monomers can be replaced successfully by the dendrimer G-IEMA without influencing the resulting properties. Not only did this monomer significantly improve the experimental UA's degree of conversion, but it was also responsible for reducing the co-polymer shrinkage and controlling water sorption [11]. Further to this, the same authors also observed that G-IEMA formulations could increase the bond strength to dentin, and later showed that they have promising interfacial properties [11,12]. However, to research and prove the beneficial applicability of G-IEMA, further studies are needed. It is therefore important to investigate the role of G-IEMA-based systems on the bond strength to enamel using two different adhesive strategies, while also evaluating their impact on the contact angle of enamel surfaces, which is as yet unknown. Materials and Methods Two experimental adhesive systems, one with Bis-GMA (EM1) and another with G-IEMA as a substitute for Bis-GMA (EM2), were developed in our lab. Two commercial adhesives, Futurabond® M+ (VOCO) (FUT) and Scotchbond™ Universal (3M ESPE) (SBU), were chosen as controls. Twenty healthy human permanent molars, obtained with informed consent (approved by the Ethics Committee of Egas Moniz School of Health & Science), were cut into halves and randomly divided into eight groups (n = 5) according to the application mode (self-etch or etch-and-rinse): FUT_ER, FUT_SE, SBU_ER, SBU_SE, EM1_ER, EM1_SE, EM2_ER, and EM2_SE.
Afterwards, each specimen was polished with 600-grit SiC paper to simulate a smear layer, and the adhesives were applied according to the manufacturer's directions. The etch-and-rinse method employed Octacid orthophosphoric acid (37%) (Clarben). Resin build-ups were made with the Schmidt Composite Nanohybrid (MADESPA), with increments shaped as rectangular prisms. The resin build-ups were applied in 2 mm increments, achieving a total height of 6 mm. The materials were light-cured using the Elipar™ DeepCure-S (3M ESPE) unit, following the manufacturer's instructions, with a 40 s cure time specifically applied to the experimental universal adhesive systems. This unit employs blue LEDs for light curing; its peak irradiance of 1200 mW/cm² was confirmed with a radiometer. After processing, the specimens were kept in distilled water for 24 h at 37 °C. Beams (1 ± 0.2 mm²) were obtained through additional sectioning and tested for microtensile bond strength (µTBS) in a universal testing machine at a crosshead speed of 0.5 mm/min until failure. Data analysis was performed using SPSS (version 28.0, IBM Corp., Armonk, NY, USA) with linear mixed models (LMMs) incorporating fixed effects, at a significance level of 5%. Results The effects of adhesive (p = 0.033) and application method, or protocol, (p < 0.001) on the microtensile bond strength were significant and independent of one another. There was no interaction between the adhesive used and the technique adopted (p = 0.985) (Table 1). Independent of application procedure, SBU displayed a considerably greater µTBS than the experimental EM2 (p = 0.031). No differences were found between any other adhesive pairings. Discussion Ongoing concerns about the biocompatibility of dental materials have called into question the incorporation of Bis-GMA in resin-based restorative materials, due to its Bisphenol A (BPA) content, which can elute and have systemic health implications [10]. The demand for new materials without Bis-GMA has emerged as a preventive measure to reduce exposure to Bisphenol A [13]. The experimental universal adhesive systems without Bis-GMA used here were previously formulated, and their physicochemical properties and adhesion to dentin evaluated, by the same group, and are patented in Portugal (holder: Egas Moniz School of Health & Science). This study followed the same line of research, assessing the adhesion to enamel, which had not been studied until now [11,12]. According to current scientific evidence, for enamel, the best adhesive strategy continues to be the use of orthophosphoric acid prior to the application of the adhesive system [13,14]. Several in vitro studies have demonstrated that universal adhesive systems reach higher bond strengths to enamel when used according to the etch-and-rinse (versus self-etch) protocol, as was observed in this study [5,15]. These results are explained by the reduced demineralization capacity of a universal adhesive system compared to orthophosphoric acid, resulting in an incomplete creation of microporosities, which inevitably reduces the micro-retention of the adhesive [2]. The chemical composition of the adhesive systems may also contribute to the differences observed in this study because, although the experimental adhesive systems were formulated according to the chemical composition of the commercial adhesives, the specific percentage of each component was not disclosed.
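To make the measurement and analysis reported above concrete, the sketch below shows, under stated assumptions, how a µTBS value is computed for each beam (failure load divided by cross-sectional area) and how a linear mixed model with adhesive and application protocol as fixed effects, and tooth as a random effect, could be fitted in Python; the file name and column names are hypothetical, and the original analysis was run in SPSS rather than with statsmodels.

```python
# Minimal sketch, assuming a hypothetical CSV with one row per beam:
# columns "load_N" (failure load, N), "area_mm2" (beam cross-section, mm^2),
# "adhesive" in {FUT, SBU, EM1, EM2}, "protocol" in {ER, SE}, and "tooth" (donor tooth ID).
import pandas as pd
import statsmodels.formula.api as smf

beams = pd.read_csv("utbs_beams.csv")

# Microtensile bond strength in MPa: N / mm^2 = MPa.
beams["utbs_MPa"] = beams["load_N"] / beams["area_mm2"]

# Linear mixed model: adhesive, protocol, and their interaction as fixed effects;
# a random intercept per tooth accounts for beams cut from the same tooth.
model = smf.mixedlm("utbs_MPa ~ adhesive * protocol", data=beams, groups=beams["tooth"])
result = model.fit()
print(result.summary())  # Wald tests judged at the 5% significance level
```

If the interaction term is non-significant, as reported here (p = 0.985), the model would typically be refitted with main effects only before interpreting the adhesive and protocol effects.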
In this study, most of the adhesive systems used had a mild pH (Futurabond® M+ and the experimental adhesives), with Scotchbond™ Universal registering a higher pH, falling within the ultra-mild category. Although good adhesive behavior to dentin is associated with mild self-etching (pH ≈ 2), these solutions are unable to effectively condition the enamel, leading to increased microleakage; a mild pH is essential prior to etching dentin to obtain effective micromechanical retention [16,17]. The pH of a universal adhesive system is a particularly important property because, while an acidic medium is required for the dissolution of the smear layer and smear plugs (opening the dentinal tubules), an excessively acidic adhesive system can remove excess calcium, decreasing the substrate's ability to bond with 10-MDP, which becomes particularly important in adhesion to dentin [16,17]. It is essential to select adhesive systems that contain 10-MDP, taking into account their molecular structure, their hydrophobic behavior and the characteristics of the adhesive interface that favor adhesion [18]. In a previous evaluation of the microtensile bond strength to dentin using the same adhesive systems and protocols as the present study, there were no significant differences between the adhesive systems studied, suggesting that the experimental universal adhesive system without Bis-GMA could be used effectively on dentin [12]. Conclusions The four universal adhesive systems examined (Futurabond® M+, Scotchbond™ Universal, an experimental universal adhesive system with Bis-GMA, and an experimental universal adhesive system without Bis-GMA) showed no statistically significant differences in adhesive strength to enamel when using either the etch-and-rinse or self-etch adhesive strategies. The experimental universal adhesive system without Bis-GMA exhibited adhesive strength to enamel comparable to that of the other universal adhesive systems containing Bis-GMA. The promising behavior of the experimental Bis-GMA-free universal adhesive system indicates the need for further investigations, which should focus on exploring the potential of the G-IEMA dendrimer as a substitute for Bis-GMA in the composition of adhesive systems. Patents This work resulted in a national patent, registered under No. 115,064, "Formulation for a universal dental adhesive system containing a second-generation dendritic cross-linking monomer". Data Availability Statement: Data may be available upon request from the corresponding author.
1,916.6
2023-08-14T00:00:00.000
[ "Materials Science", "Medicine" ]
Small Heat Shock Protein αA-crystallin Regulates Epithelial Sodium Channel Expression* Integral membrane proteins are synthesized on the cytoplasmic face of the endoplasmic reticulum (ER). After being translocated or inserted into the ER, they fold and undergo post-translational modifications. Within the ER, proteins are also subjected to quality control checkpoints, during which misfolded proteins may be degraded by proteasomes via a process known as ER-associated degradation. Molecular chaperones, including the small heat shock protein αA-crystallin, have recently been shown to play a role in this process. We have now found that αA-crystallin is expressed in cultured mouse collecting duct cells, where apical Na+ transport is mediated by epithelial Na+ channels (ENaC). ENaC-mediated Na+ currents in Xenopus oocytes were reduced by co-expression of αA-crystallin. This reduction in ENaC activity reflected a decrease in the number of channels expressed at the cell surface. Furthermore, we observed that the rate of ENaC delivery to the cell surface of Xenopus oocytes was significantly reduced by co-expression of αA-crystallin, whereas the rate of channel retrieval remained unchanged. We also observed that αA-crystallin and ENaC co-immunoprecipitate. These data are consistent with the hypothesis that small heat shock proteins recognize ENaC subunits at ER quality control checkpoints and can target ENaC subunits for ER-associated degradation. After integral membrane proteins are translocated or inserted into the ER, folding and post-translational modifications occur. Within the ER, proteins are also subjected to quality control checkpoints to ensure that only properly folded proteins mature beyond the ER. If folding is inefficient, the misfolded protein may be degraded by proteasomes via a process known as ER-associated degradation (ERAD). This prevents the accumulation of abnormal proteins in the ER, which, left unchecked, may form toxic protein aggregates. Molecular chaperones, which can bind to exposed, hydrophobic motifs in unfolded proteins, play a key role in selecting substrates for this process. ENaC is expressed at the apical membranes of Na+-absorptive epithelia. There, in conjunction with the basolateral Na+/K+-ATPase, ENaC facilitates transepithelial Na+ transport (1). ENaC is found in a variety of tissues, including the lung airway and alveoli and the distal nephron, where ENaC influences mucociliary clearance and extracellular fluid Na+ and volume regulation, respectively (1–3). ENaC is comprised of three homologous subunits, α, β, and γ, although the stoichiometry of the functional channel remains controversial (4–7). Each subunit has short cytoplasmic amino- and carboxyl-terminal domains (50–110 residues), two transmembrane segments, and a large extracellular domain (∼450 residues) (8–10). Like most other molecular chaperones, small heat shock proteins (sHsps) can bind unfolded, aggregation-prone substrates to retain them in solution. In addition, sHsps have been implicated in proteasome function and the ubiquitin–proteasome pathway. We recently observed that α-crystallin domain-containing sHsps facilitate the ERAD of CFTR (11). Although the structure and assembly of CFTR and ENaC are distinct, we hypothesized that these chaperones may also be involved in ENaC quality control. To this end, we established an ENaC expression system in yeast and discovered that sHsps facilitate ENaC α subunit degradation.
We also found that the sHsp ␣A-crystallin is expressed in a cortical collecting duct (CCD) cell line and that ␣A-crystallin and the ENaC ␣ subunit form stable complexes. Finally, we determined the functional effect of ␣A-crystallin expression on ENaC in a heterologous system and found that ␣A-crystallin lowered the number of functional channels because of a decrease in the rate of channel delivery to the cell surface. These data demonstrate a broader role for sHsps in ERAD and represent only the second chaperone class that has been found to impact ENaC biogenesis. Protein Degradation Assays in Yeast-The cDNA encoding HA ␣ V5 (12) was inserted into the yeast pGPD426 constitutive expression vector at the HindIII and ClaI restriction sites. Plasmids were introduced into the indicated strains using lithium acetate-mediated transformation (14). Transformants were selected by growth on ScϪUra medium containing glucose (15). To assess the rate of ENaC ␣ subunit ERAD, cycloheximide chase experiments were performed as described previously (16). The immunoblots were incubated with an anti-V5 antibody conjugated to horseradish peroxidase (Invitrogen) to detect ENaC ␣ subunit and with a rabbit anti-Sec61p antibody (13) as a loading control. The blots were visualized by chemiluminescence as described below and quantified with a Versadoc (Bio-Rad). Reverse Transcription-PCR-Mouse kidney total RNA was from Clontech Laboratories, Inc. (Mountain View, CA). Total RNA from cultured CCD cells was isolated using an RNAqueous 4PCR kit (Ambion, Austin, TX). One microgram total RNA was reverse transcribed using either random hexamers or oligo(dT) as primers using SuperScript II reverse transcriptase (Invitrogen). Nested PCR was performed on control "no reverse transcriptase" template and random-primed or oligo(dT)primed cDNA templates. Immunoblotting-CCD and MDCK cells were maintained as previously described (17,18) and transiently transfected with ENaC subunits and ␣A-crystallin as indicated using Lipofectamine 2000 (Invitrogen) for use the following day (12). The immunoblotting methods were adapted from Hughey et al. (12). Briefly, cytoplasmic extracts of CCD cells were produced by scraping cells into phosphate-buffered saline (137 mM NaCl, 2.6 mM KCl, 15.2 mM Na 2 HPO 4 , 1.47 mM KH 2 PO 4 ) with protease inhibitor mixture set III (EMD Biosciences, San Diego, CA) and passing them 20 times through a 22-gauge needle. Whole cell lysates were produced by scraping cells into 100 mM NaCl, 5 mM EDTA, 50 mM triethanolamine, pH 8.6, and protease inhibitor mixture set III followed by the addition of SDS (Bio-Rad) to 0.5% (w/v) and incubation at 95°C for 5 min as described previously (19). Triethanolamine (final concentration, 75 mM) and Triton X-100 (final concentration, 1.25% (v/v)) were then added, after which samples were agitated for 15 min at 4°C. MDCK cells were extracted into 0.3 ml of lysis buffer (0.4% sodium deoxycholate, 1% Nonidet P-40, 63 mM EDTA, and 50 mM Tris-HCl, pH 8) supplemented with protease inhibitor set III and phosphatase inhibitor set I (Calbiochem, San Diego, CA). The extracts were centrifuged to remove insoluble material. For immunoprecipitation experiments, the supernatants were incubated overnight at 4°C on a rotating wheel with antibodies as indicated and 25 l of protein A immobilized on Sepharose 4B beads (Zymed Laboratories Inc., S. San Francisco, CA). Immunoprecipitates on beads were washed and then subjected to SDS-PAGE (4 -15% gradient gel) and transferred to an Immobilon-NC membrane. 
The membranes were incubated with antibody as indicated overnight at 4°C, followed by peroxidase-conjugated secondary antibody when needed. Purified bovine ␣A-crystallin (Assay Designs) and Precision Protein Standards (Bio-Rad) were used as markers. The blots were developed using Western Lightning Chemiluminescence reagent (PerkinElmer Life Science) and visualized with either BioMax MR film (Kodak, New Haven, CT) or using a Versadoc (Bio-Rad). ENaC Activity Measurements-ENaC mediated Na ϩ currents were measured in Xenopus oocytes by the two-electrode voltage clamp technique (TEV). Stage V and VI oocytes free of follicle cell layers were injected with cRNA encoding wild type ␣, ␤, and ␥ ENaC subunits and varying amounts of cRNA encoding ␣A-crystallin (11), Hsc70 (20), or ␥-glutamyl transpeptidase (21), as indicated. The oocytes were maintained in modified Barth's saline (MBS; 88 mM NaCl, 1 mM KCl, 2.4 mM NaHCO 3 , 15 mM HEPES, 0.3 mM Ca(NO 3 ) 2 , 0.41 mM CaCl 2 , 0.82 mM MgSO 4 , 10 g/ml sodium penicillin, 10 g/ml streptomycin sulfate, 100 g/ml gentamicin sulfate, pH 7.4). TEV was performed 24 h after injection using a DigiData 1320A interface and a GeneClamp 500B Voltage Clamp amplifier (Axon Instruments, Foster City, CA). In experiments with MG-132 (Calbiochem), oocytes were incubated in MBS supplemented with 6 M MG-132 for 3-4 h prior to current measurements (22). Data acquisition and analyses were performed using pClamp 8.2 software (Axon Instruments). The pipettes were pulled from borosilicate glass capillaries (World Precision Instruments, Sarasota, FL) with a micropipette puller (Sutter Instrument Co., Novato, CA) and had resistance of 0.5-5 megaohms when filled with 3 M KCl and inserted into the bath solution. The oocytes were maintained in a recording chamber (Automate Scientific, San Francisco, CA) with 20 l of bath solution and continuously perfused with bath solution at a flow rate of 4 -5 ml/min. The bath solution contained 110 mM NaCl, 2 mM KCl, 2 mM CaCl 2 , 10 mM HEPES, pH 7.4. All of the experiments were performed at 20 -24°C. ENaC exocytosis rates were determined as previously described (23,24) from oocytes injected with cRNAs for ␣S583C mutant and wild type ␤ and ␥ ENaC subunits either in the presence or absence of 6 ng of cRNA for ␣A-crystallin. 24 -36 h after injection, channels at the cell surface were blocked with 1 mM MTSET for 4 min, resulting in a covalent modification of the channel that causes an ϳ80% reduction of current. After removal of MTSET from the bath, Na ϩ current was measured by TEV at Ϫ100 mV every 30 s for 10 min to monitor current recovery. The initial rates of reappearance of amiloride-sensitive currents were determined from the linear portion of the current recovery curve (0 -2 min). ENaC endocytosis rates were determined from oocytes injected with cRNA for wild type ␣, ␤, and ␥ ENaC subunits either in the presence or absence of 6 ng of cRNA for ␣A-crystallin. 24 -36 h after injection, amiloride-sensitive currents were determined by TEV before and after 2, 4, and 6 h of incubation with 5 M brefeldin A. Amiloride-sensitive currents were expressed relative to the initial amiloride-sensitive current, and data for the current declines were compared. ENaC Surface Expression Measurements-The experiments were performed essentially as described (25). Xenopus oocytes were injected with varying amounts of cRNA encoding ␣Acrystallin and 2 ng each of an extracellular FLAG epitope-tagged ␤ ENaC subunit (␤ F ) and wild type ␣ and ␥ ENaC subunits. 
Negative controls utilized wild type ENaC ␤ subunit with no epitope tag (no tag). Two days after injection, the oocytes were blocked for 30 min in MBS supplement with 10 mg/ml bovine serum albumin (MBS/BSA) and then incubated for 1 h with MBS/BSA supplement with 1 g/ml of a mouse monoclonal anti-FLAG antibody (M2) (Sigma-Aldrich) at 4°C. The oocytes were then washed at 4°C for 1 h in MBS/BSA and incubated with MBS/ BSA supplemented with 1 g/ml secondary antibody for 1 h at 4°C (peroxidase-conjugated AffiniPure F(abЈ2) fragment goat anti-mouse IgG; Jackson ImmunoResearch, West Grove, PA). The oocytes were extensively washed (12 times over 2 h) at 4°C and transferred to MBS without BSA. Individual oocytes were placed in 100 l of SuperSignal Elisa Femto maximum sensitivity substrate (Pierce) and incubated at room temperature for 1 min. Chemiluminescence was quantified in a TD-20/20 luminometer (Turner Designs, Sunnyvale, CA). Statistical Analysis-Normalized data from oocytes were normalized within an individual batch of oocytes as previously described (26). All of the data are presented as the means Ϯ S.E. p values were determined by a Student's t test performed with Excel 2003 (Microsoft Corp., Redmond, WA) or a one-way ANOVA with a Newman-Keuls post hoc test performed with Igor Pro 5.05A (Wavemetrics, Lake Oswego, OR), as indicated. A p value Յ0.05 was considered significant. Hsp26 and Hsp42 Facilitate ENaC ␣ Subunit Degradation in Yeast-We recently demonstrated that ␣-crystallin domaincontaining sHsps select aberrant forms of CFTR in both yeast and mammalian cells for ERAD (11). This chloride channel is co-expressed with ENaC in lung epithelia. Together these channels regulate lung fluid volume, which affects mucociliary clearance (27). We therefore hypothesized that chaperones with ␣-crystallin domains may also be involved in ENaC quality control. To test this hypothesis, we inserted the ␣ subunit of mouse ENaC into a yeast vector engineered for constitutive expression. Degradation in both wild type yeast and a variety of yeast mutants was measured by a cycloheximide chase, and the results were quantified after Western blot analysis. The first mutant examined lacked two members of the sHsp family, Hsp26 and Hsp42. Although these two proteins largely recognize the same target proteins, Hsp42 is constitutively present at high levels, whereas Hsp26 expression is temperature-sensitive (28,29). We found that degradation of ␣-ENaC was reduced in the hsp26⌬hsp42⌬ mutant strain compared with wild type yeast (Fig. 1, p Ͻ 0.05 at 20 and 60 min). To verify that the ENaC ␣ subunit is an ERAD substrate, we examined the effect on ENaC ␣ subunit degradation when Ufd1p was mutated. Ufd1p is a component of the Cdc48p/Ufd1p/Npl4p AAA ATPase complex that actively transfers ubiquitinated substrates from the ER to the proteasome (30). The absence of functional Ufd1p significantly decreased ␣ subunit degradation (Fig. 1, p Ͻ 0.05 at 60 min). To confirm this observation, we examined the effect of mutating a component of the proteasome "cap," Cim3p, on ENaC ␣ subunit degradation. Cim3p (also known as Rpt6p) is also a AAA ATPase, and the proteasome cap is thought to drive substrates into the proteasome core for degradation (31). Indeed, we found that the degradation of the ENaC ␣ subunit in the cim3-1 mutant was significantly reduced compared with wild type yeast (Fig. 1, p Ͻ 0.05 at 90 min). These results indi-FIGURE 1. ERAD of ENaC ␣ subunit in yeast requires sHsps for maximal efficiency. 
ENaC HA ␣ V5 degradation was measured by lysing cells at various times during a chase following cycloheximide addition, and protein levels were quantified by Western analysis and normalized to Sec61 levels. ENaC ␣ subunit degradation was measured for both wild type (E) and mutant (F) yeast, as indicated. n ϭ 6 (hsp26⌬hsp42⌬) or 4 (ufd1-1, cim3-1). *, p Ͻ 0.05 versus wild type yeast, determined by Student's t test. SEPTEMBER 21, 2007 • VOLUME 282 • NUMBER 38 cate that the ENaC ␣ subunit is a bona fide ERAD substrate in yeast and that the sHsps promote ENaC ␣ subunit clearance in this organism. ␣A-crystallin Regulates ENaC ␣A-crystallin Is Expressed in the Kidney and in Cultured Collecting Duct Cells-In addition to its function in the lung, the role of ENaC in the distal nephron of the kidney remains a focus of research and clinical interest. Because ENaC is expressed in the CCD of the distal nephron (32) and because ␣A-crystallin is the closest mammalian homolog to Hsp26 in yeast, we determined whether ␣A-crystallin is also expressed in mouse kidney and specifically in the CCD. Reverse transcription-PCR using RNA isolated from both whole mouse kidney and a mouse CCD cell line demonstrated a product at the predicted size of 175 bp using nested ␣A-crystallin primers (Fig. 2, A and B). The identity of the 175-bp product was confirmed by nucleotide sequencing. We confirmed that ␣A-crystallin protein was present in CCD cells by Western blot analysis. Cell lysates of cultured murine CCD cells were probed with anti-␣A-crystallin, and a product at ϳ20 kDa was evident (Fig. 2C). This species was similar to the molecular masses of purified bovine ␣A-crystallin protein alone or when added to lysate. This result confirms that CCD cells endogenously express ␣A-crystallin. ␣A-crystallin and ENaC Form Complexes-To determine whether ␣A-crystallin and ENaC subunits interact, we next performed co-immunoprecipitation experiments. MDCK cells were used for these experiments because of the absence of endogenously expressed ENaC (18). In experiments where untransfected cells were both immunoprecipitated and immunoblotted with an anti-␣A-crystallin antibody, we found that MDCK cells endogenously expressed ␣A-crystallin (Fig. 2C). We performed experiments where ␣A-crystallin along with the ␤, ␥, and HA epitope-tagged ␣ (␣ HA ) ENaC subunits were co-transfected. When lysates of these cells were immunoprecipitated with anti-HA antibody and immunoblotted with anti-␣A-crystallin antibody, we observed that ␣A-crystallin co-immunoprecipitated with the ␣ HA ENaC subunit (Fig. 2C). In control experiments in which either (i) ENaC subunits were not co-transfected, (ii) antibody was omitted from the immunoprecipitation, or (iii) a V5 epitope-tagged ␣ subunit was used, little or no ␣A-crystallin was apparent in the precipitate. We also examined whether endogenous ␣A-crystallin co-immunoprecipitated with the ENaC ␣ HA subunit. When lysates of cells co-transfected with ␤, ␥, and ␣ HA ENaC subunits were immunoprecipitated with anti-␣A-crystallin antibody and subsequently immunoblotted with anti-HA antibody, we found that the ␣ HA ENaC subunit co-immunoprecipitated with endogenously expressed ␣A-crystallin (Fig. 2D). In control experiments where a V5-epitope tagged ␣ subunit was used or antibody was omitted from the immunoprecipitation step, no FIGURE 2. ␣A-crystallin is expressed in CCD cells and co-immunoprecipitates with ENaC ␣ subunit. ␣A-crystallin is expressed in mouse kidney tissue and cultured CCD cells (A and B). 
One microgram of total RNA isolated from native mouse kidney tissue (A) or cultured CCD cells (B) was reverse transcribed using oligo dT (dT) or random hexamers (dN6, CCD cells only) as primers (n ϭ 2). Negative controls were performed in reactions lacking reverse transcriptase (No RT). Predicted reverse transcription-PCR product sizes were 224 bp for ␣A-crystallin outer primers, 175 bp for ␣A-crystallin nested primers, and 174 bp for ␤-actin primers. Note that only nested primers yielded a strong signal of the expected size. C, cultured CCD cell lysates and cytoplasmic extracts were immunoblotted (IB) with mouse anti-␣A-crystallin (n ϭ 2). Purified bovine ␣A-crystallin protein was added as indicated. MDCK cell lysates were immunoprecipitated and immunoblotted with anti-␣A-crystallin (n ϭ 3). ␣A-crystallin is indicated with an arrow. D, ENaC ␣ subunit and ␣A-crystallin co-immunoprecipitate. MDCK cell extracts were transfected with vectors engineered for the expression of ␣A-crystallin, ENaC ␣ HA or ␣ V5 subunit, and ENaC ␤ and ␥ subunits as indicated. Extracts were immunoprecipitated with the indicated amounts of either anti-HA antibody or anti-␣Acrystallin antibody. IB: ␣A-crystallin, n ϭ 4; IB: HA, n ϭ 2. ␣A-crystallin is indicated with an arrow. The furin-cleaved (65 kDa, ‹) and uncleaved (95 kDa, Ͼ) ENaC ␣ subunit products are also indicated. ␣A-crystallin Regulates ENaC ENaC ␣ subunit was observed in the precipitate. Both the furinmediated cleaved and uncleaved bands of the ␣ HA subunit were co-immunoprecipitated with ␣A-crystallin. We have recently shown that proteolytic processing of ENaC by furin in the trans-Golgi network is required for channel activation (12,33). This result suggests that ␣A-crystallin could associate with the ␣ subunit of ENaC before and after trans-Golgi network-dependent processing events. ␣A-crystallin Reduces ENaC Expression in Oocytes-To determine whether ␣A-crystallin affects the functional expression of ENaC, we examined the effect of co-expressing human ␣A-crystallin with mouse ENaC on channel activity in Xenopus oocytes. Like other sHsps, ␣A-crystallin associates with unfolded proteins to prevent their aggregation and maintain solubility, but continued association may target the bound protein for ERAD (11). We injected cRNAs encoding the ␣, ␤, and ␥ mouse ENaC subunits into Xenopus oocytes along with water or various amounts of cRNA encoding ␣A-crystallin or with cRNA encoding ␥-glutamyl transpeptidase as a control. 24 h after injection, we measured the amiloride-sensitive Na ϩ currents for each group by TEV (Fig. 3A). We found that co-expression of ␣A-crystallin with ENaC in oocytes reduced ENaCmediated Na ϩ currents in a dose-dependent manner, whereas ␥-glutamyl transpeptidase cRNA had no effect. We then tested our hypothesis that ␣A-crystallin inhibition of ENaC currents results from enhanced ERAD in Xenopus oocytes. Using the proteasomal inhibitor MG-132 (34), we determined whether proteasomal activity was required for the inhibitory effect of ␣A-crystallin on ENaC currents (Fig. 3B). We found that currents from oocytes expressing ENaC alone were not affected by MG-132. However, in oocytes expressing ENaC and ␣A-crystallin, MG-132 abolished the inhibitory effect of ␣A-crystallin on ENaC current. This result demonstrates that the inhibitory effect of ␣A-crystallin on ENaC current depends on the activity of the proteasome, an essential ERAD component. 
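The dose–response comparison just described rests on two routine processing steps before group comparison: extraction of the amiloride-sensitive current for each oocyte and normalization within each oocyte batch, followed by ANOVA (see "Statistical Analysis" above). The sketch below illustrates the idea with hypothetical numbers; the group labels and currents are invented, and SciPy's one-way ANOVA stands in for the Newman–Keuls post hoc testing used in the paper.

```python
# Minimal sketch, assuming hypothetical amiloride-sensitive currents (current before amiloride
# minus current after amiloride, in microamps) for oocytes injected with 0, 2, or 6 ng of
# alphaA-crystallin cRNA, from two separate oocyte batches.
import numpy as np
from scipy import stats

batches = {
    "batch_1": {"0 ng": [-7.1, -6.5, -8.0], "2 ng": [-4.9, -5.2, -4.4], "6 ng": [-2.8, -3.1, -2.5]},
    "batch_2": {"0 ng": [-3.9, -4.4, -3.6], "2 ng": [-2.9, -2.6, -3.0], "6 ng": [-1.6, -1.9, -1.4]},
}

normalized = {"0 ng": [], "2 ng": [], "6 ng": []}
for groups in batches.values():
    control_mean = np.mean(groups["0 ng"])          # normalize within each oocyte batch
    for label, currents in groups.items():
        normalized[label] += list(np.asarray(currents) / control_mean)

f_stat, p_value = stats.f_oneway(*normalized.values())
print({k: round(float(np.mean(v)), 2) for k, v in normalized.items()}, "one-way ANOVA p =", p_value)
```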
Because of the role of ␣A-crystallin in facilitating the ERAD of CFTR, we hypothesized that ␣A-crystallin reduced ENaC current in oocytes by decreasing the number of channels at the cell surface. We therefore measured ENaC surface expression with ␤ subunit having a FLAG epitope tag in the extracellular loop (␤ F ) co-expressed with wild type ␣ and ␥ subunits using a chemiluminescence-based assay (24,25). We found that coexpression of ␣A-crystallin with ENaC reduced channel surface expression in a dose-dependent manner. Moreover, the extent of ENaC surface expression reduction when each amount of the injected ␣A-crystallin was used paralleled the extent of the reduction in ENaC-mediated Na ϩ currents (Fig. 3C). These results are consistent with ␣A-crystallin inhibiting ENaC function by reducing the number of channels at the cell surface rather than affecting single channel properties. ␣A-crystallin Retards the Rate of ENaC Delivery to the Cell Surface-A reduction in the number of channels at the cell surface can be achieved either by a decrease in the rate of chan- (n ϭ 15). *, p Ͻ 0.05 versus all other groups, determined by ANOVA. C, surface expression of oocytes co-expressing various amounts of ␣A-crystallin along with wild type ␣, ␥, and ␤ F subunits was determined using anti-FLAG antibodies and a chemiluminescence assay. The oocytes expressing wild type ␣, ␤, and ␥ subunits (no tag) were used to measure background (n Ն 20 for all groups). *, p Ͻ 0.001 versus 0 ␣A-crystallin, determined by ANOVA. SEPTEMBER 21, 2007 • VOLUME 282 • NUMBER 38 JOURNAL OF BIOLOGICAL CHEMISTRY 28153 nel delivery, by an increase in the rate of channel retrieval, or by a combination of both mechanisms. We hypothesized that ␣A-crystallin reduces ENaC surface expression by reducing the number of channels available for delivery to the membrane, which would be consistent with enhanced ERAD (Fig. 1). We tested this hypothesis by measuring both the rate of channel delivery and the rate of channel retrieval. ␣A-crystallin Regulates ENaC To measure the rate of channel delivery to the cell surface, we took advantage of the ␣S583C mutant. This mutation is located at the outer "mouth" of the channel pore, where covalent modification by the water-soluble sulfhydryl reactive reagent MTSET blocks ϳ80% of the current originating from ENaC at the cell surface (35). Recovery of amiloride-sensitive current subsequent to MTSET washout can be attributed to the delivery of new, unblocked channels to the cell surface. 24 h after injecting oocytes with cRNAs encoding ␣S583C, wild type ␤ and ␥ subunits and either water or cRNA for ␣A-crystallin, we measured ENaC current recovery by TEV after applying 1 mM MTSET for 4 min (Fig. 4A). We observed that current recovery was significantly faster in oocytes injected with water than those injected with ␣A-crystallin. To measure the rate of cell surface channel retrieval, we halted the delivery of newly synthesized ENaC to the cell surface using brefeldin A. This antibiotic inhibits the formation of transport vesicles that mediate protein transport from the ER to the Golgi apparatus, thus inhibiting integral membrane protein transport to the plasma membrane (36). Having prevented new channel delivery, it becomes possible to measure the half-lives of channels at the cell surface. We therefore injected oocytes with cRNAs encoding wild type ENaC and either water or ␣A-crystallin. 
Two days after injection, each group was incubated in a buffer supplemented with 5 M brefeldin A, and the current was measured every 2 h (Fig. 4B). After the 6-h time point, amiloride was added to determine whether a leak had developed over the course of the experiment. In both the presence or absence of ␣A-crystallin, the current decreased with a half-life of ϳ3 h. Control experiments performed in parallel without the addition of brefeldin A showed that the ENaC current remained stable over 4 h, with a ϳ20% decrease at 6 h (data not shown). Together, these results indicate that ␣A-crystallin decreased ENaC functional expression by reducing the rate of channel delivery to the cell surface but did not affect the rate of channel retrieval from the surface. Overexpression of Hsc70 Potentiates the Effect of ␣A-crystallin in Reducing ENaC-mediated Current-We recently reported that overexpression of Hsc70 reduces ENaC expression and current in oocytes (26). Because eukaryotic sHsps have been suggested to release their substrates to ATP-dependent chaperones, such as Hsc70, and may function in the same pathway (37,38), we hypothesized that ␣A-crystallin overexpression would exacerbate the Hsc70-mediated reduction in ENaC expression. We tested this hypothesis by measuring the amiloride-sensitive currents of oocytes expressing ENaC in the presence of messages encoding either one or both of these chaperones (Fig. 5). We found that Hsc70 and ␣A-crystallin reduced ENaC currents by 44 and 38%, respectively. When expressed together, these chaperones reduced ENaC current by 64%, a greater reduction in current than observed with either chaper-one alone. Because the combined effect of these two chaperones was not significantly different from the additive effect calculated from these chaperones acting independently, we cannot differentiate between these chaperones acting in independent pathways or acting cooperatively within the same pathway. Nevertheless, these data verify that the ␣A-crystallin and Hsc70 chaperones play an important role during ENaC biogenesis. DISCUSSION Many of the events during ENaC biogenesis have been described, including Asn-linked glycosylation in the ER, proteolytic cleavage in the trans-Golgi network, and Nedd4-2 FIGURE 4. Effect of ␣A-crystallin co-expression on the rates of ENaC cell surface delivery and retrieval in Xenopus oocytes. A, surface delivery rates were determined from oocytes injected with cRNA encoding ␣S583C, wild type ␤ and ␥ ENaC subunits, and either water (E) or 6 ng of ␣A-crystallin cRNA (F). 24 h after injection, the oocytes were treated with MTSET for 4 min, and the currents were measured by TEV at Ϫ100 mV every 30 s for 5 min (n ϭ 10). The initial rates were determined by linear regression from the first 2 min for each oocyte. The rates were Ϫ290 Ϯ 19 and Ϫ170 Ϯ 33 nA/min in the absence and presence of ␣A-crystallin, respectively (p Ͻ 0.05, determined by Student's t test). Wild type base-line current was Ϫ7.3 A. B, surface retrieval rates were determined from oocytes injected with cRNA encoding wild type ␣, ␤, and ␥ ENaC subunits and either water (E) or 6 ng of ␣A-crystallin cRNA (F). The oocytes were incubated in bath solution alone (data not shown) or bath solution supplemented with 5 M brefeldin A. The currents were measured every 2 h by TEV at Ϫ100 mV (n ϭ 8). The wild type base-line current was Ϫ3.3 A. ␣A-crystallin Regulates ENaC mediated ubiquitination at or near the cell surface leading to channel retrieval and degradation (12,33,39,40). 
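The trafficking rates reported in the Figure 4 legend above come from two simple fits, which the sketch below reproduces with invented traces: a linear fit to the first 2 min of current recovery after MTSET washout (delivery rate) and an exponential fit to the brefeldin A time course (surface half-life). The numbers are hypothetical and only meant to show the arithmetic.

```python
# Minimal sketch with hypothetical TEV currents (nA; inward currents are negative).
import numpy as np

rng = np.random.default_rng(0)

# Delivery: recovery of amiloride-sensitive current after MTSET washout, sampled every 30 s.
t_min = np.arange(0, 10.5, 0.5)
i_recovery = -1000.0 - 250.0 * t_min + rng.normal(0, 15, t_min.size)   # fake recovery trace
early = t_min <= 2.0                                                    # linear portion used in the paper
delivery_rate = np.polyfit(t_min[early], i_recovery[early], 1)[0]
print(f"initial delivery rate ~ {delivery_rate:.0f} nA/min")

# Retrieval: amiloride-sensitive current during the brefeldin A block, relative to t = 0.
t_h = np.array([0.0, 2.0, 4.0, 6.0])
i_amil = np.array([-3300.0, -2100.0, -1350.0, -850.0])                  # fake currents
decay_const = -np.polyfit(t_h, np.log(i_amil / i_amil[0]), 1)[0]
print(f"surface half-life ~ {np.log(2) / decay_const:.1f} h")           # ~3 h for these numbers
```

Separately, the "additive effect" comparison in the Hsc70 experiment amounts to noting that, if the two chaperones acted independently, roughly (1 − 0.44) × (1 − 0.38) ≈ 35% of the current would remain, i.e. a ~65% reduction, close to the observed 64%; framing independence multiplicatively is our reading, not necessarily the authors' exact calculation.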
In this work, we have begun to characterize the quality control mechanisms to which ENaC subunits are subjected at early steps during their biosynthesis. We have shown that mutation of genes required for ERAD in yeast decreased ENaC ␣ subunit degradation and that the deletion of genes encoding two functionally redundant sHsps, Hsp26 and Hsp42, attenuated ENaC ␣ subunit degradation. These ␣-crystallin domain-containing chaperones help maintain the solubility of aggregation-prone proteins in yeast (28). We have also shown that the sHsp ␣A-crystallin is expressed in mouse kidney tissue and in cortical collecting ducts cells derived from mouse kidney that express ENaC. Others have found that ␣A-crystallin is expressed in the liver and lung where ENaC subunits also reside (41,42). Further, we have demonstrated that ␣A-crystallin and the ENaC ␣ subunit coimmunoprecipitate, indicating direct or indirect physical interaction between the two proteins. A functional interaction can be inferred by reduced channel activity when human ␣A-crystallin and wild type mouse ENaC are co-expressed in oocytes. This effect depends on an active proteasome and was due to a decrease in the number of channels at the cell surface caused by a reduction in the rate of channel insertion. Based on our data, we propose that ENaC subunits are subject to ER quality control via ERAD and that this process is facilitated by ␣-crystallin domain-containing sHsps. It was previously observed that sHsps selectively accelerate the degradation of ⌬F508-CFTR and that these sHsps may distinguish terminally misfolded forms of ⌬F508-CFTR from the wild type protein (11). In this study, we have utilized wild type ENaC ␣, ␤, and ␥ subunits. The question then arises: why are sHsps targeting wild type ENaC for ERAD? In yeast expressing endogenous sHsps, we observed that deletion of these sHsps resulted in a modest but significant reduction of ENaC ␣ subunit degradation. In Xenopus oocytes where human ␣A-crystallin and mouse ␣, ␤, and ␥ ENaC subunits were overexpressed, we observed a dramatic and dose-dependent decrease in ENaC expression. The simplest explanation for these data is an increased degradation rate. Here we note that unlike CFTR, which is comprised of a single polypeptide chain, ENaC is comprised of three different subunits. We also note that ENaC likely exits the ER as an assembled channel (43). Thus, prior to and during ENaC subunit assembly, there may be solvent-exposed hydrophobic subunit interfaces with the potential to nonspecifically aggregate. This may account for the co-immunoprecipitation of uncleaved ENaC ␣ subunit with ␣A-crystallin. We therefore propose that sHsps maintain ENaC subunit solubility during intermediate stages of folding or during subunit assembly. We also observed that the cleaved form of the ENaC ␣ subunit co-immunoprecipitated with ␣A-crystallin. Because the ␣ subunit is cleaved only after the channel has been assembled and transits through the trans-Golgi network, this result suggests that ␣A-crystallin also interacts with the ␣ subunit in other compartments. Although this phenomenon may be related to ENaC overexpression in these experiments, it is evidence that ␣A-crystallin may play additional roles in ENaC ␣ subunit biogenesis or trafficking. Small heat shock proteins interact with a wide spectrum of cellular proteins (28), and thus the question arises as to the specificity of action of distinct members in this family of molecular chaperones. 
In other words, it is reasonable to ask whether the overexpression of ␣A-crystallin will affect the biogenesis of every polytopic membrane protein that ultimately resides at the cell surface. We recently showed that ␣A-crystallin overexpression in HEK293 cells only decreases the stability of the ⌬F508 mutant form of CFTR but had no effect on the maturation of wild type CFTR (11). Although the nature of this specificity remains unclear, the data indicate that this sHsp exhibits some specificity of action; thus, not every membrane protein that transits through the secretory pathway is altered by ␣A-crystallin overexpression. Given our recent finding that Hsc70 reduced ENaC functional expression (26), we tested the hypothesis that Hsc70 augments the ␣A-crystallin effect on ERAD. We found that coexpressed Hsc70 and ␣A-crystallin attenuated ENaC-mediated current to a larger extent than did ␣A-crystallin alone. These data support our model that sHsps function at an ER quality control branch point, where sHsp binding and stabilization leads either to native channel folding and assembly or to ERAD. We propose further that ENaC subunits are stabilized in the pre-native state by sHsps and that Hsp70 catalyzes conversion to the native state, consistent with our recent observation that moderate overexpression of Hsp70 increases ENaC currents (26). These processes depend on the ability of sHsps to efficiently disassemble and release the target protein, which has been shown to be required for effective chaperone activity (44 -47). Although sHsps must dissociate from their targets at a finite rate, it remains unclear whether dissociation is spontaneous or is catalyzed by the binding of other chaperones. In either case, sHsp overexpression would drive sHsp⅐target complex formation, which we suggest accounts for the dose-dependent decrease in ENaC functional expression we observed in oocytes. It has been shown that 80 -99% of synthesized ENaC subunits do not reach the cell surface where they can participate in transepithelial Na ϩ transport (43,48). Our results suggest that the ERAD pathway, as one component of ER quality control, probably accounts for a significant fraction of the degraded ENaC subunits. Although ENaC biosynthesis seems inherently wasteful, the "waste" attributed to ERAD results in a higher quality product and blocks the formation of potentially toxic species. In the future, it will be critical to identify other modulators of ENaC ERAD, a pursuit that might have therapeutic consequences.
7,336
2007-09-21T00:00:00.000
[ "Biology", "Medicine" ]
Resonant Stratification in Titan’s Global Ocean Titan’s ice shell floats on top of a global ocean, as revealed by the large tidal Love number k2 = 0.616 ± 0.067 registered by Cassini. The Cassini observation exceeds the predicted k2 by one order of magnitude in the absence of an ocean, and is 3σ away from the predicted k2 if the ocean is pure water resting on top of a rigid ocean floor. Previous studies demonstrate that an ocean heavily enriched in salts (salinity S ≳ 200 g kg−1) can explain the 3σ signal in k2. Here we revisit previous interpretations of Titan’s large k2 using simple physical arguments and propose a new interpretation based on the dynamic tidal response of a stably stratified ocean in resonance with eccentricity tides raised by Saturn. Our models include inertial effects from a full consideration of the Coriolis force and the radial stratification of the ocean, typically neglected or approximated elsewhere. The stratification of the ocean emerges from a salinity profile where the salt concentration linearly increases with depth. We find multiple salinity profiles that lead to the k2 required by Cassini. In contrast with previous interpretations that neglect stratification, resonant stratification reduces the bulk salinity required by observations by an order of magnitude, reaching a salinity for Titan’s ocean that is compatible with that of Earth’s oceans and close to Enceladus’ plumes. Consequently, no special process is required to enrich Titan’s ocean to a high salinity as previously suggested. INTRODUCTION Recent decades of space exploration have revealed a solar system populated with internally heated icy worlds where large reservoirs of liquid water accumulate in subsurface global oceans (Nimmo & Pappalardo 2016). These worlds signal a possibility for life beyond Earth in a location that is accessible to future in-situ space exploration. A global ocean plays a fundamental role in determining the potential habitability of these icy worlds because water is required for life as we know it. Beyond detection, however, these global oceans remain poorly understood. Ocean thickness is typically known within broad limits ranging in the tens of percent of the icy world radius (Sohl et al. 2003; Grindrod et al. 2008), preventing an accurate assessment of the satellite’s liquid water inventory and thermal history. On Earth, ocean dynamics modulate the distribution of nutrients and energy sources required by life (e.g., Uchida et al. 2020), but on icy worlds the type of convection or lack thereof remains poorly known (Jansen et al. 2023). Here we argue in favor of the stratification of Titan’s ocean based on a new interpretation of Cassini gravity measurements where internal gravity waves in Titan’s ocean become resonantly excited by tides raised by Saturn (see also Luan 2019). We hereafter refer to this proposed scenario as resonant stratification. Titan is the second largest solar system icy satellite and the best characterized from the perspective of gravity measurements (i.e., moment of inertia, J2, C22, and k2; Durante et al. 2019), offering us a unique opportunity to reveal the hidden interior structure and dynamics of the global ocean within. The Cassini spacecraft unambiguously signaled the existence of a global ocean from the observed tidal response registered in the Love number k2 = 0.616 ± 0.067 (Durante et al. 2019). The observation is an order of magnitude larger than the predicted k2 ≈ 0.03 when the ocean is absent (Rappaport et al.
2008).Ignoring the ice shell, a global ocean of pure liquid water produces a k 2 ≈ 0.468 independent of ocean thickness if the high-pressure ice and silicates beneath the ocean behave approximately rigidly and the total mass of the satellite is conserved (Section 2.1.2).The presence of an overlying elastic ice shell further restricts the motion of the ocean beneath.Estimates of Titan's ice shell thickness yield d ∼ 100 km (Sohl et al. 2003;Nimmo & Bills 2010;Luan 2019); an ice shell this thick reduces tides down to k 2 ≈ 0.42 (Section 2.1.3),which is roughly 3σ away from the Cassini observation.A thinner ice shell provides a k 2 closer to the observation, but then the heat conducted across the ice shell exceeds the interior heat production expected from radiogenic and tidal heating (Sohl et al. 2003;Luan 2019), leading to thickening of the ice shell by freezing over time.A thinner shell is also more difficult to reconcile with the observed topography (Nimmo & Bills 2010).The resonant stratification presented here can self-consistently explain the large k 2 observed by Cassini by introducing a positive dynamical fractional correction to the non-resonant hydrostatic k 2 ≈ 0.42.Resonant stratification enhances k 2 by dynamic amplification of the vertical displacement of Titan's surface (Fig. 1).Ocean waves can produce significant dynamic surface displacement when resonantly excited by Saturn's eccentricity tides, namely when Titan's orbital frequency (ω s = 4.561 × 10 −6 s −1 ) is close to a match with a normal mode emerging from an aggregation of Titan's ocean waves.Amplification results from the constructive addition of ocean waves over many cycles of resonant excitation, until balanced by dissipation.The effect produces dynamic gravity that can be registered by the tracking system of a nearby passing spacecraft (i.e., Cassini).A vertical gradient in ocean salinity promotes ocean stratification and the emergence of internal gravity waves.These waves are restored by buoyancy forces and organize in a spectrum of low-frequency normal modes that asymptotically approach zero frequency from an upper bound frequency ω 2 ≲ N 2 , where N 2 is the Brunt-Vaisala frequency that typically describes the strength of ocean stratification (Section 2.2 and Appendix A).A typical value N 2 ≳ 10 −8 s −2 ≳ ω 2 s suggests a spectrum of internal gravity waves capable of resonance with eccentricity tides, and roughly corresponds to a modest vertical salinity gradient where salt concentration increases by ≳ + 1 g/kg every ∼100 km of ocean thickness (the salinity of Earth's oceans is roughly 35 g/kg).Crucially, our mechanism works for a range of mean ocean salinity values, including Earth's and that inferred for Enceladus (Postberg et al. 2009). When the ocean is fully convective (i.e., unstratified) and homogeneously mixed, ocean waves require an unphysically thin ocean (H < 1 km) to resonate with eccentricity tides (Matsuyama et al. 2018).An ocean this thin is not compatible with the thermal history of Titan as expected from radiogenic and tidal heating (Tobie et al. 2006;Grindrod et al. 2008).Only the high-frequency tides from moon-to-moon interactions can resonantly excite a fully mixed ocean of realistic thickness (H ∼ 100 km) (Hay et al. 2020), but those signals are relatively small and thus beyond Cassini's detection threshold. Alternative scenarios can enhance k 2 to the value required by Cassini (Fig. 
1), but require either high salinity of the ocean or low rigidity of the solid Titan.Previous calculations (Rappaport et al. 2008;Waite et al. 2017) show that a heavy ocean produces the required extra gravity when its density is increased by a high concentration of dissolved salts (salinity S ∼ 200 g/kg).The required concentration of salts is an order of magnitude higher than that in Earth's oceans, Enceladus' plumes, and that predicted from water-rock interactions on Titan's ocean floor (Postberg et al. 2009;Leitner & Lunine 2019).On the other hand, a low rigidity or low viscosity ocean floor can allow large vertical displacements of the ocean floor that produce an extra gravity signal that constructively adds to the gravity from tides at the surface (Durante et al. 2019).In practice, reasonable estimates of the rigidity and viscosity for silicates and high-pressure ice at tidal timescales produce a negligible ocean floor displacement.For example, the contribution to k 2 from the elastic icy ocean floor displacement can be estimated to be negligible following k 2,ice /k 2 × (ρ hp-ice − ρ)/ρ ∼ 2%, where k 2,ice /k 2 is the ratio between the Love number when the ocean is excluded/included, respectively, ρ hp-ice is the density of high-pressure ice, and ρ the ocean density.The previous estimate assumes that the tidal response of the ocean floor can be crudely approximated by the tidal response of an oceanless Titan model after introducing a correction for the relatively higher surface density of high-pressure ice.A viscous icy ocean floor can introduce a non-negligible component to k 2 if the viscosity of high-pressure ice is lower than ∼10 15 Pa s (Rappaport et al. 2008).The viscosity of high-pressure ice is typically comparable to or greater than this value when the temperature of the ice is ∼10% lower than the melting temperature (Durham et al. 1998), a reasonable assumption at the ocean floor. Since resonantly excited internal gravity waves can produce the required dynamic gravity, the main challenge for resonant stratification is how to establish a long-lived tidal resonance in an icy satellite.The possibility of resonant tidal excitation is suggested by the tendency of Titan's ocean to freeze due to secular cooling, which imposes a continuous evolution on the frequency of ocean waves by a change in the stratification profile of the ocean.At some point in Titan's freezing history, the normal frequency of ocean waves crosses the orbital frequency.After first being encountered, a tidal resonance can be sustained over long geological timescales provided that a stable fixed point is reached between orbital and interior evolution, analogously to ideas previously applied to tidal resonant locking in Jupiter and Saturn (Fuller et al. 2016;Luan et al. 2018;Lainey et al. 2020;Idini & Stevenson 2022).The onset of resonant ocean waves enhances Titan's tidal heating in the ocean (Tyler 2011(Tyler , 2014;;Rovira-Navarro et al. 2019), with the additional heating slowing or halting the freezing of the ocean. 
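As a quick numerical check of the comparisons made in this introduction, the snippet below measures how far each quoted model value of k2 falls from the Cassini measurement; all numbers are taken directly from the text and the arithmetic is ours.

```python
# Distance of quoted model predictions from the Cassini measurement of Titan's k2.
k2_obs, sigma = 0.616, 0.067

models = {
    "no ocean": 0.03,
    "pure-water ocean on a rigid floor (no shell)": 0.468,
    "pure-water ocean + ~100 km elastic shell": 0.42,
}

for label, k2_model in models.items():
    n_sigma = (k2_obs - k2_model) / sigma
    print(f"{label}: {n_sigma:.1f} sigma below the observation")
# -> about 8.7, 2.2, and 2.9 sigma; the no-ocean case is also ~20x smaller than k2_obs,
#    consistent with the "order of magnitude" and "roughly 3 sigma" statements above.
```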
BASIC EQUATIONS The tidal Love number k ℓm represents a normalization of the gravitational tidal response by the gravitational excitation, where the eccentric gravitational excitation ϕ e ℓm is derived in Appendix B and the tidal response ϕ ′ ℓm due to eccentricity tides is calculated numerically and analytically in this section in the context of various models.We concentrate in the ℓ = m = 2 Love number k 2 , which matches the spherical harmonic of the Cassini observation.We do not discuss obliquity tides in this manuscript because they excite m = 1 spherical harmonics that do not contribute to the Cassini k 2 observation. Equilibrium tide Here we reproduce previously known results of the tidal response of icy satellites with oceans using simple principles.Our objective is to illuminate the effects contributing to the amplitude of the equilibrium tide and to produce an estimate for Titan within a few percent of accuracy.We start by deriving the hydrostatic k 2 in a homogeneous fluid body.This derivation shows how all the tidal gravity in k 2 emerges from a thin layer of displaced fluid near the surface.However, Titan strongly departs from this estimate because its density profile is not homogeneous and its tidal response is not fully hydrostatic.Our second derivation introduces the effects of a nonhomogeneous density profile and an inner core that responds rigidly to diurnal tides (i.e., tidal frequency ω = ω s ) rather than hydrostatically.Finally, our last derivation shows how an elastic ice shell covering the ocean reduces the tidal response and provides a rough estimate that agrees with more sophisticated modeling.After considering all these effects acting jointly, the result is what we consider the diurnal equilibrium tide of Titan k 2 ≈ 0.42; a tide restored purely by static forces where inertial forces are neglected (i.e., no dynamics). Hydrostatic tides in a homogeneous fluid body We first revisit the classical problem of tides in a homogeneous incompressible body that satisfies hydrostatic equilibrium using basic principles.The linearized equation for the conservation of momentum is The tidal response of the icy satellite produces adiabatic perturbations represented in the gravitational potential ϕ ′ and pressure p ′ .In this expression, the potential of the gravitational pull ϕ e and the tidal gravitational potential ϕ ′ are combined into φ′ = ϕ e + ϕ ′ for analytical simplicity. In an incompressible body, the density ρ remains constant regardless of forcing and the only change in the gravity field comes about from the radial displacement ξ r of the surface.This diplacement is typically small compared to the radius R. Thus, we can calculate the gravity field from the integration over the volume of a thin shell of fluid with thickness ξ r , where g is the surface gravity acceleration calculated in hydrostatic equilibrium g = 4πGρR/3, and G the gravitational constant. The boundary condition on the surface indicates that the fluid is free from external pressure (i.e., δp = 0).This statement translates into turning the tidal displacement into the sole source for the pressure perturbation, formally expressed as Next, we can go back to equation ( 2) and obtain an expression for the tidal forcing, (5) We evaluate the tidal response and forcing at the surface (r = R) to obtain the tidal Love number from the ratio between them, which agrees with the classical result k 2 = 1.5 of the fluid Love number of a uniform density body in hydrostatic equilibrium (Munk et al. 1977, e.g.). 
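The steps above can be compressed into a short closed form. The LaTeX fragment below is our compact rendering: the degree-2 potential of the displaced surface layer plus the fluid (equipotential) surface condition give a Love number that reduces to k2 = 3/2 for a homogeneous body and, with the densities quoted in the next subsection, to k2 ≈ 0.468 for Titan and k2 ≈ 0.249 for Europa.

```latex
% Compact sketch (our notation). Thin-shell potential of the displaced surface layer of
% density \rho on a body of mean density \bar\rho, plus the equipotential surface condition:
\[
  \phi'_{2} \;=\; \frac{3}{5}\,\frac{\rho}{\bar\rho}\, g\,\xi_{r},
  \qquad
  g\,\xi_{r} \;=\; \phi^{e}_{2} + \phi'_{2}
  \;\;\Longrightarrow\;\;
  k_{2} \;\equiv\; \frac{\phi'_{2}}{\phi^{e}_{2}}
        \;=\; \frac{3\rho/\bar\rho}{\,5 - 3\rho/\bar\rho\,}.
\]
% Limits: \rho = \bar\rho gives k_2 = 3/2; \rho = 1,\ \bar\rho = 1.88\ \mathrm{g\,cm^{-3}}
% gives k_2 \approx 0.468; \bar\rho = 3.01\ \mathrm{g\,cm^{-3}} gives k_2 \approx 0.249.
```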
A global ocean with a rigid ocean floor We now consider a body with a rigid core of density ρ c and radius R c overlaid by an ocean of density ρ by introducing vertical differentiation in the density profile.The main change compared to the homogeneous case is a change in the expression for the surface gravity acceleration g = 4πG ρR/3, where ρ is the body's mean density.The tidal gravity field still emerges solely from the radial displacement ξ rℓm of a thin region near the surface, Following the same steps as before, the resulting tidal Love number is independent of core properties, This equation yields k 2 = 0.468 when using Titan's mean density ρ ≈ 1.88 g cm −3 and ρ = 1 g cm −3 .Equation (8) reproduces numerical results (within 5%) obtained previously for Europa when the icy shell is ignored (Moore & Schubert 2000), namely k 2 ≈ 0.249 when using ρ ≈ 3.01 g cm −3 .Analogous versions of Equation ( 8) can be found in the literature when following an alternative derivation (Dermott 1979;Murray & Dermott 1999;Beuthe 2015, e.g.). Ice shell thickness An elastic ice shell constrains the radial displacement of the ocean that produces the tidal response registered in k 2 .The weight of the ocean trying to reach an equipotential surface is balanced by the resistance of the ice shell to deformation (Kamata et al. 2015, e.g.).Here we provide a simple argument to evaluate the role that the ice shell thickness plays in reducing the amplitude of the tidal response. Consider a global ocean of density ρ that is trying to reach the equipotential surface defined by the radial displacement ξ req .The pressure that the weight of the ocean exerts on the base of the ice shell can be expressed by where g is the gravity acceleration and ξ r is the radial tidal displacement.The shear stress τ on the ice shell can be expressed as where µ is the shear modulus of the ice shell. Next, we consider the equilibrium of forces over a meridional section dividing the satellite into two hemispheres.The ocean pressure integrated over the projected area of the ice shell base must be balanced by the shear stress integrated over the section of the ice shell, namely where d is the ice shell thickness and R is the icy satellite radius.We can rearrange the previous expression to discover the fractional change in k 2 due to a rigid ice shell, which represents a competition between elastic and gravitational energy (see also equation ( 11) in Goldreich & Mitchell (2010)).An alternative derivation of equation ( 12), including all the correct numerical factors, can be found in Beuthe (2015).We may use Titan's radius R ≈ 2575 km, surface gravity acceleration g ≈ 135 cm s −2 , ocean density ρ ≈ 1 g cm −3 , and shear modulus of ice I and methane clathrates µ ∼ 4 GPa, and obtain An ice shell thickness d ∼ 100 km balances radiogenic heating with heat conduction (Appendix D) and produces k 2 ≈ 0.42 (the hydrostatic response without an ice shell is k 2 = 0.468).We use this k 2 value as the reference hydrostatic tidal response when computing the fractional change ∆k 2 due to various effects, which is compatible with previous estimates for Titan models without salinity (Rappaport et al. 
2008).The result is valid for a shell made of ice or methane clathrates given that the elastic modulus is similar in both cases.Instead of being purely elastic, the ice shell covering the ocean can be viscous near the melting temperature.Viscous ice can in principle flow at certain timescale and reduce the resistance that the ice shell imposes on the tidally excited ocean motion.In practice, this effect is negligible at the tidal timescale ∼16 days unless large portions of the d ∼ 100 km ice shell thickness is in convection and thus have relatively low viscosity (∼10 14 Pa s).The compensated long-wavelength topography of Titan suggests that the ice shell is unlikely to be convecting (Nimmo & Bills 2010). Dynamical tides We now solve the problem of a rotating ocean world with a stratified ocean subjected to the full action of the Coriolis effect.The equation of conservation of momentum and continuity are where v is the 3D incompressible tidal flow, Ω is the icy satellite spin vector, ρ is the ocean density, p is pressure, and γ is the linear damping coefficient that represents dissipation in the ocean.In this paper, we consider at all times that Titan is in synchronous rotation (i.e.Ω = ω s ). We derive the linearized form of the conservation of momentum equation by introducing linear perturbations of the form φ ≈ ϕ 0 + φ′ , where quantities with the zero subindex indicate the background state and primed quantities indicate perturbations due to tides.In the following, we drop the zero subindex for convenience, thus all nonprimed quantities indicate the background state.The resulting linearized equation of conservation of momentum is The timescale of tidal motion of ∼15 days is orders of magnitude shorter than the timescale required to transport heat by diffusion or convection.Tides are consequently adiabatic and the associated Lagrangian tidal perturbations in density and pressure satisfy where Γ is the adiabatic index.We can relate the Lagrangian perturbations to the Eulerian perturbations in equation ( 17) using the definitions where ξ is the tidal displacement.Moving forward, we put together the latter equations to arrive to where N 2 is the Brunt-Vaisala (BV) frequency defined in general by The BV frequency represents the salinity stratification of the ocean.Here the only relevant component in N 2 is the radial direction, given that we consider the background state to be spherically symmetric. 
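Before turning to the stratified response, it is useful to verify numerically the equilibrium-tide numbers quoted in Section 2.1 and the adiabaticity argument above. The sketch below assumes equation (8) takes the form k2 = (3ρ/5ρ̄)/(1 − 3ρ/5ρ̄) for an ocean of density ρ over a rigid interior of mean density ρ̄ (our reconstruction, chosen because it reproduces the quoted values), applies the quoted ~10% ice-shell reduction rather than the full expression of equation (12), and uses an assumed molecular thermal diffusivity κ ~ 10⁻⁷ m² s⁻¹ for liquid water.

    # (1) Two-layer hydrostatic k2: ocean of density rho over a rigid interior of
    #     mean density rho_bar, with tidal gravity only from the displaced surface.
    #     Our reading of equation (8): k2 = x / (1 - x), with x = (3/5)*rho/rho_bar.
    def k2_rigid_core(rho_ocean, rho_mean):
        x = 0.6 * rho_ocean / rho_mean
        return x / (1.0 - x)

    k2_titan = k2_rigid_core(1.0, 1.88)    # ~0.47, cf. the quoted 0.468 for Titan
    k2_europa = k2_rigid_core(1.0, 3.01)   # ~0.249, Europa with the ice shell ignored
    k2_shell = 0.90 * k2_titan             # quoted ~10% reduction from a d ~ 100 km shell -> ~0.42
    print(round(k2_titan, 3), round(k2_europa, 3), round(k2_shell, 2))

    # (2) Adiabaticity: heat diffusion across the ocean is vastly slower than the tide.
    kappa = 1e-7                 # m^2/s, assumed molecular diffusivity of water
    H = 300e3                    # m, ocean thickness used later in the text
    t_diffusion_yr = H**2 / kappa / 3.15e7
    print(f"{t_diffusion_yr:.1e} yr of diffusion vs {15.945} days of tide")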
When rN 2 > 0, the ocean is vertically stratified.A stratified ocean parcel will develop static stability once displaced out of its equilibrium position.According to equation ( 21), an Eulerian change in ocean density may emerge from either the adiabatic response of the ocean fluid (first term in the right-hand side) or the buoyancy of the stratified ocean parcel (second term in the right-hand side).In the incompressible approximation used here, Eulerian perturbations to density uniquely emerge from the buoyancy of the stratified ocean parcel.This results from the adiabatic index Γ ≡ (∂ log p/∂ log ρ) S tending to infinity: the expected changes in pressure keep density unperturbed.As a consequence, the BV frequency reduces to where the radial variation in density comes from the addition of a small amount of salts organized in a vertical salinity gradient.As we can see from equation ( 23), stratification permits local density perturbations in the ocean regardless of its incompressibility.Equation ( 23) shows a direct relationship between stratification in N 2 and the salinity gradient in ∂ r ρ.When we consider constant stratification throughout the ocean, we can write ∂ r ρ = ∆ρ/H, where ∆ρ is the change in density between top and bottom of the ocean.Our models have constant density, thus the ∆ρ represents a virtual change in density due to the addition of salts.The concentration of added salts S is described in g of salts per kg of water (g/kg). Next, we rewrite the linearized momentum and continuity equations as where ω = ω + iγ is a complex frequency that accounts for tidal dissipation, ω = ω s is the tidal frequency of eccentricity tides, and the tidal displacement ξ is periodic with a time dependency ∝ e −iωt .We have used v = ∂ t ξ = −iωξ.This set of equations is traditionally known as the Boussinesq approximation. A typical method of solution of equation ( 24) involves applying the curl to reach the vorticity equation (Rieutord 1987) A rigid core implies that waves produce no radial displacement near the bottom of the ocean.We set the rigid boundary condition at the ocean bottom to The free boundary at the ocean top prescribes a zero Lagrangian perturbation of pressure (e.g., Goodman & Lackner (2009)), which leads to The tidal displacement field ξ is the only quantity to be determined, which is forced by the ocean top boundary condition on the radial component of the displacement.Previous work (Rovira-Navarro et al. 2019, e.g.) typically sets ξ•r at the surface to be equal to the equilibrium tide radial displacement (i.e., no-slip boundary conditions) and/or neglects the self gravitation of the tide (e.g., Cowling approximation).Given that we are interested in precise tidal gravity estimates, we have retained the self gravitation of the equilibrium tide and allowed the ocean top to displace dynamically beyond the equilibrium tide, relaxing the no-slip assumption.No-slip boundary conditions explicitly set the dynamical part of ξ • r to zero at the surface, removing by definition the dynamical enhancement of k 2 that we calculate here. The resulting set of equations is an infinite system of equations coupled in degree by the Coriolis effect (Rieutord 1987;Rieutord & Valdettaro 1997;Lockitch & Friedman 1999;Rieutord et al. 2001;Ogilvie & Lin 2004;Ogilvie 2009;Rovira-Navarro et al. 
2019;Idini & Stevenson 2021, e.g.).We solve this system by the traditional method of projection onto vectorial spherical harmonics (Appendix C), followed by a pseudo-spectral discretization on the radial functions based on the analytically-tractable Chebyshev polynomials and Gauss-Lobatto collocation points (Boyd 2001, e.g.).We truncate the infinite set of equations at degree ℓ max = 100 and use N max = 100 Chebyshev polynomials to represent each radial function at each degree ℓ ≤ ℓ max .The Love number can then be obtained from the tidal displacement evaluated at the surface (equation ( 3)), which is the only source of tidal gravity in a homogeneous ocean. Tidal heating rate in the ocean The heating rate in the ocean can be fully determined by volume integration of the work done by the dissipative force in the equation of motion (equation ( 14)).This work per unit volume follows where dl is an infinitesimal line element along a fluid parcel motion.We can use dl = vdt and integrate ẇ across the ocean volume to obtain the time averaged ocean heating rate (Chen et al. 2014;Rovira-Navarro et al. 2023, e.g.) The linear damping coefficient γ is unconstrained in icy worlds with global oceans.The range of possible estimates spans from γ ∼ 10 −11 s −1 on Enceladus to the better constrained γ ∼ 10 −5 s −1 on Earth (Matsuyama et al. 2018).The projection of equation ( 32) onto vectorial spherical harmonics is shown in Appendix C. NUMERICAL RESULTS 3.1.The predicted tidal response k 2 as a function of ocean structure We use perturbation theory and numerical methods to calculate the dynamic gravity produced by resonant stratification in a stratified and rotating ocean (Section 2).We concentrate on fractional changes ∆k 2 to the non-resonant hydrostatic k 2 ≈ 0.42 obtained in a pure water ocean with a d ∼ 100 km ice shell resting on top.Our method of solution considers the Coriolis force in full and avoids the thin shell approximation typically used in tidal studies of icy satellites (Beuthe 2016;Matsuyama et al. 2018;Rovira-Navarro et al. 2023).The additional effort of relaxing the thin-shell approximation allows us to study the dynamic gravity of internal gravity waves that result from the mixing of rotational and stratification effects.We simplify the structure of stratification by assuming a constant N 2 throughout the ocean (equation ( 23)), which translates into a linear increase in salt concentration with depth starting from zero at the ocean surface.More complicated salinity distributions are in principle possible and lead to additional uncertainty in the inferred ocean salinity structure.Our models show that resonant stratification can explain the k 2 enhancement observed by Cassini.We observe an enhancement ∆k 2 beyond +15% when overtones composed of internal gravity waves are resonantly excited by eccentricity tides (Fig. 2).This brings k 2 from 3σ to 2σ away from the mean value of the observation (Fig. 2).In this case, we have used the conservative linear damping γ = 10 −9 s −1 , but our models predict a resonant ∆k 2 beyond +45% when using a still realistic γ = 3.3 × 10 −10 s −1 (Fig. 2), an enhancement that puts k 2 at the mean value of the observation at the saturation point of the resonance.Resonances occur at various H and N 2 , preventing us from identifying a unique ocean thickness and stratification profile based solely on k 2 .In resonant stratification, the salt concentration near the ocean floor can be as low as < 5 g/kg in the simplified model we use here (Fig. 
2; equation ( 23)).The mean salinity can then be less than that for the oceans of Earth or Enceladus (Postberg et al. 2009), while still producing a resonant response.As a general rule, internal gravity waves become resonant when the nodes in the radial displacement of the ocean wave perfectly fit the thickness of the stratified ocean cavity (Fig. 3).We can achieve this resonant fit by adjusting N 2 and changing the radial wavelength of internal gravity waves, or by adjusting the thickness of the stratified ocean H.The number of radial nodes is directly proportional to overtone order and inversely proportional to mode frequency, with lower frequency internal gravity waves having more radial nodes (Unno et al. 1979).Increasing the strength of stratification N 2 shifts the spectrum of g-modes toward high frequency and allows higher order gravity modes to become resonant with the fixed orbital frequency (Fig. 4). Surface radial displacement and interior nonlinear wave breaking The dynamical enhancement ∆k 2 emerges from a tens-of-meter enhancement on the radial displacement of the ocean surface.In our models, this fractional enhancement to ocean surface radial displacement ∆h 2 equals ∆k 2 for any given combination of ocean parameters.We derive this result from combining equations ( 1 resulting in the relationship A direct inspection of equation ( 34) leads to ∆h 2 = ∆k 2 after substitution of the h ℓm and k ℓm that include a base value and a fractional correction.The Love number h ℓm describes the radial displacement of the ocean surface as a function of the equilibrium tide.A h ℓm = 1 implies that the surface follows the shape of the equilibrium tide when the self gravity of the tide is ignored.In a homogeneous fluid body, the average density is ρ = ρ and equation (34) recovers the classical result h 2 = 5/2.In the case of simple Europa interior models, equation (34) reproduces previous numerical results for h 2 when the core/mantle are approximately rigid (Moore & Schubert 2000).In general, equation ( 34) is valid for simple models where the tidal gravity originates entirely from the radial displacement of the surface, as is approximately the case for icy satellites with global oceans overlying roughly rigid rocky interiors.When we consider that Titan's equilibrium tide produces a surface displacement |ξ r | ≈ 26 m, the dynamical enhancement from resonant stratification shown in Fig. 3 leads to tides with surface displacement below |ξ r | ⪅ 40 m in the most dramatic case.We can calculate Titan's equilibrium tide Love number h ℓm using equations ( 8) and (33), Titan responds to Saturn's gravity with an equilibrium tide h 2 ≈ 1.47, which is reduced by −10% when a d ∼ 100 km ice shell is included (Section 2.1.3).We can obtain the radial displacement of Titan's equilibrium tide from equations ( 33), (B13), and (B24), where M is Saturn's mass and m s is Titan's mass.Below the ocean surface, resonant internal gravity waves produce negligible gravity but attain a radial displacement |ξ r | ∼ 1 km in the case of γ = 10 −9 s −1 (Fig. 3).A lower dissipation γ produces even larger resonant amplitudes of tidal motion interior to the ocean (Fig. 
5), with γ = 10 −10 s −1 reaching |ξ r | ∼ 10 km.This large radial displacement is accompanied by a horizontal displacement that is typically ξ ⊥ /ξ r ∼ nR/H ∼ 50 (equation ( C31)) and allows the flow to preserve continuity in an incompressible fluid.Despite the large tidal displacement, our models of resonant tidal excitation remain far from experiencing nonlinear wave breaking when γ ≳ 10 −10 s −1 , as stipulated in |ξk| ≲ 1, the typical criterion to avoid nonlinear wave breaking, where k is the wavenumber.For the radial displacement, this criterion translates to |ξ r | ≲ H/n.In the case of the horizontal displacement, the criterion stipulates ξ ⊥ ≲ πR/m, where m is the azimuthal order.All g-modes shown in Figs. 2 and 3 avoid nonlinear wave breaking by at least one order of magnitude. Heating rate at saturation point Our models also show that resonant stratification produces enough heat to compete with the radiogenic heating generated by decaying isotopes inside solid Titan (Fig. 4).This result is key to allow Titan to reach a stable fixed point that may prolong the crossing of a resonance as the ocean freezes.When in thermal steady state, the heat transported across the ice shell must balance all interior heat sources.If heat is transported by conduction, we can write d ∼ (3 × 10 13 )/ Ėint km (Appendix D), where Ėint is the interior heating rate in W and d is the ice shell thickness.When the ocean freezes until balance with radiogenic heating, Ėint ∼ 3 × 10 11 W (Appendix D) and the ice shell thickness grows to d ∼ 100 km.In this scenario, the ocean plays a negligible role in heating the interior.Resonant stratification changes this picture by introducing an additional heating source when the ocean is still rapidly freezing from secular cooling.In resonant stratification with γ = 10 −9 s −1 , for example, we get a total Ėint ∼ 6 × 10 11 W at saturation point (Fig. 4) and the ice shell thickness becomes d ∼ 50 km while in steady state with internal heat from radiogenic heating and ocean tidal heating combined. The model above is a simple conductive model, but it provides a plausible argument in favor of catching resonant stratification via ocean freezing.Thermal steady state cannot be attained when the ice shell thickness is very thin near at the onset of ice shell formation (d ≲ 50 km for γ = 10 −9 s −1 ); the resonance saturates at peak heating rate (Fig. 4) before producing enough heat to balance conduction across the thin ice shell.Ice shell growth by secular cooling stops at d ∼ 100 km, hence resonant stratification via ocean freezing must catch a resonance before then.This last requirement is not significantly changed when a high abundance of ammonia in the ocean is considered.Ammonia can lower the freezing temperature of the ocean to the eutectic at T ∼ 180 K if present in concentration ∼15 wt.% (Lunine & Stevenson 1987;Grasset & Sotin 1996), reducing the d required for thermal steady state in half independently of whether resonant stratification is in place or not (see equation (D51)). A lower γ = 10 −10 s −1 increases the heating rate at saturation point to an order of magnitude above Titan's radiogenic heating rate, and allows k 2 to grow higher (Fig. 5).A larger γ leads to the damping of resonances, reducing the amplitude of the resonant ∆k 2 and the resonant Ė (Fig. 
5).The hydrostatic flow is not much affected by γ, thus the heating rate is typically increased by a larger γ when far from resonances.Our results are relevant in the range γ = 10 −9 -10 −10 s −1 ; a γ larger than this range damps ∆k 2 below the signal registered by Cassini, whereas a γ lower than the range results in nonlinear breaking of internal gravity waves, an effect not included in our models.In addition to thermal steady state, ocean freezing requires an approach toward the resonance from the branch that produces a positive ∆k 2 (Fig. 2).Approaching the resonance from the negative ∆k 2 branch would depart from the gravity enhancement required by Cassini.A simple freezing model of a fully stratified ocean fails to satisfy this criterion.In this freezing model, the ocean freezes leaving salts in the liquid phase and redistributing them into a linear salinity profile that conserves the total mass of salts M s .Following equation ( 23), the ocean stratification in this case follows Despite its attractive simplicity, a fully stratified ocean freezes following a trajectory that never converges toward a g-mode resonance, independently of the M s assumed (Fig. 6). In an alternative freezing model, a diffusive layer of thickness δ is sandwiched between two layers of constant density, where the top layer has roughly no salinity and the bottom layer is high salinity.Internal gravity waves are excited in the stratified diffusive layer instead of the entire ocean.In this alternative case, the ocean freezes without changing the diffusive layer thickness δ, but increasing the density contrast δρ and consequently increasing N 2 (equation ( 23)), according to where h b is the thickness of the bottom layer with high salinity.The freezing of an ocean with a diffusive layer (i.e., reducing h b in equation ( 38) at constant δ) suggest the possibility of crossing the ∆k 2 resonance from the positive branch (Fig. 6).Further theoretical development is required to determine the effects of a diffusive layer in the ∆k 2 results reported here. A stable fixed point is then established once resonant stratification succeeds at reaching thermal steady state by halting ocean freezing with the additional heating rate provided by the resonance (Tyler 2011(Tyler , 2014)).The secular expansion of Titan's orbit pushes the orbital frequency away from resonance, reducing the heating rate and promoting further ice thickening until ocean waves tune again with the new orbital frequency.This can happen because ice shell thickening by secular cooling can keep up with the orbital evolution.For instance, the thermal adjustment timescale for a 100 km thick shell is d 2 /π 2 κ ≈ 30 Myr; over this timescale the orbital frequency will have changed by 0.5% (Section 4.2).The alternative to a stable fixed point is that Titan has by chance encountered a resonance that will last until further orbital evolution pushes the orbital frequency out of resonance. The two ocean freezing scenarios described above assume a radially isotropic distribution of salts that linearly increases with depth over certain radial distance, either the ocean thickness H or the diffusive layer thickness δ.More complicated distributions of salts are in principle possible.The distribution of salts can show lateral variations in the presence of nonuniform ice shell thickness due to alternating regions of melting and freezing below the ice shell (Ashkenazy et al. 2018;Lobo et al. 2021;Kang et al. 
2022; Kang 2023). Future investigations are required to better understand the impact of various distributions of salts on the dynamics of tidally excited internal gravity waves and on the ocean freezing path that leads to a stable resonance. Resonant stratification via orbital evolution In the absence of a stable fixed point, resonant stratification can still be established by orbital evolution. Titan orbits Saturn at a slower rate than Saturn's rotation, leading to outward migration from tidal torques that arise after tidal dissipation occurs inside Saturn (Lainey et al. 2020). This outward migration imposes a slow drift in the excitation frequency of eccentricity tides. At some point during orbital migration, the frequency of eccentricity tides can match the frequency of a g-mode overtone trapped in the stratified ocean, setting resonant stratification (Fig. 7). This mechanism operates without requiring any specific freezing history for Titan's ocean. The resulting resonance, however, is short lived. Continued orbital migration will eventually break the resonance after pushing Titan through the resonance width δa ∼ 0.02 Rs, where Rs is Saturn's radius (Fig. 7) and we have assumed the reasonable γ = 10^−9 s^−1 (the resonance width depends on dissipation, as shown in Fig. 5). At Titan's current migration rate ȧ/a ∼ 10^−10 yr^−1 (Lainey et al. 2020), the time it would take Titan to cross the g6-mode resonance width is δa/ȧ ∼ 10 Myr. The probability that Titan is simply passing over an ocean resonance is slim as a result of this fast orbital migration. The astrometric observation of Titan's migration (Lainey et al. 2020) indicates that Titan's orbital period has increased by a factor of ∼3 through orbital expansion over the lifetime of the solar system, λ ∼ 4.5 × 10^9 yr, where Ps is Titan's current orbital period and ΔPs and Δa represent orbital parameter changes over the timescale λ. The period spacing of g-modes (Appendix A) allows us to estimate the number of g-modes that Titan crosses over the timescale λ, where ΔPg is the g-mode spacing. Cassini registered Titan's enhanced k2 at no particular time within Titan's interior evolution. The probability that Cassini observed a g-mode resonance motivated uniquely by orbital evolution (i.e., no stable fixed point) is given by equation (41). We obtain #g ∼ 5 and P(g-mode) ∼ 1% when using the reasonable ocean parameters in Fig. 7. Equation (41) is only valid for #g ≳ 2. The low P(g-mode) indicates that resonant stratification is unlikely to be the result of pure orbital evolution (i.e., no ocean freezing involved), yet it is not impossible. A mildly salty ocean versus a heavy ocean When compared to previous studies, resonant stratification only requires a mild concentration of solute dissolved in the ocean to explain Titan's k2 (Fig.
8).With γ = 10 −9 s −1 , resonant stratification yields a k 2 value 1σ closer to the measured central value starting from the hydrostatic k 2 = 0.42.When γ = 3.3 × 10 −10 s −1 , the enhancement over the hydrostatic k 2 is 3σ, crossing the mean value of the Cassini k 2 observation at low salinity (S < 10 g/kg).In the absence of resonant stratification, a heavy convective ocean requires a salt concentration that is on average S ∼ 100 − 200 g/kg higher to produce a similar effect on k 2 , depending on the exact γ (Fig. 8).Previous studies have argued in favor of a heavy convective ocean that holds S ∼ 200 g/kg in salts to obtain a 1σ agreement with k 2 observations.However, water-rock interactions at the bottom of Titan's ocean are expected to produce a limited S ≲ 10 g/kg of salts (Leitner & Lunine 2019). Instead of water-rock interactions, the heavy convective ocean relies on a special event to attain its relatively high salt concentration.Ammonium sulfate ((NH 4 ) 2 SO 4 ) could form in the ocean from reactions between water-ammonia and brine leaching upwards from a core experiencing hydration (Fortes et al. 2007), contributing to S ∼ 200 g/kg of dissolved solute.Unfortunately, this scenario leads to predicted surficial expressions on Titan that failed to be observed during the Cassini mission (Leitner & Lunine 2019).Alternatively, magnesium sulfates can be incorporated into a heavy ocean that is thermodynamically consistent (Vance & Brown 2013) via a late-delivery of salt-rich carbonaceous chondrites (Hogenboom et al. 1995).However, it is not clear whether this delivery mechanism can provide the required salt concentration. Water-rock interactions on Titan's ocean floor (Leitner & Lunine 2019) produce enough salts (S ∼ 10 g/kg) to enable resonant stratification.At S ∼ 10 g/kg salinity, the predicted k 2 is in a 2σ agreement with the Cassini observation when γ = 10 −9 s −1 and within 1σ agreement when γ = 3.3 × 10 −10 s −1 (Fig. 8).Both γ values are realistic and prevent nonlinear wave breaking.This bulk concentration of salts (S ∼ 10 g/kg) is compatible with the average salinity of Earth's oceans and the salinity inferred for Enceladus' ocean from direct sampling of E-ring particles provided by Enceladus' plumes (Postberg et al. 2009).This aspect favors resonant stratification over a heavy ocean because water-rock interactions are better understood than the special event required to explain a high concentration of salts, namely early hydration during the interior differentiation of ices and silicates (Fortes et al. 2007) or a late delivery of carbonaceous chondrites with a special composition (Hogenboom et al. 1995). 
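The salinities quoted in this subsection map onto the stratification parameter N² of equation (23) in a simple way under the constant-gradient assumption ∂rρ = Δρ/H. The sketch below performs that conversion; the haline contraction coefficient (≈7.5 × 10⁻⁴ per g/kg of dissolved salt) is an assumed seawater-like value rather than a number taken from the paper, so the salinities it returns are indicative only.

    # Bottom salinity implied by a constant Brunt-Vaisala frequency N^2 over an
    # ocean of thickness H, reading equation (23) as N^2 ~ (g/rho) * drho/dr with
    # drho/dr = delta_rho / H and salinity increasing linearly from zero at the top.
    g = 1.35          # m/s^2, Titan's surface gravity (from the text)
    beta_S = 7.5e-4   # (g/kg)^-1, assumed fractional density increase per unit salinity

    def bottom_salinity(N2, H):
        return N2 * H / (g * beta_S)   # g/kg at the ocean floor

    print(bottom_salinity(1.4454e-8, 300e3))  # ~4.3 g/kg, cf. the 2.3-4.4 g/kg range of Fig. 2
    print(bottom_salinity(1.4454e-8, 100e3))  # ~1.4 g/kg for a thinner ocean

With this assumed conversion, the resonant configurations discussed above indeed correspond to bulk salinities well below those of a heavy convective ocean.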
Ocean stability to overturning convection The heat flux at the ocean floor provided by interior heating threatens the stability of a weakly stratified ocean.A thermal gradient can counter the stratification produced by a chemical gradient and lead to unstable overturning convection.Convection introduces mixing that can further erase the chemical gradient over time when no salinity forcing is introduced.From a simple balance of the density profile including thermal effects and a salinity gradient (i.e., the Ledoux instability criterion), we require across an ocean thickness H ∼ 300 km to preserve the stability of the mild stratification N 2 ∼ 1 × 10 −8 s −2 discussed before, where α ∼ 2 × 10 −4 K −1 is the thermal expansivity and g ≈ 135 cm s −2 is the surface gravity.This temperature gradient is equivalent to a heat transfer ≲2 × 10 9 W across the ocean by thermal conduction, two orders of magnitude lower than the heating rate expected from radiogenic heating (Appendix D). One might then presume that the heat flux from radiogenic heating should destroy any mild stable stratification.However, this need not necessarily be the case.For example, plumes from hydrothermal vents or volcanoes on the surface of the silicate core may pierce through the stratified ocean in a Rayleigh-Taylor instability (Collins & Goodman 2007) and allow the radiogenic heat from the solid interior to escape outward without triggering overturning convection at the scale of the entire ocean.In this hypothetical scenario, heat passes across the ocean in small lengthscale plumes that do not disturb the large lengthscale structure of stratification required by resonant stratification. Double-diffusive convection (Radko 2013) constitutes another hypothetical scenario that may permit maintenance of the radiogenic heat flux without erasing the compositional gradient.This regime is typically observed when the temperature profile is steeper than the convective temperature profile and less steep than the Ledoux instability criterion (Leconte & Chabrier 2012, e.g.).The convective temperature profile prescribes a ∆T ∼ αHgT /c p ∼ 5.2 K (Turcotte & Schubert 2002) over an ocean with the same properties used in equation ( 42), where c p ∼ 4 J g −1 K −1 is the heat capacity.Assuming thermal equilibrium between conduction through the ocean thickness and the radiogenic heat flux (Appendix D), we require to maintain Ledoux stability (equation ( 42)) of the ocean salinity gradient, where κ is the thermal diffusivity of the ocean.In regimes typical of gas giant planets, numerical simulations suggest that double-diffusive convection can increase the efficiency of heat transport by up to a factor of ∼50 compared to heat conduction (Rosenblum et al. 2011;Mirouh et al. 2012).This enhancement can be thought of as being roughly equivalent to an increase in κ that allows a salinity gradient with N 2 ∼ 10 −8 s −2 to remain Ledoux stable (equation ( 43)).Numerical simulations confirm a strong enhancement of heat transport by double-diffusive convection in the regime of icy satellites (Wong et al. 2022), where, contrary to the gas giant planets, the kinematic viscosity ν is typically larger than the thermal diffusivity κ.Double-diffusive convection is typically accompanied by the evolution of the compositional gradient (Radko 2013;Wong et al. 2022), thus further studies are required to better understand the timescales imposed on the evolution of salinity profiles in icy satellites. 
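The Ledoux-type bound discussed above can be made concrete with round numbers. In the sketch below, the thermal conductivity of water (~0.6 W m⁻¹ K⁻¹) and the representative ocean radius are our assumptions; the other inputs are values given in the text. The result reproduces the quoted ≲2 × 10⁹ W of conductive heat transport compatible with an N² ∼ 10⁻⁸ s⁻² salinity gradient.

    # Maximum temperature gradient (and conductive heat flow) compatible with
    # Ledoux stability of the salinity stratification: alpha*g*dT/dr <= N^2.
    import math

    N2 = 1e-8        # s^-2, target stratification (text)
    alpha = 2e-4     # 1/K, thermal expansivity (text)
    g = 1.35         # m/s^2, surface gravity (text)
    H = 300e3        # m, ocean thickness (text)
    k_w = 0.6        # W/m/K, thermal conductivity of water (assumed)
    r = 2.4e6        # m, representative ocean radius (assumed)

    dTdr_max = N2 / (alpha * g)                       # ~3.7e-5 K/m
    Q_max = k_w * dTdr_max * 4.0 * math.pi * r**2     # conductive heat flow at that gradient
    print(round(dTdr_max * H, 1), "K across the ocean;", f"{Q_max:.1e}", "W by conduction")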
Predictions and future tests Water-rock interactions can occur at the ocean bottom of other icy satellites, thus an enhancement of k 2 by resonant stratification is also possible on Ganymede and Europa.NASA's Europa Clipper mission (Phillips & Pappalardo 2014;Howell & Pappalardo 2020) will measure Europa's k 2 with an expected accuracy of 2% (Mazarico et al. 2023), providing us with a new opportunity to study the interior of an icy satellite.If resonant stratification is a common mode of operation for icy satellites, we should also observe an important enhancement in Europa's k 2 given that the ice shell elasticity only provides small resistance to vertical tidal displacements.ESA's JUICE mission (Grasset et al. 2013) will measure Ganymede's k 2 with even greater precision due to the orbital design of the mission and the improved K-band antenna onboard the spacecraft.Ganymede's tidal response will be measured at the excitation frequency of the various moon-to-moon tides present in the Galilean moon system (De Marchi et al. 2022), in addition to the conventional eccentricity tide raised by Jupiter.This tidal response spectrum will provide a unique opportunity to measure the potential stratification of an ocean regardless of whether resonant stratification is in place or not, in addition to providing further constraints to ocean thickness from sampling high-frequency moon-to-moon tides.One quantity to look for in the tidal response spectrum is the g-mode spacing, which is both sensitive to the degree of stratification and the thickness of the stratified cavity (Appendix A). CONCLUSIONS We calculated Titan's tidal response to eccentricity tides using a new theoretical framework that includes the dynamical effects of tidally excited waves trapped in the ocean.Our results present a new interpretation of Cassini's observation of Titan's Love number k 2 = 0.616 ± 0.067, which is 3 − σ away from the predicted k 2 in an ocean of pure water resting on top a rigid ocean floor.If Titan's ocean is stably stratified, its measured tidal response can be fully reproduced using plausible dissipation factors (γ) without requiring a salinity greater than those of Earth or Enceladus's oceans (Fig. 8).This enhanced response requires the ocean to be set in resonance with the period of the current tidal excitation, namely Titan's orbital period.In one possible scenario, this resonance is encountered as the ocean progressively freezes and develops a deep salty layer (Fig. 6); this situation yields a long-term stable thermal equilibrium with conduction across the ice shell. Studies on the extent to which stably stratified oceans can be maintained against convective mixing would form a valuable theoretical addition to the current work.Processes similar to those hypothesized to be operating at Titan could be in play at Ganymede or Europa, and may be tested with future spacecraft missions Europa Clipper and JUICE.The seismometer expected on the Dragonfly mission (Barnes et al. 2021), or a future Titan orbiter (Sotin et al. 2017, e.g.), might similarly be able to look for evidence of a resonantly-excited ocean. The g-mode frequency spacing constitutes a diagnostic quantity typically used to characterize stratified cavities inside planets and stars (Aerts et al. 
2010; Mankovich & Fuller 2021, e.g.). Following our simple model of constant N across a stratified cavity of thickness H, we obtain the corresponding mode spacing. This expression can be used to characterize the stratification of oceans in icy satellites when a multi-frequency k2 is available to observation, as currently expected from JUICE measurements of moon-to-moon tides on Ganymede (De Marchi et al. 2022). Future icy satellite seismology could provide an alternative observation of mode spacing from the recording of free oscillations on the satellite's surface (Marusiak et al. 2021), assuming that the normal modes are excited beyond the detection threshold. B. THE TIDAL FORCING OF ECCENTRICITY TIDES The gravitational potential ϕT experienced by an observer at r from a concentrated mass located at r′ is inversely proportional to the distance between them, where α is the angle between the two position vectors, ℓ is the degree, and Pℓ are the m = 0 case of the associated Legendre polynomials, with m the azimuthal order. When the concentrated mass M is that of a planet orbiting at a semi-major axis a, the gravitational excitation takes the form of a multipole expansion in powers of r/a. The two lowest-degree harmonics, ℓ = 0 and ℓ = 1, are discarded as they do not disturb the shape of the icy satellite. The addition theorem allows us to express the gravitational excitation in spherical coordinates, following cos α = cos θ cos θp + sin θ sin θp cos(φ − φp), (B7) where r, θ, φ are spherical polar coordinates in a corotating frame fixed to the icy satellite and the subscript p denotes the position of the planet. The gravitational excitation is then written in terms of spherical harmonics, where we use the conventional definition of the spherical harmonics Y^m_ℓ(θ, φ). From fundamental identities of spherical harmonics, the tidal forcing potential for each ℓm then follows. In the standard case of a coplanar circular orbit, we have cos θp = φp = 0, leading to the circular tide with the normalization constant U_ℓm. The effect of eccentricity imposes a change in the semimajor axis and a libration in the position of the planet as seen from the corotating frame on the icy satellite. The resulting tidal excitation potential in a planar orbit follows from these two contributions. The circular tide becomes static (i.e., no time dependence) in a synchronous corotating frame where the spin of the icy world matches the orbital frequency of the planet. Eccentricity tides propagate at the diurnal frequency in both west and east directions for a given m. We observe a perfect superposition between the east m > 0 tide and the west m < 0 tide. As a result, the contributions are typically added to obtain the total forcing, where the upper index on y(r) denotes degree instead of exponent. This equation sets a requirement for the radial and spheroidal displacement fields. The toroidal displacement field is automatically continuous given that it constitutes a rotor on the displacement rather than a relocation of fluid. We estimate Titan's radiogenic heating rate from terrestrial rock samples if we assume a homogeneous distribution of radiogenic elements along heliocentric distance. The abundances of Th-U-K isotopes and their radiogenic decays prescribe a heating rate H ≈ 7.4 × 10^−12 W/kg in Earth's mantle (Turcotte & Schubert 2002). The resulting radiogenic heat production is Ėint ∼ 0.5 H ms ∼ 5 × 10^11 W.
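The radiogenic estimate above, the chondritic alternative, and the conductive-equilibrium shell thickness of equation (D51) quoted in the next paragraph all follow from a few multiplications; the sketch below reproduces the ∼5 × 10^11 W and ∼3 × 10^11 W heating rates and the ∼100 km equilibrium shell for the chondritic case. Titan's mass (≈1.345 × 10²³ kg) is a standard value that we insert here; it is not stated in this excerpt.

    # Radiogenic heating for a rocky fraction of ~half of Titan's mass, and the
    # corresponding conductive-equilibrium ice shell thickness (equation (D51)).
    import math

    m_titan = 1.345e23    # kg, Titan's mass (standard value, assumed here)
    R = 2575e3            # m, Titan's radius (text)
    k_ice = 2.0           # W/m/K, ice conductivity (text)
    dT = 180.0            # K across the shell for a pure-water ocean (text)

    for label, H_rad in [("Earth-mantle rocks", 7.4e-12), ("CV chondrites", 4.5e-12)]:
        E_int = 0.5 * H_rad * m_titan                          # W
        d_km = 4 * math.pi * R**2 * k_ice * dT / E_int / 1e3   # km, equation (D51)
        print(label, f"{E_int:.1e} W,", round(d_km), "km shell")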
Titan's core composition most likely departs from the composition of Earth's mantle due to the presence of undifferentiated iron, in which case the previous estimate is an upper bound.An alternative to the previous estimate comes from using the radiogenic heat production of CV chondrites H ≈ 4.5 × 10 −12 W/kg (Grasset & Sotin 1996;Spohn & Schubert 2003), which results in the smaller Ėint ∼ 3 × 10 11 W. The balance between internal radiogenic heating Ėint and conduction across the ice shell determines the ice shell thermal equilibrium thickness.Titan's icy surface is at T s ∼ 90 K.In a pure water ocean, we can use k ∼ 2 W m −1 K −1 , Titan radius R ≈ 2575 km, and ∆T ∼ 180 K to obtain (Luan 2019) d ∼ 4πR 2 k∆T Ėint ∼ 3 × 10 13 Ėint km. (D51) Titan most likely contains a considerable amount of ammonia dissolved in its global ocean (Lunine & Stevenson 1987;Stevenson 1992;Grasset & Sotin 1996).When dissolved in water, ammonia behaves like an anti-freeze, reducing the temperature of the eutectic to T ∼ 180K in an ocean with 15%wt.ammonia (Grasset & Sotin 1996).In this scenario, the temperature gradient across the ice shell diminishes to ∆T ∼ 90 K and the equilibrium ice shell thickness can be reduced in half compared to the pure water ocean (equation (D51)). Figure 1 . Figure 1.Proposed explanations to Titan's large k 2 as observed by Cassini.The schematic of Titan's interior model has been extracted from de Kleer et al. (2019). Figure 2 . Figure 2. Titan's k 2 enhancement (fractional correction ∆k 2 ) from dynamic gravity as a function of (a) ocean thickness H and (b) the Brunt-Vaisala frequency N 2 .Peaks represent resonances with ocean normal modes.The ocean top salinity is zero and increases linearly with depth.The ocean bottom salinity ranges 0.8 − 1.5 g/kg (H = 100 km) and 2.3 − 4.4 g/kg (H = 300 km) for the range of N 2 shown in (b).Frictional dissipation in (b) is γ = 10 −9 s −1 .The reference hydrostatic Love number is k 2 = 0.42, a pure water ocean with an elastic ice shell with thickness d ∼ 100 km.The tidal frequency is equal to the rotational frequency and the orbital frequency, ω = Ω = ω s = 2π/T orb , where T orb = 15.945days is Titan's orbital period. Figure 3 . Figure 3. Meridional cross section of the radial displacement on Titan's ocean due to tides for selected internal gravity wave normal modes of increasing radial order shown in Fig. 2. Frictional dissipation is γ = 10 −9 s −1 . Figure 4 . Figure 4. Heating rate produced by tidally excited internal gravity waves in the stratified ocean, as a function of (a) ocean thickness H and (b) Brunt-Vaisala frequency N 2 .The dashed line indicates Titan's radiogenic heating Ėint ∼ 3 × 10 11 W (see Apprendix D).The frictional damping is γ = 10 −9 s −1 .Overtones of g-modes are labeled with a subindex representing the mode radial order n.The smaller peaks without a label represent dissipation from resonant inertial wave modes/attractors not discussed here (see Rovira-Navarro et al. (2019), e.g.). Figure 5 . Figure 5. Resonant dynamical gravity signal and heat production as a function of dissipation for g-mode g 6 (ℓ = m = 2 and n = 6).Ocean thickness is H = 300 km.The dashed lines are the same as in Fig. 2, and Fig. 4, respectively for (a) and (b). Approach to a long-lived resonance via ocean freezing Figure 6 . 
Figure 6. Model parameters that produce tidally excited resonances with internal gravity waves (solid lines; see equation (A2)). The positive branch of the ∆k2 resonance extends toward the bottom-left of each solid line. The red crosses represent individual resonances identified in our numerical simulations and shown in Fig. 3. The dashed lines represent simplified freezing trajectories for different models of ocean stratification. The fully stratified ocean model assumes Ms ≈ 8.3 × 10^21 g, which is equivalent to δρ ∼ 0.001 g cm^−3 over an ocean thickness H ∼ 200 km. Figure 7. Same as Fig. 2, but as a function of orbital semi-major axis a. The shaded vertical line represents Titan's current semi-major axis a = 21 Rs, where Rs is Saturn's radius. The ocean is fixed to H = 300 km and N^2 = 1.4454 × 10^−8 s^−2 (see also Fig. 3). The tidal frequency is equal to the orbital frequency and ω0 is Titan's current orbital frequency ωs. The eccentricity is fixed to Titan's current e and we assume synchronous rotation at all times (i.e., Ω = ωs). a = a0(1 − e cos ωt), (B14) φp = 2e sin ωt. (B15) Our strategy is now to expand equation (B11) to first order in e. The semimajor-axis dependence is expanded in the same way, while the libration of the planet expands as e^(−imφp) ≈ 1 − 2ime sin ωt = 1 − em(e^(iωt) − e^(−iωt)). (B21) Making use of the fact that the direction of tides can be flipped by either changing the sign of φ or of the tidal frequency ωs, the eccentricity tidal potential reduces to ϕ^e_ℓm ≈ e(ℓ + 1 + 2m) U_ℓm (r/R)^ℓ Y^m_ℓ(θ, φ) e^(−iωt). (B24) In this paper, all quantities mimic the time dependence of the tidal forcing. East tides correspond to m > 0 and west tides to m < 0. East tides propagate in the direction of rotation, whereas west tides are counter-rotating. Notice that the amplitude of eccentricity tides is 7e-fold compared to the amplitude of the static tide in the case ℓ = m = 2. C. PROJECTION OF DYNAMICAL TIDES ONTO VECTORIAL SPHERICAL HARMONICS We project our equations onto vectorial spherical harmonics (VSH) following the standard decomposition ξ = y1 Y + y2 Ψ + y3 Φ, (C25) where Y, Ψ, Φ constitute an orthonormal basis for the projection of vectorial fields in spherical polar coordinates. The VSH relate to the scalar spherical harmonics (equation (B9)) as Y = Y r̂, (C26) Ψ = r∇Y, (C27) Φ = r∇ × Y, (C28) where the spherical harmonics satisfy r^2 ∇^2 Y = −ℓ(ℓ + 1)Y. We further project the gravity response and pressure-gravity potentials, respectively,
13,428.8
2023-12-10T00:00:00.000
[ "Environmental Science", "Physics" ]
A Proposed Technique to Resolve Transportation Problem by Trapezoidal Fuzzy Numbers Objectives: To find the best optimal solution of a transportation problem in a fuzzy environment. Method: We propose a new method to find the optimal solution. Findings: This study introduces a Median method. By applying it, we transform the fuzzy transportation problem into a crisp-valued one and then apply a newly proposed procedure to uncover the fuzzy optimal solution. We also obtain a minimum transportation cost. Novelty: A numerical illustration demonstrates the new proposed method for handling transportation problems with fuzzy data. Introduction The transportation problem arises worldwide in solving certain real-world problems. This manuscript presents a novel methodology that brings down the optimal solution value; the results are compared with the existing North-West Corner method, Least Cost method, and Vogel's Approximation Method (VAM). Consider ã_i as the number of items available at source ĩ and b̃_j as the number of items required at destination j. Consider α̃_ij as the price of transferring one item from source ĩ to destination j, and X̃_ij as the amount of items carried from source ĩ to destination j. In a fuzzy transportation problem the transportation cost, demand, and supply data are fuzzy quantities. The concept of a fuzzy set was first introduced by Zadeh (1). Zimmerman (2) devised fuzzy linear programming. Srinivasan et al. (3)(4)(5) recommended a novel algorithm to solve fuzzy transportation problems. Ghosh et al. (6) introduced a genetic algorithm to solve fully intuitionistic fuzzy fixed-charge solid transportation problems. Bharati (7) proposed a new algorithm, namely the impact of a new ranking (Progress in Artificial Intelligence), for finding a ranking of a fuzzy number. Muhammad Saman (8) described a new fuzzy transportation algorithm for finding fuzzy optimal solutions. Srinivasan et al. (9) explored a two-stage cost-minimizing fuzzy transportation problem, in which supply and demand are trapezoidal fuzzy numbers, and obtained a fuzzy solution. Karthikeyan and Mohamed (10) proposed a novel algorithm to solve the fuzzy transportation problem with trapezoidal fuzzy numbers. The algorithm proposed here seeks a robust solution to fuzzy transportation problems in which supply, demand, and unit transportation price are trapezoidal fuzzy numbers. In this manuscript, a new and simplified way of taking the Median of trapezoidal fuzzy numbers is recommended. To demonstrate the proposed method, an example is presented. Since the suggested process is very simple and easy to understand and apply, it is straightforward to obtain the fuzzy optimal feasible solution of fuzzy transportation problems occurring in real situations. This manuscript is organized as follows: Section 2 presents the essential definitions of fuzzy numbers. Section 3 introduces the Median procedure and a new algorithm to solve the fuzzy transportation problem. In Section 4, a numerical example is solved to illustrate the proposed method. Section 5 contains the conclusion. Preliminaries In this section, we present some essential definitions that will be used in this manuscript, following Geetha and Selvakumari (11).
Definition: Fuzzy Set A is a fuzzy set on R is defined as a set of ordered pairs where µÃ (x 0 ) is said to be the membership function. Definition: Fuzzy Number A is a fuzzy set on R likely bound to the stated conditions given beneath 1. µÃ (x 0 ) is part by part continuous 2. There exist at least one x 0 ∈ ℜ with µÃ (x 0 ) = 1 3.à is regular and convex Definition: Trapezoidal Fuzzy Number A fuzzy numberà is a trapezoidal fuzzy number which is named as Fuzzy transportation problem utilizing Mathematical formulation A transportation problem can be declared in mathematical form as follows: The fuzzy transportation problem is explicitly represented by the fuzzy transportation table: Recommended algorithm Step -1: Verify given problem is stabled or not. If unstable, change into a stabled one by introducing a model source or model destination utilizing zero fuzzy item transportation expenses. Step -2: The median value is imparted to transform both demand and supply. Step -3: From the first row and first column, the minimum quantity of the fuzzy price is selected. Step -4: Finding the smallest amount of Inventory and Requirement and allocate it. Step -5: Follow the Third and fourth steps, till (s + t − 1) groups are allocated. Numerical example A resolution that we affirm to fuzzy transportation problem which involves transportation cost, customer needs and demands and existence of products using trapezoidal Fuzzy figures. Observe the following transportation problem by Dipankar De (12) . Then find the minimum of the resultant values from the first row and column and allocate the particular cost cell of the given problem. If we have more than one minimum resultant value, we can choose anyone. Solution The same procedure will be followed again and again until we reach the final allocation. Finally, using the new proposed algorithm obtained gives the best possible resolutions are as follows. https://www.indjst.org/ Result Here (4+4-1) = 7 cells are allocated. Next, we can get the optimal solution by means proposed algorithm. Discussion The Conclusion A large number of transportation problems with different levels of sophistication have been studied in the literature. However, some of these problems have limited real-life applications because the conventional transportation problems generally assume crisp data for the transportation cost, the values of supplies and demands. Contrary to the conventional transportation problems, we investigated imprecise data in the real-life transportation problems and developed an alternative method that is simple and yet addresses these shortfalls in the existing models in the literature. In the FTP considered in this study, the values of transportation costs are represented by generalized trapezoidal fuzzy numbers and the values of supply and demand of products are represented by real numbers. Here we concluded that once the ranking function is chosen, the FTP is converted into a crisp one, which is easily solved by the standard transportation algorithms. Therefore, further research on extending the proposed method to overcome these shortcomings is an interesting stream future research. We shall report the significant results of these ongoing projects in the near future.
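Because the Median ranking and the allocation steps are only sketched in this text, the following Python fragment is offered purely as an illustration of the overall workflow the paper describes: define trapezoidal fuzzy numbers, defuzzify them with a ranking function, and then allocate shipments starting from the cheapest crisp cell. The simple average ranking and the greedy least-cost allocation below are stand-ins for the Median method of Steps 2-5, not a reproduction of it, and the cost, supply, and demand values are made up.

    # Illustrative fuzzy-transportation workflow: rank trapezoidal fuzzy numbers,
    # then allocate greedily by the lowest crisp cost. The ranking function and
    # the allocation rule are stand-ins for the paper's Median method.

    def membership(tfn, x):
        """Membership grade of x in the trapezoidal fuzzy number (a, b, c, d)."""
        a, b, c, d = tfn
        if x < a or x > d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)

    def rank(tfn):
        """Crisp (defuzzified) value; the paper uses a median-based rule instead."""
        return sum(tfn) / 4.0

    def allocate(costs, supply, demand):
        supply, demand = supply[:], demand[:]
        alloc = [[0.0] * len(demand) for _ in supply]
        for _, i, j in sorted((rank(costs[i][j]), i, j)
                              for i in range(len(supply)) for j in range(len(demand))):
            q = min(supply[i], demand[j])      # least-cost style allocation
            alloc[i][j] += q
            supply[i] -= q
            demand[j] -= q
        total = sum(rank(costs[i][j]) * alloc[i][j]
                    for i in range(len(alloc)) for j in range(len(alloc[0])))
        return alloc, total

    costs = [[(1, 2, 3, 4), (2, 4, 6, 8)],     # fuzzy unit costs (made-up values)
             [(3, 5, 7, 9), (1, 1, 2, 2)]]
    print(allocate(costs, supply=[30.0, 20.0], demand=[25.0, 25.0]))

Replacing rank() with the paper's Median rule, and the greedy loop with Steps 3-5, would turn this skeleton into the proposed algorithm.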
1,470
2021-05-25T00:00:00.000
[ "Mathematics", "Computer Science" ]
Constrained Optimal Consensus in Dynamical Networks In this paper, an optimal consensus problem with local inequality constraints is studied for a network of single-integrator agents. The goal is that a group of single-integrator a gents rendezvous at the optimal point of the sum of local convex objective functions. The local objective functions are only available to the corresponding agents that only need to know their relative distances from their neighbors in order to seek the final optimal point. This point is supposed to be confined by some local inequality constraints. To tackle this problem, we integrate the primal dual gradient-based optimization algorithm with a consensus protocol to drive the agents toward the agreed point that satisfies KKT conditions. The asymptotic convergence of the solution of the optimization problem is proven with the help of LaSalle's invariance principle for hybrid systems. A numerical example is presented to show the effectiveness of our protocol. 1. Introduction. Over the last decade, cooperative control in a network of autonomous agents have been considered in scientific communities by virtue of big breakthroughs in wireless communication technology. Among these problems, consensus in dynamical networks is a central problem that has been studied from many aspects [15,2,3,16]. In particular, the problem of optimal consensus among networked agents has recently gained considerable attention. In this setup, the final consensus value is required to minimize the sum of individual uncoupled convex functions. For instance, the paper [14] resolved the optimal consensus problem over a network of single-integrator agents with time-varying objective function under the confining condition that Hessians associated with all local convex functions being The results of the current paper provides further developments compared to the existing literature in this area. i) Compared to [18,5,20], in the present approach the agents do not need to exchange the information of their dual variables and can reach optimal consensus by only knowing their relative positions with respect to their neighbors. ii) From design perspective, the penalty-based protocol studied in [18,5,8,21,20,13] only admits linear consensus paradigm. This restricts the protocol illustrated in these references from adopting nonlinear consensus strategies that can in turn deliver fast convergence outcomes, see e.g. [14]. Besides, in the case of high order dynamics, this approach does not work, see e.g. [14,19]. The algorithm introduced here does not have such limitation. iii) Even though the problem studied here is closely related to that of in [14,19,10,8], unlike the current paper, these references only addressed unconstrained optimization. iv) While the references [12,9] explored constrained optimization problems with convex set constraints, the projection operator utilized therein is difficult to handle in real-time specially when a large number of constraints are involved. Since a closed convex set can be approximated by a polyhedron set that is constituted by a set of linear equalities and inequalities, one can cast the optimization problem of [12,9] into the present formulation and adopt an easy-to-handle gradient-based primal-dual method discussed here to resolve it. v) The proposed algorithm does not achieve spiral trajectories toward the final point compared to the existing penalty-based algorithms (see Section 4). This paper is organized as follows. The problem formulation is given Section 2. 
Then, our proposed solution is presented in Section 3. A numerical example is presented in Section 4. Finally, the concluding remarks and suggestions for future studies are given in Section 5. 2. Problem Formulation. Consider N physical agents over a network with timeinvariant undirected graph G = (N , E, A), where N = {1, . . . , N } is the node set, E ⊆ N × N is the edge set , and A = [a ij ] is a weighted adjacency matrix. Each pair e = (i, j) ∈ E indicates link between the node i and the node j in an undirected graph. Suppose that each agent is described by the continuous-time single-integrator dynamicsẋ where x i (t) ∈ R represents the position of agent i, and u i (t) is the control input to agent i. We shall drop the argument t throughout this paper unless it is necessary. It is worthwhile noting that here we consider only one dimensional agents for the sake of simplicity in notations. However, it is straightforward to show that our algorithm can be extended to higher dimensional dynamics as each dimension is decoupled from others and can be treated independently. The agents are supposed to reach at an agreed point that shall minimize a convex optimization problem as in which f i (·) : R → R is the local cost function associated with node i in the network. Furthermore, g i (·) : R → R represents a constraint on the optimal position and is associated with node i. It is supposed that each agent knows only its associated cost function and inequality constraint function. We assumed only one inequality constraint per node for simplicity; our algorithm can solve the same problem with a desired number of inequality constraints. We consider the following assumptions in relation to the problem (2). Assumption 1. (i) The objective functions f i (·), i ∈ N , and g i (·), i ∈ N , are convex and continuously differentiable on R. The Assumption 1 and 2 fulfill the solution existence conditions for the optimization problem (2). Note that the constrained optimal consensus problem that was defined above is equivalent to the following distributed convex optimization problem In the problem (3), the consensus constraint, i.e. x i = x j , i, j = 1, . . . , N , is imposed to guarantee the same decision variable is achieved eventually. Here, the agents with dynamics as in (1) shall seek the optimal point x * i.e. x i = x * , i ∈ N , which minimize the collective cost function N i=1 f i (x i ) in a distributed fashion, given inequality constraints g i (x i ) ≤ 0, i ∈ N . To this end, each agent searches for the minimum of its associated cost function, f i (x i ), with regards to its local inequality constraint, g i (x i ), not knowing other local cost functions and constraint inequalities. Furthermore, all agents shall reach an agreement on their positions through only knowing the relative distances from their neighboring agents. Now, we shall design the control input u i to fulfill these requirements. One can say that the problem (3) consists of a minimization sub-problem, with inequality constraints, and a consensus sub-problem. This splitting is the cornerstone of our approach to resolve the problem (2). The minimization sub-problem can be defined as and the consensus sub-problem is Before proceeding to solving the above mentioned sub-problems, i.e. the minimization sub-problem (4) and the consensus sub-problem (5), we present some optimality conditions for the optimization sub-problem through the following lemma. Later in Section 3, we will use these conditions to show the convergence of our algorithm. 
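Before the lemma below is stated, a hypothetical scalar instance of problem (2) helps fix ideas: take quadratic local costs f_i(x) = ½(x − c_i)² and affine constraints g_i(x) = x − b_i ≤ 0. All numbers in the sketch are illustrative; it computes the constrained optimum in closed form and verifies the KKT-type conditions that the lemma formalizes.

    # Tiny instance of problem (2): f_i(x) = 0.5*(x - c_i)^2, g_i(x) = x - b_i <= 0.
    # The feasible set is {x <= min_i b_i}, so the optimum is the average of the c_i
    # clipped to that set; the multipliers then follow from stationarity.
    c = [1.0, 2.0, 6.0]     # minimizers of the local costs (illustrative)
    b = [2.5, 4.0, 5.0]     # local upper bounds (illustrative)

    x_star = min(sum(c) / len(c), min(b))        # = 2.5: the constraint of agent 1 is active
    lam = [0.0] * len(c)
    i_act = min(range(len(b)), key=lambda i: b[i])
    lam[i_act] = -sum(x_star - ci for ci in c)   # stationarity: sum_i (x*-c_i) + lam_i = 0
    print(x_star, lam)                           # 2.5 and [1.5, 0.0, 0.0]; lam >= 0, g_i(x*) <= 0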
is the optimal solution of the problem (4) if and only if there exist Lagrangian multipliers, λ * i > 0 , i = 1, . . . , N , such that the following conditions are satisfied To solve the minimization sub-problem (4), we focus on the primal-dual method that seeks the saddle point of the Lagrangian associated with convex optimization sub-problem (4). The Lagrangian is defined by is convex inx and concave in λ. We have the following properties for L(x,λ), where x * ,λ * is said to be the saddle point of L(x,λ) [1, p. 238]. The following inequalities hold for all (x,λ) ∈ dom(L) (8), one can define the Lagrangian function for node i as In the sequel, we will use L and L i to denote the aggregate Lagrangian (8) and the Lagrangian corresponding to node i, i.e. (12), respectively. Hence, the main task of this paper is to find the saddle point of (8) while consensus on the agents' states is also achieved. 3. Main Results. We propose the following dynamics to find the saddle point of (8) and satisfy the consensus constraint (5) where α > 0 and is the set of neighbors corresponding to node i. The operators ∇ xi and ∇ λi are the partial derivative with respect to x i and λ i , respectively. Note that −α∇ xi L i + h i acts as the control input for agent i, i.e. In (14), a positive projection is used to ensure that Lagrangian multipliers remain non-negative. For scalars, [p] + q = p if p > 0 or q > 0, and [p] + q = 0 otherwise. When [p] + q = 0, the projection is said to be active. Therefore, in (14) when λ i > 0 and g i (x i ) < 0,λ i < 0 and λ i decreases until it reaches 0 where the projection becomes active and it remains 0 until the sign of g i (x i ) turns. Note that we start with λ i (0) > 0; therefore, λ i ≥ 0 for all t > 0. One can define the set of active projection by σ = {i : λ i = 0, g i (x i ) < 0}. Note that the control command (15), consists of two parts. The first part is to minimize the local cost function and the second part is associated with the consensus error. The following lemma is instrumental to some of the results presented in this paper. Before proving that the algorithm in (13) and (14) yields the saddle point of (8), we show that the positions of agents, x i , i ∈ N , reach consensus, when taking control input as u i = −α∇ xi L i + h i , i ∈ N . This is established in the next proposition. Proposition 1. Suppose that the graph G is connected and undirected. Then, there exists some finite t 1 such that the agents (1) under the protocol (15) Proof. The compact form of dynamics of agents (1) with (15) Let the network's consensus error be defined byē x = Πx, where Π = I N − 1 N 1 N 1 N andx denotes the all states of the whole network, that is defined byx = [x 1 . . . x N ] . Note that 1 Π = 0 and Π1 = 0. Thus, one can writeė where D = [d ik ] ∈ R N ×|E| is the incidence matrix associated with the topology G. And, its entries i.e. d ik , are obtained by assigning an arbitrary orientation for the edges in G. For instance, if one considers the k th edge i.e. e k = (i, j), then d ik = −1 if the edge e k leaves node i, d ik = 1 if it enters node i, and d ik = 0 otherwise. We choose the Lyapunov candidate function V (ē x ) = 1 2ē xēx . By taking time derivative from V (ē x ) along the trajectories ofē x , one can writė where v 2 (DD ) denotes the smallest non-zero eigenvalue of DD . In the above, the first inequality is resulted from Lemma 3.1, and the second inequality is resulted from the assumption ∇ xi L i − ∇ xj L j < ω 0 given in the statement of the proposition. 
From (17), one can say thaṫ where 0 < θ < 1. For ē x ≥ ω 0 α βv 2 (DD ) − θ , we obtainV (ē x ) < 0. Now, we are ready to invoke Theorem 5.1 from [7] that guarantees that by choosing β large enough, one can make the consensus error, δ 0 , as small as desired. Remark 1. Assumption ∇ xi L i − ∇ xj L j < ω 0 in Proposition 1 seems to be unreasonable at the first glance as it assumes that the primal and dual variables x i and λ i , i = 1, . . . , N , must remain bounded. However, we will show by the following lemma that this requirement always holds. It is worthwhile mentioning that by choosing a conservative bound on ω 0 one can adjust the protocol's parameters to reach consensus with any desired accuracy. We now assert that the trajectories generated by the dynamics (13) and (14) are globally bounded. Lemma 3.2. Given that the graph G is connected and undirected, the solutions of (13) and (14) are globally bounded. Proof. We study boundedness of the solutions of (13) and (14) by Lyapunov stability analysis. Let us define a quadratic Lyapunov function as In the above equation, x * ,λ * represents a saddle point equilibrium associated with L(x,λ). By taking derivative from both sides of (19) along the trajectories (13) and (14), with respect to time, we will havė where Suppose that for some index i, the projection becomes active i.e. i ∈ σ. In this case λ i = 0 and ∇ λi L i = g i (x i ) < 0. It is worthwhile noting that λ i < 0 never holds when parameters are initialized by positive values. Thus, in this case one can conclude that (λ i − λ * i )∇ λi L i ≥ 0 due to the fact that ∇ λi L i < 0 and λ * i ≥ 0. On the other hand, for the agents the projection is not active, Thus, we can assert that the following inequality holds. Then, from (9) and (10), we havė The inequality (22) is due to (11). Furthermore, the last equality results from the fact that j∈Ni (x i − x j ) = 0 in a network with the undirected graph G. It is easy to show that − N i=1 x i j∈Ni (x i − x j ) ≤ 0 in an undirected graph. Hence, W (x,λ) ≤ 0, and, thus, the proof is concluded. The dynamics (13) and (14) can be regarded as a hybrid system due to switching projection operator on the right side of the relation (14). Thus, before proceeding to the main result of this section, we introduce the LaSalle's invariance principle for hybrid systems through a lemma first given in [11] and later summarized in [4]. Next, in the light of the above lemma, we express the main result of this section. Theorem 3.4. Assume that f i (x i ) and g i (x i ), i ∈ N , are twice continuously differentiable on R. Given Assumption 1 and 2, the dynamics (13) and (14) will converge to x * ,λ * that is the solution to the optimization sub-problem (4). Proof. To prove the theorem, it suffices to show that dynamics (13) and (14) will converge to a saddle point associated with the Lagrangian function (8). To this end, we split the proof into two parts. We first illustrate that the Lyapunov function is always decreasing. Then, in the second part, we appeal to Lemma 3.3 to establish that the optimality conditions in Lemma 2.1 hold. To examine the above Lyapunov function, we only need to consider two scenarios, namely, the one in which the index set σ changes and the other one where this set is fixed. One should note that in the former case the Lyapunov function (24) might be discontinuous asλ i switches when σ changes according to (14). However, in the latter, the Lyapunov function (24) is continuous. 
In the following, we establish that in both cases the positive function (24) is always non-increasing. We first assume that σ is fixed. Taking derivative of V (ẋ,λ; σ) along the trajectories (13) and (14) with respect to time, we obtaiṅ The above equations can be simplified by expanding some of its terms into two cases, namely, i ∈ σ and i / ∈ σ. Note that when i ∈ σ, λ i = 0,λ i = 0. Thus, we can Then after a simple algebraic simplification, it is easy to verify thaṫ From the definition of h i , we attain the following equality: Now, with substituting (28) in (27), we obtaiṅ From Assumption 1 and that β α > 0, it is attained thaṫ V (ẋ,λ; σ) ≤ 0. In the following, we will show that the same property holds even when the set σ changes. Consider conditions under which the index set σ varies: (1) Consider the case at given time index, say t 0 , the index set σ is enlarged. This happens when there is a larger number of constraints with g i (x i (t + 0 )) < 0 compared to those with Here t − 0 and t + 0 stand for the moment just before and after t 0 , respectively. (2) Now suppose that the index set σ shrinks. This case occurs when the set loses a constraint i at time t 0 and g i (x i (t + 0 )) becomes positive. Since g i (·) is a continuous function and x i is continuous as well, it can be said that this function has passed through zero to become positive. The latter supports that the new termλ 2 i is added to V (ẋ,λ; σ) but since g i (x i (t + 0 )) = g i (x i (t − 0 )), no discontinuity happens. Therefore, one can say V (ẋ,λ; σ) does not change in this case and, therefore, remains non-increasing according to (30). Now, we invoke Lemma 3.3 that presents LaSalles invariance principle for hybrid systems. From Lemma 3.2, we conclude that whole space R 2N represents an invariant set for the hybrid dynamics (13) and (14). On the other hand, in the first part of the proof, we showed that the Lyapunov function (24) decreases along the trajectories produced by (13) and (14). According to the statement of Lemma 3.3 there should exist maximal invariant set, say M , that satisfies conditions (a) and (b) stated in Lemma 3.3. In the sequel, we will show that (13) and (14) will stabilize at the point in which conditions (a) and (b) are met; moreover, the KKT conditions (6) and (7) are also fulfilled. We first attend to part (a). From the equation (29), we attainẋ i = 0, i ∈ N , i.e.x ≡x * since one can derive from (13) thatx is continuous. Also, (7) is satisfied. As forλ, assume that g i (x * i ) > 0, then, λ i will grow unboundedly that it contradicts its boundedness shown earlier in Lemma 3.2. Therefore, g i (x * i ) ≤ 0, then two possible cases happen: i) λ i would decrease until it reaches at zero, producing a discontinuity once the projection becomes active. This would contradict with part (b) of Lemma 3.3. ii) λ i = 0; the corresponding projection is active for some i. Thus, g i (x * i ) ≤ 0 and λ * i = 0 always hold, and, (6) is met. In the above, we showed that the equilibrium point of the dynamics (13) and (14) is a saddle point of the Lagrangian function (8), and in the light of Saddle Point Theorem [17,Theorem 4.7], it is the optimal solution to (4). One should note that through Proposition 1, we showed consensus on states, i.e. x i = x j , i, j = 1, . . . , N . Furthermore, by Theorem 3.4, we proved that the control inputs (15) drive the agents towards the saddle point of the Lagrangian associated with (4). 
Hence, the optimal consensus problem (3) associated with the network of single-integrator agents (1) is resolved.

Remark 2. There is a trade-off between the size of the control command and the permitted consensus error when selecting the parameters α and β. As β increases, according to Proposition 1, the consensus error becomes smaller while the control input grows larger. On the other hand, a small α decreases the consensus error; however, it also decelerates the optimization process.

4. Simulation Example. As mentioned earlier, the results of this paper also hold when agents evolve in higher dimensions, i.e. x_i ∈ R^m, with one single-integrator dynamics per coordinate. We exploit this fact and consider the following scenario, which clearly exhibits the results of this paper through a numerical simulation. Consider four agents that move in a 2-D space and are connected under a ring topology. Their local objective functions are f_i(x_i1, x_i2), i = 1, . . . , 4. Agent 1 has the local constraint g_1(x_11, x_12) = −x_11 − x_12 + 1 ≤ 0. Agent 2 is subject to the constraint g_2(x_21, x_22) = x_21² + x_22² − 2 ≤ 0. Agent 3 has the local constraint g_3(x_31, x_32) = x_31² + x_32² − 1 ≤ 0, while agent 4 has no constraint. Let α = 0.1 and β = 10 be the coefficients of the control law (15). Under the control law (15), the trajectories of the agents' positions are shown in Fig. 1 when the initial positions of agents 1, 2, 3, and 4 are set to x_1 = (2, 3), x_2 = (1, 4), x_3 = (3, 4), and x_4 = (5, 0), respectively. We set the initial values of the Lagrangian multipliers to zero. The optimal solution of the problem is (0.85, 0.53). Among the many existing penalty-based algorithms, and due to page limitations, we only compare our result with the algorithm proposed in [21] on the above example (see Fig. 2).

Figure 2. Simulation results of [21]: states' trajectories for a ring network of single-integrator agents.

As observed, with the primal-dual dynamics proposed in [21], the agents spiral around the optimal point in a convoluted manner before reaching it. Such trajectories impose excessive energy consumption and are impractical to realize.

5. Conclusion. We studied the constrained optimal consensus problem for an undirected network of single-integrator agents. We proposed a fusion algorithm in which: i) a primal-dual gradient method is used to satisfy the KKT conditions of the constrained convex optimization problem, and ii) a consensus protocol is adopted to make all agents reach the agreed optimal value. Through the theory of stability of perturbed systems, we showed that this algorithm achieves consensus. Moreover, adopting LaSalle's invariance principle for hybrid systems, we proved that the equilibrium point of the network's dynamics coincides with the optimal solution of the optimization problem. Finally, we illustrated the performance of the proposed algorithm through a numerical example.
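For illustration, the following sketch integrates projected primal-dual dynamics of the form of (13) and (14) on the ring network of Section 4 by forward Euler. It assumes quadratic local costs (the example's exact f_i are not legible above), a consensus term h_i = β Σ_{j∈N_i}(x_j − x_i), and an α-scaled multiplier update; these are assumptions made for the sketch, not the paper's exact definitions.

```python
# Sketch of projected primal-dual consensus dynamics in the spirit of (13)-(14),
# forward-Euler integrated on the ring network of Section 4. Local costs f_i are
# hypothetical quadratics; h_i and the alpha scaling of the multiplier update are assumed.
import numpy as np

alpha, beta, dt, T = 0.1, 10.0, 1e-3, 40.0
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}          # ring topology

centers = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 1.0], [1.0, 2.0]])  # placeholder cost centers
def grad_f(i, x):                    # f_i(x) = ||x - c_i||^2  (placeholder)
    return 2.0 * (x - centers[i])

def g(i, x):                         # local constraints from Section 4 (agent 4: none)
    return [-x[0] - x[1] + 1.0,
            x[0]**2 + x[1]**2 - 2.0,
            x[0]**2 + x[1]**2 - 1.0,
            -1.0][i]                 # constant negative stand-in for "no constraint"

def grad_g(i, x):
    return [np.array([-1.0, -1.0]), 2.0 * x, 2.0 * x, np.zeros(2)][i]

x = np.array([[2.0, 3.0], [1.0, 4.0], [3.0, 4.0], [5.0, 0.0]])   # initial positions
lam = np.zeros(4)                                                # multipliers start at zero

for _ in range(int(T / dt)):
    x_new = x.copy()
    for i in range(4):
        h_i = beta * sum(x[j] - x[i] for j in neighbors[i])              # consensus term
        u_i = -alpha * (grad_f(i, x[i]) + lam[i] * grad_g(i, x[i])) + h_i
        x_new[i] = x[i] + dt * u_i                                       # dynamics like (13)
        gi = g(i, x[i])
        lam_dot = gi if (gi > 0.0 or lam[i] > 0.0) else 0.0              # positive projection
        lam[i] = max(lam[i] + dt * alpha * lam_dot, 0.0)
    x = x_new

print("final positions:", np.round(x, 2))   # all rows should (approximately) agree
```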
5,359.2
2018-03-13T00:00:00.000
[ "Engineering", "Computer Science" ]
Quantum-gravity effects could in principle be witnessed in neutrino-like oscillations

Two of us (CM and VV [2]) recently showed how the quantum character of a physical system, in particular the gravitational field, can in principle be witnessed without directly measuring observables of that system, solely by its ability to mediate entanglement between two other systems. Here we propose a variant of that scheme, where the entanglement is again generated via gravitational interaction, but now between two particles both at sharp locations (very close to each other) but each in a superposition of two different masses. We discuss an in-principle example using two hypothetical massive, neutral, weakly-interacting particles generated in a superposition of different masses. The key property of such particles would be that, like neutrinos, they are affected only by weak nuclear interactions and gravity.

The domain where quantum theory and theories of gravitation intersect is traditionally probed by considering experiments using a single mass in a superposition of two different locations, as in Feynman's thought experiment [1]. Two recent schemes have been proposed extending this tradition [2,3]. Their key innovation is to use the gravitational field as a mediator to produce entanglement between two masses, each prepared in a quantum superposition of two different locations. As explained in [2], the entanglement generated between the two masses (in the path degree of freedom) is an indirect witness of the quantisation of the gravitational field, in that it requires the field to have at least two variables that cannot be simultaneously measured. Here we discuss a hypothetical variant of that scheme, where the gravitational interaction takes place between two particles that are each in a superposition of two different masses. Any particle that is in a superposition of two nondegenerate eigenstates of its Hamiltonian is in a superposition of two different generalised masses. Typically, gravitational effects on, or of, particles in such superpositions are too weak to be detected, and are swamped by decoherence from other sources. But not, for example, for neutrinos emitted in the phenomenon of double-beta decay, since neutrinos interact exclusively via gravity and the weak interaction. The latter has an extremely short range (about 10⁻¹⁸ m) and is therefore negligible at the scale of the size d of the detector, which we envisage as something like a nucleus (10⁻¹⁵ m). Neutrinos are neutral particles produced in β-decay [4], for instance the decay of a neutron into a proton, an electron and an electron-antineutrino. (For brevity we shall omit "anti" in what follows.)
In the Standard Model, neutrinos were originally assumed to be massless; but the phenomenon of neutrino oscillations reveals that this is false [4]. In fact a neutrino can exist in at least three different flavours, each eigenstate of flavour being a superposition of different states of mass. Here we shall calculate the effect of the gravitational interaction between two hypothetical neutrino-like particles. As we shall see, the interaction affects the period of the oscilla-tions, which could in principle be used to demonstrate, or witness, gravitationally-mediated entanglement between the two particles. Two observables are relevant for the oscillations of a single neutrino-like particle: one is its Hamiltonian H (depending only on the internal degrees of freedom), with three eigenstates |m i , which we label by the value of the rest mass, m i . The other observable, which does not commute with H, is the flavour, with three eigenstates |ν i in the flavour subspace. We first analyse the oscillation of an isolated particle; and then we will see how the effect of the mutual interaction with another like particle changes the phases, thus providing the desired witness of entanglement. We shall work in the particle's rest-frame and assume that, like the real neutrino, it is created initially in a flavour eigenstate ν 1 , which, for simplicity, we shall assume is a superposition of two of the eigenstates of the Hamiltonian: where θ is the mixing angle -a fixed parameter which, as for neutrinos, would depend on the physics of weak interactions [4]. Now, suppose that the created particle evolves freely under its Hamiltonian H; and then at time t its flavour is measured. The state of the particle just before the measurement is: Thus the "survival" probability for particle still to be in the initial flavour eigenstate |ν 1 when the detection happens is where ∆m = m 2 − m 1 . In our thought experiment, we are interested in the effects of gravitational interactions between two of these particles that are produced very close together, simultaneously. Since the particles interact gravitationally, the phases of the flavour components of their states are further modified by the gravitational potential, at each of them, due to the other. As proved in [2], this causes entanglement which is a witness of the quantum nature of the gravitational field. So, consider two such particles separated by a distance d, where d is about the size of a nucleus. Initially they are both in the same state |ν 1 , i.e., their state is |ν 1 |ν 1 . Let us calculate the phase shift due to their mutual gravitational interaction. As a result of the interaction, the initial state of the two particles |ν 1 |ν 1 evolves into the state Thus, the degree of entanglement varies with t. The probability that a detector acting at time t will detect the flavour eigenstate |ν 1 is now modified by the gravitational interaction: The additional gravitationally induced phase Φ G = G m(∆m) d t would be extremely small for neutrinos, because of their tiny masses (of the same order as ∆m ≈ 10 −38 kg). But we can imagine a hypothetical neutrino-like particle with much larger masses. Suppose for example that they are of the order of the Planck mass m = 10 −8 kg. (Note that the ratio Φ G Φ , where Φ = ∆mc 2 t , depends on m 1 + m 2 , but not on ∆m). The particle could then not be created by a nuclear decay, since nuclei aren't that massive. 
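As an order-of-magnitude sketch (not the paper's exact two-particle expression, which is not reproduced above), one can take the standard two-flavour survival probability P(t) = 1 − sin²(2θ) sin²(∆mc²t/2ħ) and let the mutual gravitational interaction simply shift the oscillation phase by Φ_G = Gm∆m t/(dħ):

```python
# Order-of-magnitude sketch of the oscillation phases discussed in the text.
# Assumes the standard two-flavour survival probability and that the mutual
# gravitational interaction simply shifts the oscillation phase by Phi_G;
# the paper's exact two-particle expression is not reproduced here.
import numpy as np

hbar = 1.055e-34      # J s
c    = 3.0e8          # m/s
G    = 6.674e-11      # m^3 kg^-1 s^-2

m     = 1.0e-8        # kg, roughly the Planck mass (average mass of the particle)
dm    = 1.0e-25       # kg, mass difference Delta m
d     = 1.0e-15       # m, separation of the two particles (nuclear scale)
theta = np.pi / 8     # mixing angle (illustrative value)

def survival(t, grav=True):
    phi   = dm * c**2 * t / hbar                          # free oscillation phase
    phi_g = G * m * dm * t / (d * hbar) if grav else 0.0  # mutual gravitational phase
    return 1.0 - np.sin(2 * theta)**2 * np.sin((phi + phi_g) / 2.0)**2

t = 1.0e-5            # s, proper time of flight (L/v ~ 0.1 s at gamma ~ 1e4)
phi_g = G * m * dm * t / (d * hbar)
print(f"gravitational phase Phi_G ~ {phi_g:.2f} rad")     # a few radians: non-negligible
print("P(survive) with / without gravity:",
      round(survival(t, True), 3), round(survival(t, False), 3))
```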
It would have to come from some much more energetic event about whose nature we need not speculate, except to say that it must happen at a location fixed to an accuracy d. Given our assumption that, like neutrinos, the particle interacts solely via the gravitational and weak nuclear forces, that still-tiny, gravitationally induced phase change Φ_G could be detectable in principle. For suppose that the source and the detector are a distance L apart, so that detection occurs at a time L/v (in the laboratory frame) after emission, where v is the speed of the particle. The detection probability as a function of L will be periodic with wavelength λ = 2πcγ/ω, where ω = ∆mc²/(2ħ) and γ = 1/√(1 − v²/c²) is the Lorentz factor. If we assume that the detector has the same size d as the source, the condition for this spatial variation to be detectable is λ > d. We also assume that the detector can distinguish a 2-particle hit from a 1-particle hit. The latter will typically be from one member of a pair whose other member travelled in a different direction, so that the detected particle was subjected to a negligible mutual gravitational interaction on its journey, leading to a negligible gravitational phase shift. Hence at distances L where Φ_G = (Φ_G/Φ) ∆mc²L/(ħγv) = (n + 1/2)π for integer n, the oscillations of affected and unaffected particles will be out of phase, so that only single- (or alternately only double-) particle hits will be detected. As the particles travel to the detector, their wave packets will spread. If the initial spread in position is δ, the spread in speed can be as little as ħ/(γmδ), so the final spread in position will be about δ + ħL/(γmvδ), which can be as low as 2√(ħL/(γmv)); so we must have 2√(ħL/(γmv)) < d/2. These conditions are met for a range of values of v and ∆m: for instance d ≈ 10⁻¹⁵ m, γ ≈ 10⁴, L/v ≈ 10⁻¹ s, and ∆m ≈ 10⁻²⁵ kg. The different gravitational potentials due to other masses, at the two paths that are d apart, would also contribute to the phase. We may suppose that the experiment is done in free fall, far from all large masses. For a mass M at a distance R not to swamp the effect we are measuring, the differential phase it induces across the separation d must remain much smaller than Φ_G. Particles with the parameters we have considered easily satisfy this condition if the experiment is feasible in Earth orbit (with, say, R ≈ 10⁷ m and the mass of the Earth about 6 × 10²⁴ kg).
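The detectability conditions quoted above can be checked numerically with the paper's illustrative parameter choices. The formulas below are the reconstructed ones used here, with ħ restored, so the numbers should be read as order-of-magnitude estimates only.

```python
# Order-of-magnitude check of the detectability conditions quoted in the text.
import numpy as np

hbar, c, G = 1.055e-34, 3.0e8, 6.674e-11
d, gamma, t_lab, dm, m = 1.0e-15, 1.0e4, 1.0e-1, 1.0e-25, 1.0e-8   # SI units; t_lab = L/v

v = c                                   # ultra-relativistic, v ~ c for gamma >> 1
L = v * t_lab

# Oscillation wavelength of the detection probability versus L
omega = dm * c**2 / (2 * hbar)
lam = 2 * np.pi * c * gamma / omega
print(f"lambda = {lam:.2e} m  (need lambda > d = {d:.0e} m)")

# Wave-packet spreading over the flight
spread = 2 * np.sqrt(hbar * L / (gamma * m * v))
print(f"min. spread = {spread:.2e} m  (compare with d = {d:.0e} m)")

# Size of the gravitational phase relative to pi
t_proper = t_lab / gamma
phi_g = G * m * dm * t_proper / (d * hbar)
print(f"Phi_G = {phi_g:.1f} rad  (of order pi, so the shift is resolvable)")
```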
2,238.6
2018-04-08T00:00:00.000
[ "Physics" ]
The dialectics of disaster: Considerations on hazards and vulnerability in the age of climate breakdown, with a brief case study of Khuzestan

In a historical moment inundated by disasters, understanding and conceptualising the phenomenon is a matter of some importance. No framework for doing so has been more productive than that developed by Wisner and his colleagues. But their so-called 'Progression of Vulnerability' (pressure and release [PAR] model) framework was conceived before the onset of the climate crisis. And that crisis, as the saying goes, changes everything. Contribution: What follows is an immanent critique of the framework, with an eye towards shifting some of its parameters in order to account for the process of climate breakdown now multiplying disasters across the globe.

In a sense, these iconoclasts simply returned the concept of vulnerability to its etymological roots. The Latin term vulnerabilis was 'used by the Romans to describe the state of a soldier lying already wounded on the battlefield, i.e. already injured [and] therefore at risk from further attack' (Kelly & Adger [2000] 2009:163). On this classical view, vulnerability is a condition inflicted on someone by some human antagonist. It makes the former liable to receive the next strike as a deathblow. 'Disaster' is the name for the fall of the blow; or, it 'marks the interface between an extreme physical phenomenon and a vulnerable human population', as Wisner and colleagues put it in their seminal early paper in Nature (O'Keefe et al. 1976:566). The 1970s also appeared to be a moment of disasters galore. Data indicated a surge in their number and casualty tolls. What accounted for this disturbing trend? In a move that cleaned the slate for their emerging framework, Wisner and colleagues argued: No major geological or climatological changes over the last 50 years adequately explain the rise. There is little argument about geological change, but there has been much mystifying argument about climatic change, especially following the prolonged drought over the African and Asian continents. But no firm conclusion can be drawn about changing climatic conditions from available evidence. Randall Baker at the Development Studies School of the University of East Anglia recently reviewed all the evidence of climatic change in Africa and offered the Scottish judgment of 'case not proven'. Even if some long-term change was observable it would not explain the increase in disaster occurrence observed in the data. (Wisner, O'Keefe & Westgate 1977) The last statement is hard to make sense of as anything else than an axiomatic rejection. Had these words been written today, they would count as climate denial. But they were, of course, penned a decade before modern climate science matured, and therefore they escape that tarnish. The substantial point is that Wisner et al.
erected their model on the premise of a stable climate system.Their deduction continued: If it is accepted that there has been no major geological and climatological change in recent years, then it can be assumed that the probability of the extreme physical occurrence is constant. 1If the probability is constant, then logically the explanation of the increasing numbers of disasters must be sought in an explanation of the growing vulnerability of the population to extreme physical events.(O'Keefe et al. 1976;cf. Susman, O'Keefe & Wisner 1983:265) In the inaugural issue of the journal Disasters, the case against climatic change was restated and the argument taken to its conclusion: More and more people are becoming vulnerable to the occurrence of certain physical events which have been occurring with a certain mathematically reconstructable probability for centuries, if not millennia.It is in the phenomenon of vulnerability -that is, on the human side of the man-nature relationship -that an explanation is to be sought.et al. 1976:566).That is why people are left exposed to the full force of natural blows: they 'die in disasters chiefly because insufficient money is spent saving lives' (Anon 1977:1), the first editorial of the journal Disasters declared.As an alternative approach, this had much logic and data to speak for it. Over the 1980s and 1990s, the counter-paradigm solidified under the labels 'contextual', 'second generation ', 'social' and 'critical' vulnerability studies (cf. Bolin 2007); here, we shall use the latter term.In the landmark volume Interpretations of Calamity: From the Viewpoint of Human Ecology, contributors pressed home one cardinal idea: disasters are not chance events 1.At a closer look, this is a fallacy: it does not follow from an observation of climate stability 'in recent years' that 'the probability of the extreme physical occurrence is constant' -constancy is not proven by conditions over a few years.If climate was stable in the 1970s, it could well collapse in the 2010s.A statement like this should perhaps be read as an indication of the perception of the climate system as having a guaranteed stability, still widespread in the 1970s: see Weart (2003). or 'acts of God' that erupt into ordinary life.Rather they should be seen as the starkest truth about that life, whose inner structure they bring to light (e.g.Hewitt 1983b:25). 2In his chapter, Michael Watts drew on research from northern Nigeria, later elaborated into a classic of political ecology: during a drought in the region, rich households stood the test thanks to the large size of their cattle herds and other assets, whereas the poorer ones bit the dust.Some owned the means for survival, others did not.Deaths and losses were not in any profound sense caused by the drought -it was at most 'a catalyst' -but rather by selective pressures inhering in the prevailing property relations (Watts 1983:258).In this scheme of things, what goes on in nature sensu stricto is almost beside the point. Modelling social release Critical vulnerability studies received its fullest and most eloquent exposition in At Risk: Natural Hazards, People's Vulnerability and Disasters, again written by Wisner and colleagues.Following the line of inquiry pursued since the 1970s, At Risk fleshes out the argument that nature is peripheral to the outcomes and adversity, a fact of life. 
Whether one can deal with it is a matter of having enough land to farm, adequate access to water, a stash of jewellery or a shed of tools to use in need, and this is strictly 'determined by social factors' (Wisner et al. 2004:6).Most fundamental of these are 'relations of production and flows of surpluses' (Wisner et al. 2004:91), since they decide what cushions an agent can dispose over.A population is divided into classes, and further into genders, ethnicities, age groups, citizens and migrants with antithetical positions: some wounded on the battlefields of exploitation, discrimination, persecution and oppression, others decked out in shining armour and ready for anything. Seeping out from Marxism and into mainstream academia, the basic insight about vulnerability is now easy to come by.Wisner et al. have made it enormously influential through 'the pressure and release model' (PAR).Disaster strikes as a clash between two magnitudes: socially determined vulnerability from the one side, natural hazards from the other.In between, people are squeezed or crunched as in a 'nutcracker' (Wisner et al. 2004:50).What truly accounts for the result, however, are the goings-on to the left, as depicted in Figure 1.There is a 'progression of vulnerability', a sequence of causation running from 'root causes' via 'dynamic pressures' to 'unsafe conditions'.Capitalist development marked by deep inequalities (root cause) leads to, among other things, corporate appropriation of land and accelerated urbanisation (dynamic pressures) that impoverish people and drive them to build homes on steep hillsides (unsafe conditions) -and then comes the deluge.The hazard is but a trigger that 'releases' the social pressures long accumulated: geophysicalism turned inside out. 2.It has been argued that Hewitt (1983a) and his contributors 'simply reinvented the wheel', since critical voices already in the 19th century claimed that famine in the colonies were the product not of nature, but of colonial oppression (Brookfield 1999:3).Similar claims were, of course, made by Karl Marx, for example in the case of the Irish potato famine (Marx 1990(Marx [1867]]).But this was apparently a wheel in need of reinvention -with Marx's writings as manuals -given its enormous subsequent influence in vulnerability research. But if this is the final word in, or the consummate model for, critical vulnerability studies, a question imposes itself.Where does 21st century climate breakdown fit in? Barely had the ink dried on the pioneering articles by Wisner and colleagues before it became clear that the scientific community had been overestimating the stability of the climate system and, more precisely, its impermeability to human -that is, social -influence.The belief in the constancy of extreme weather events was put to rest by the discovery of global warming (Weart 2003).Turbulence was injected to the right in the model, the side its originators had wanted to get away from.If the hazards themselves were to redouble in frequency and ferocity and then redouble again, it would be impossible to keep the 'root causes' of disaster to the furthest left; for the model to work, the storms and the floods, the landslides and the droughts had to be bracketed as chance events with a fixed probability. Only then could focus be shifted to the perceived social side of the equation. 
The analytical difficulties the discovery posed to critical vulnerability studies were apparent in an early attempt to integrate it.Here Wisner and colleagues suggested that climate change is 'a natural phenomenon but one which is caused by anthropogenic emissions of greenhouse gases' (O'Brien et al. 2006:68).In the same piece -ironically, in connection with a discussion of Intergovernmental Panel on Climate Change (IPCC) findings -the old article of faith was reiterated: 'Most disasters, or more correctly, hazards that lead to disasters, cannot be prevented.But their effects can be mitigated.(…) Hazards may be natural in origin, but it is the way in which societies have developed that causes them to become disasters' (O'Brien et al. 2006:65;emphasis added). Such phrasing came close to naturalising climatic hazards and portraying them as acts of God, beyond human influence, impossible to mitigate other than post festum -through successful adaptation, in standard terminology. Similar ad hoc accommodation of climate science is on display in At Risk.At one point, Wisner and colleagues classify global warming as yet another 'dynamic pressure' to the left, between root causes and unsafe conditions (Wisner et al. 2004:33).This is imprecise.Global warming can scarcely be conceived of as a vector analogous to, say, a debt trap or lack of skills, leading over to unprotected buildings or low incomes.Analytically, it belongs firmly to the right side of the equation, as an engine of hazards to which people are more or less vulnerable.And in their most explicit treatment of the subject in their book, Wisner and colleagues recognise precisely this reversal: In relation to famine, climate change principally acts as a trigger through drought, the shifting of the timing of rainfall and its seasonal patterns (e.g. the Asian monsoon), floods which disrupt the production and distribution of food and the possible spread of disease to humans, livestock and crops.All of these increased risks are almost certainly caused by human action (in relation to greenhouse gases) and relate to social vulnerability and to pre-existing 'normal' levels of hazards.But with climate change, human action is responsible for both the generation of people's vulnerability and the increased level of hazard.(Wisner et al. 2004:121;emphasis in original;see also p. 83, 114, 149, 213) Later on, we learn that 'consumption of fossil fuels has begun to change the earth's climate, with a whole series of consequences for food security and health' (Wisner et al. 2004:195).Such consumption whips up entirely unprecedented amounts of drought, flood, disease and other disastrous If these findings are taken on board, the conceptual framework of critical vulnerability studies starts to fray.Or, more sharply put, the pressure-and-release model explodes in a rightwards direction: the social is no longer on the left side solely.It has saturated the hazards themselves. 3It turns out that the model rested on an overestimation of the purely natural in nature (or of the purely social in society), tucking away the hazards in a black box when they have themselves become effluents of the prevailing property relations.As critical vulnerability theory once negated geophysicalism, a negation of the negation seems called for. 
4 Towards a dialectical model of climate disaster Since models of this sort are meant to be heuristic devices, we can allow ourselves some stylised simplifications.We want a fuller, more dialectical model of climatic disaster.Extending the artwork from Wisner et al., we might picture it something like as shown in Figure 2. Other factors could be inserted into this model, which is in the nature of a sketch.But the basic point that marks it off from both geophysicalism and pressure-and-release is that similar social drivers are active on both sides of the equation -not as omnipotent causes to which nothing can be added, but as non-negligible prime movers.This more fully realises the metaphor of the nutcracker, a tool that works by the same force pressing both handles.The naturalness is removed from natural disasters, in the gestation of the extreme geophysical occurrences and the vulnerability of the bodies on which they strike.The enemy that deals the deathblow has first done the damage.A climate disaster lights up the truth about society: about its dependence on fossil fuels and its impoverishment of people, which are not necessarily distantly related.A climate disaster releases the pressure of excess energy stored in the system.Now, if the age of climate breakdown is characterised by uncontrolled speed-up in the production of natural hazards, this would seem to indicate a progressive skewing of the model, with the right side weighing heavier and heavier in disaster causation.Will the splitting of humanity into rich and poor matter when ever-more extreme climate impacts slam into it?Dipesh Chakrabarty has made the (somewhat infamous) argument that this particular crisis suspends the 3.This is emphatically not to suggest that nature is hereby 'constructed' or 'produced' or 'ended' by society: rather to the contrary.For some considerations on this dialectic, see Malm (2018). 4.In fact, Wisner and two colleagues did attempt to take account of the social saturation of left-and-right hand sides in a mechanical manner by adding a feedback arrow to PAR.However, this amounts only to an acknowledgement of the problem posed for PAR, not a detailed solution.See Wisner, Gaillard & Kelman (2012). internal divisions of our species: 'Unlike in the crisis of capitalism, there are no lifeboats here for the rich and privileged' (Chakrabarty 2009:221).But this holds only in the very long run.If average temperature on earth rises by 6 or 8 or 12 degrees Celsius, surely the richest will drown and burn too.Long before that, however, poor people will have perished: during the early stages of global heating, which areand this is of the greatest importance -also the stages when the process can still be slowed down and possibly reversed, the suffering will be concentrated to poles of deprivation (cf. Malm & the Zetkin Collective 2021).If climate breakdown spells the destruction of all manner of biophysical resources, the people who sustain the worst and first losses will, virtually by definition, be those who have the least property rights to such resources and eke out a precarious existence on the margins. 
The temporality of global heating is not that of a sudden catastrophic asteroid strike.It is rather like a rising, warming sea that sends off more and more storm surges and category 5 hurricanes, until all walls and levees are -eventuallyovertopped.Before that endpoint, the disasters will not cancel out but accentuate the different fates of those with buffers and those without, bringing to the fore the deadly consequences of inequalities, much as in the original framework of critical vulnerability studies.Climate disasters, that is to say, will make 'the progression of vulnerability' more ubiquitous.The rich and privileged can already now, at an average warming somewhere between 1 and 1.5 degrees, be afflicted -witness the heatwaves in British Columbia, wildfires in California, floods in western Germany in the summer of 2021.Perfect safety from global heating is available for purchase.Aggregate affluence can hide pockets of acute vulnerability (Eriksen et al. 2020).When storm Ida battered New York City in early September, 11 of the 13 casualties lived in basement apartments, drowned in an instant by inrushing water, most of them immigrants and people of colour: the rule in a fractal worldwide pattern of vulnerability (Holpuch 2021) (Bohle 1994).To take but one example, Egypt is susceptible to harm from global heating, notably sea level rise: but some residents of coastal towns are protected by walls, while others are left in the lurch; some farmers have access to sand, fertilisers, pumps and excavators that allow them to continue farming even as the soil is salinised, while others cannot afford any of this; some entrepreneurs establish fish farms, while artisanal fisherfolk struggle to weather the unpredictable storms, and so on.Most of these differentials can be traced to the specificities of capitalist development in Egypt (Malm 2012;Malm & Esmailian 2012a, 2012b).On the other side of the Atlantic, hurricane Maria in 2017 caused ruin and mass death in Puerto Rico but scratched Texas only lightly: one more occasion for research on climate vulnerability to return to the writings of Wisner and colleagues (Thomas et al. 2018:3-5).They remain indispensable for one half of the problem.What, then, are the merits of a model that adds the other half? 
It should be possible to operationalise a dialectical model of climate disaster on several scales of inquiry.One obvious start would be sites where the compound of the two progressions is exceedingly concrete.The Iranian province of Khuzestan is a case in point.To give only the most cursory illustration of the general argument, it is to this scene of contemporary misery we now turn.(Hein & Sedighi 2016;Zandieh, Hekmat & Maghsoudi 2021).In the year after the coup, the colonial construct Anglo-Iranian Oil Company changed its name to British Petroleum, the oil of Khuzestan thereby becoming the main tributary to the entity still known as BP.But the reckoning with the exploitation of Iran had only been postponed.When it exploded in 1979, all oil and gas reserves and installations were nationalised.They have since remained under the tutelage of the National Iranian Oil Company (NIOC), with comparatively limited presence of foreign capital, US corporations entirely absent as the American empire keeps Iran under an increasingly suffocating sanctions regime.Instead, the riches of Khuzestan have come to serve as the foundation of a national bourgeoise sometimes referred to as 'the millionaire mullahs' (Malm & Esmailian 2007).The province houses 80% of the oil and 60% of the gas reserves controlled by NIOC. The case of Khuzestan This history means that the fossil fuels of Khuzestan have made a non-trivial contribution to global heating.In the pathbreaking research conducted by Richard Heede and his colleagues, the companies that have extracted them rank high among the corporate entities most responsible for increasing the atmospheric concentration of CO 2 : for the period 1880-2010, NIOC is on place six, BP on four; in the last four decades of that period, the two carbon majors swapped position (Ekwurzel et al. 2017:585;cf. Climate Accountability Institute 2020;Heede 2014).By 2015, NIOC -that is, for all practical purposes, the enterprise holding Khuzestan -was the third largest corporation in the world, when measured in the greenhouse gas (GHG) emissions its products generated (Griffin 2017:10).This is the capitalist development -what we have elsewhere called the accumulation of 'primitive fossil capital' (Malm & the Zetkin Collective 2021) -that forms the burning core of global heating. And now Khuzestan is itself feeling the heat.Over the past two decades, the heatwaves have become more frequent, severe and long-lasting; in 2017, the provincial capital of Ahvaz hit 54 degrees Celsius, still, as of this writing, the all-time record for temperatures in Asia (al-Jazeera 2021).Evaporation rates have reached extreme levels and increased irrigation demands.Rain-fed fields have had to be abandoned, and in 2018, authorities went so far as to ban rice cultivation (Dehcheshmeh & Ghaedi 2020:8;Khavarian-Garmsir et al. 
2019:5;Pakmehr, Yazdanpanah & Baradaran 2020:4-5).Wildfires now regularly tear through the vegetation (e.g.MEHR 2018).The dry soils cannot absorb suddenly arriving water masses, and so the heavy rainfalls in Iran in March 2019 -described as a '1-in-100-year event', the usual euphemism for climate disasters off the chartsbrought devastating floods to Khuzestan (National Disaster Management Organization of Iran, United Nations & Presidency Islamic Republic of Iran Plan and Budget Organization 2019).But the new normal is drought, to the extent that one Iranian climatologist has proposed ditching that term for 'drying out' -a permanent desiccation of the province (Karami 2021).And then there are the dust storms.They began in 2001 and have since become steadily harsher, winds sweeping up dust from the drying plains and blanketing towns in a greyish-yellowish film that blots out the sun, brings life to a standstill and sends thousands to hospitals with respiratory problems.In the 2010s, Ahwaz rose to become one of world's most polluted cities, Abadanthe old refinery town -recording 85 days of dust blankets as a new annual average (Khavarian-Garmsir et al. 2019:5;Nada 2021).All of these trends are in line with climate projections for Iran as a whole: it will get even drier and hotter in the decades ahead (e.g.Daneshvar, Ebrahimi & Nejadsoleymani 2019;Hashemi 2015;Rahimi, Malekian & Khalili 2019).Fossil fuels, however, are implicated not only in the production of hazards, but also in vulnerability to the same. Dust storms can pick up material from far afield, including the similarly dried-out lands of southern Iraq (Javadian, Behrangi & Sorooshian 2019).But they would have been more containable and bearable if Khuzestan had retained its once extensive wetlands.One of the largest, Hour al-Azim, had a water depth of up to 10 m and sprouted an archipelago of lush islets until the turn of the millennium.Then NIOC began to bulldoze its way through the lagoon in search for more oil, draining it, crisscrossing it with roads and platforms and depositing toxic waste into it.The company oversaw the definitive destruction of the Hour al-Azim in the mid-2010s, with assistance from Chinese oil companies flouting the US sanctions (Financial Tribune 2016;Madani 2021:238;Tehran Bureau Correspondent 2015;Zohoorian-Pordel et al. 
2017).As a result, an essential geophysical buffer against dust storms -straddling the border with Iraq -was removed and itself turned into a source of dust, enhancing the vulnerability of villages and towns in Khuzestan to the hazard.Such a sequence belongs not to the right side of our dialectical model, but to the left, where, in this case, the dynamic pressure of reckless appropriation of land produces the unsafe conditions of dangerous locations and unprotected buildings.(To make things more (or less) complicated, however, global warming has itself partaken in the drying out of Hour al-Azim.)Gas flaring stacks and other installations in the fossil fuel sector produce heat island effects, further driving up local temperatures (Dehcheshmeh & Ghaedi 2020:12).This added some political dimensions to fossil-fuelled capitalist development in Iran.Khuzestan is home to the country's largest Arab population, somewhere between 2 and 5 million, probably still a majority (the southern part of the province used to be called 'Arabistan').The dominant class ruling from Tehran is overwhelmingly Persian.Its principal material base is the abundance of fossil fuels, withdrawn from Khuzestan to feed metropolitan accumulation, giving rise to an ethno-political contradiction familiar from other oil-producing nations (notably Saudi Arabia and Nigeria): the centre cannot trust the minority to govern the most precious of resources.Strict military control must be upheld.Arabs have long been suspected of insufficient loyalty to the nation, including sympathies with Saddam Hussein (whose invasion of Iraq laid waste to their land) and Saudi Arabia or separatist aspirations (which occasionally do surface).Conversely, the Arabs of the periphery have long resented the seizure of the riches under their feet.'They see the towers of the oil refineries and the flares and all of that money, which is a lot, and it is going out of the province', a United Nations (UN) envoy observed in the summer of 2005, just after Khuzestan had erupted in the 'Ahwazi intifada', in which 130 protestors were killed (Malm & Esmailian 2007:96-7).Some of the most extreme poverty in Iran is found next to those towers. Unsurprisingly, then, the Arabs of Khuzestan harbour grudges against the central state for sacrificing them to inclement weather.During the floods of 2019, rumours spread about water being redirected from oil installations to Arab villages and fields, ruining the latter on purpose.A video of a distraught Arab man telling the governor that 'you won't help us because we are Arab' went viral, sparking small-scale protests with chants such as 'Khuzestan has been washed away and [our] leaders have fallen asleep!' (Centre for Human Rights in Iran 2019; Saidi 2019).In the summer of 2021, the water uprising was triggered by another viral film clip, in which an Arab sheikh in traditional dress accused officials of orchestrating the extreme weather: 'Look, we are not going to leave this land, you brought us floods and drought to make us migrate.We won't leave, this is our ancestral land' (Fassihi 2021). Behind such conspiracy theories is a reality of highly differentiated vulnerability.In Iran, as much as anywhere in the global South, peasants are more sensitive to climatic pressure insofar as they have small farms, rudimentary equipment, few crops to sell, and limited access to credit (Jamshidi et al. 
2019;Savari & Zhoolideh 2021).The adaptive measures proposed for Khuzestan's agriculture -drip irrigation, advanced well pipes, artificial coverings to check evaporation, drought-resistant species, more efficient fertilisers -tend to presuppose precisely the wealth that underdevelopment has denied the peasant masses (Kaabi et al. 2021).'New crops need new technology for cultivation and harvesting that we have no access to', one farmer from the ancient area of Susa recently lamented to a research team (Chenani et al. 2021:6).Clearly, this vulnerability is inversely related to the stream of oil and gas revenues.The same hands that have accumulated capital by producing fossil fuels in Khuzestan have produced vulnerability to the consequences of their combustion through that process.The tightness of this dialectic might only be replicated in some other zones of fossil fuel production -neighbouring Iraq and the Niger Delta come to mind -but it could certainly be observed in looser form, still causally relevant, on plenty of other sites and scales. In Khuzestan, the dialectic has grown so bad that swathes of territory are being depopulated.During the past three decades, more than 1,000 rural settlements have been evacuated primarily because of climatic stress (Dehcheshmeh & Ghaedi 2020:159).Some of the main cities -among them Abadan and Masjed-e Soleyman -are haemorrhaging inhabitants, again primarily because of the heatwaves and the dust storms, global heating having already 'compromised the human habitability of the region' (Khavarian-Garmsir et al. 2019:10).Given that these rustic hamlets were turned into boom towns by oil, one is here tempted to see Abdelrahman Munif's prophecy from 1984 nearing fulfilment: In twenty or thirty years' time we shall discover that oil has been a real tragedy for the Arabs, and these giant cities built in the desert will find no one to live in them and their hundreds of thousands of inhabitants will have to begin again their quest for the unknown (…) As a result we shall again have to face a sense of loss and estrangement, this time in complete poverty.(quoted in Nixon 2011:100-101). 
But all the migrants fanning out across Iran from Khuzestan are not Arabs, nor are they necessarily abysmally poor.The option of migration can be most affordable for the higher educated and better connected.(Construction and other outdoor workers, on the other hand, are the first to lose their livelihoods when cities shut down because of intolerable heat or dust.)Nor are fossil fuels -in the stage of production or consumption -the sole culprits in Iran's water crisis.There is a plethora of contributing factors, ranging from aggressive agribusinesses and excessive dam construction to wasteful consumption practices, indirectly related, at most, to oil and gas (Ashraf, Nazemi & AghaKouchak 2021;Madani, AghaKouchak & Mirchi 2016).On the other hand, some of the geopolitical conflicts that have given their share to the wrecking of Khuzestan -the Iran-Iraq War, the Gulf War, the American sanctions -are rather closely related to the struggle for the black gold (e.g.Madani 2021).Whatever its exact role in these events, one thing is not in doubt: the Iranian branch of fossil capital is not foregoing sources of profit.In November 2019, then-president Hassan Rouhani announced the discovery of a new giant oilfield in Khuzestan, adding one third to the nation's reserves in one stroke -'a small gift by the government to the people of Iran', he called it (Altaher & Robinson 2019).In January 2021, the largest gas refinery in the Middle East went online, slated to produce 56m cubic metres of processed gas per day and some $700m in profit per year, in the eastern corner of the province, in a rural area abutting the Gulf (Tehran Times 2021).And this is unlikely to have been the last acts of business-as-usual in Khuzestan. Shifting revolution to the right The political gist of critical vulnerability studies was never very hard to spot.'Only radical changes in the organisation of production and in access to political power will affect in a large number of direct and indirect ways vulnerability to disaster', wrote Ben Wisner in the journal Disasters in 1979 (p. 305).In Interpretations of Calamity, he and his colleagues proclaimed that 'the only way to reduce vulnerability is to concentrate disaster planning within development planning, and that development planning context must be, broadly speaking, socialist' (Susman et al. 1983:280).The authors of At Risk take notice of a critic who charges them with views that 'simply call for overall social revolution' (Keith Smith quoted in Wisner et al. 2004:27), and they do not bother to refute the claim.Instead, they reiterate that the best recipe for protection against disaster is an assault on the 'root causes' of vulnerability through a 'revolution or major realignment in the balance of class forces' (Wisner et al. 2004:91).All of this would happen on the left side of their model. When critical vulnerability studies were first applied to the problem of climate change, they pointed to a rather sanguine policy recommendation: the right dose of social transformation can bring the problem under control.If 'it is not so much the droughts or floods that are alarming, but people's vulnerability to the consequences associated with them' (Ribot et al. [1996] 2009:133), then eradication of such vulnerability would silence the alarm.The importance of globalised markets and state systems in determining vulnerability, one researcher argued, 'indicates that climate change, while a significant challenge, can be managed with the correct adaptive responses' (Ford et al. 
2010:383;emphasis added).While 'responses' have another inflection than 'revolution', the essential point is the same: make people equal, heal the wounds sustained on the battlefield and the sword will bounce.Today, such a position appears outdated, or, if you will, half-revolutionary. Wisner recently called for the phrase 'disaster risk reduction' to be replaced with the slogan 'resist disaster risk creation' (Wisner 2017:3, emphasis in original).His immediate reference was to megaprojects -including oil infrastructure burying coastal wetlands in the southern US -the main result of which is deepened vulnerability.But that too was a prescription to the left side.In the same vein, the water protests of Khuzestan have voiced bitterness against the Islamic Republic for leaving people to their fate in times of flood and drought.But they have yet to target the platforms and the refineries.Until that happens -until revolutionary struggle breaks out on the right side -we are, it seems, doomed to facing an ever-rising tide of disasters.Why is this turn almost nowhere to be seen?The answers to that question are beyond the scope of this essay.They might fill whole libraries of their own. And that consumption is not, of course, determined by geophysical factors.An extensive body of scholarship has demonstrated that it was initiated and then propagated, accelerated and sustained into this day by capitalist development (e.g.Christophers 2021; Foster, Clark & York 2011; Hanieh 2021; Klein 2014; Malm 2016a, 2016b; Malm & the Zetkin Collective 2021; Wright & Nyberg 2015). . A notion of climate breakdown as a great leveller requires a counter-factual scenario where the left side of the model vanishes, filled instead with exclusively geophysical determinants of vulnerabilitywithout significant inequalities.Such a world does not exist on this planet, and pretending that it does would be to regress to the blindest Cold War-era geophysicalism.
7,568.4
2023-12-26T00:00:00.000
[ "Environmental Science", "Philosophy" ]
Costs of avoiding net negative emissions under a carbon budget The 2 °C and 1.5 °C temperature targets of the Paris Agreement can be interpreted as targets never to be exceeded, or as end-of-century targets. Recent literature proposes to move away from the latter, in favour of avoiding a temperature overshoot and the associated net negative emissions. To inform this discussion, we investigate under which conditions avoiding an overshoot is economically attractive. We show that some form of overshoot is attractive under a wide range of assumptions, even when considering the extra damages due to additional climate change in the optimisation process. For medium assumptions regarding mitigation costs and climate damages, avoiding net negative emissions leads to an increase in total costs until 2100 of 5% to 14%. However, avoiding overshoot only leads to some additional costs when mitigation costs are low, damages are high and when using a low discount rate. Finally, if damages are not fully reversible, avoiding net negative emissions can even become attractive. Under these conditions, avoiding overshoot may be justified, especially when non-monetary risks are considered. At the 21st Conference of the Parties to the United Nations Framework Convention on Climate Change in 2015, 174 countries ratified the Paris Agreement. They agreed to limit global mean temperature change to well below 2 • C and pursue efforts to stay below 1.5 • C above pre-industrial levels. Different interpretations of such temperature targets can be found in the literature, i.e. either a value that can never be exceeded or something that needs to be achieved this century (allowing a temporary overshoot). Given the near-linear relationship between CO 2 emissions and global temperature change, the former translates into a peak carbon budget, i.e. the cumulative net CO 2 emissions until net-zero CO 2 emissions is reached. In contrast, the latter translates into a net carbon budget during the 21st century (in both cases assuming an equivalent reduction of non-CO 2 greenhouse gas emissions). Many of the scenarios developed by integrated assessment models (IAMs) used in the fifth assessment report of the IPCC followed the second approach: first, they exceeded the carbon budget (for a short period), after which the excess emissions were compensated by net negative emissions towards the end of the century [1,2]. In response, there has been a lively debate in the literature about both the risks related to (net) negative emissions and the allowance of overshoot [3,4]. In this context, Rogelj et al [5] proposed to replace the end-of-century budgets with so-called peak budgets. Interestingly, in their proposal, little consideration was given to the related costs and benefits of avoiding net negative emissions. On the one hand, avoiding overshoot avoids the extra damages from climate change incurred throughout the century as a result of exceeding the temperature target. On the other hand, it also leads to less flexibility in the timing of mitigation, leading to higher mitigation costs (up to 80% higher in current IAM literature scenarios [6]). In this paper, we fill this gap by investigating the net effect of these opposite economic impacts of avoiding overshoot. More specifically, we determine under which conditions peak budgets might be an attractive strategy from an economic perspective and under which conditions it would not be. 
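Before turning to those conditions, the near-linear emissions-warming relationship invoked above can be made concrete with a back-of-the-envelope conversion; the TCRE value and warming-to-date figure below are illustrative assumptions, not the paper's calibration.

```python
# Back-of-the-envelope: convert a remaining carbon budget into peak warming via TCRE.
# The TCRE value and the warming-to-date figure are illustrative assumptions, not the
# calibration used in the paper.
TCRE = 0.45            # deg C per 1000 GtCO2 (illustrative mid-range value)
warming_to_date = 1.2  # deg C above pre-industrial (approximate)

def peak_warming(remaining_budget_gtco2):
    return warming_to_date + TCRE * remaining_budget_gtco2 / 1000.0

for budget in (400, 600, 900):
    print(f"remaining budget {budget:4d} GtCO2 -> peak ~{peak_warming(budget):.2f} deg C")
# A budget of roughly 600 GtCO2 lands near the 1.5 deg C level used later in the paper.
```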
The answers to these questions depend on several factors, such as the severity of damages, the discount rate, climate sensitivity, and mitigation costs. We perform a sensitivity analysis covering the literature ranges for each of these factors to investigate the economic effect of the decision not to allow overshoot, therefore providing evidence of the rationality of such a choice based on abatement costs and damage costs. This informs the debate about the (dis)advantages of net negative emissions. It should be noted that this does not account for all factors. Negative emissions could also impact biodiversity and food security [7,8] (depending on the choice of technology and uncertainties regarding efficiency and management; some amount of negative emissions can probably be generated with relatively little impact [3]). An additional novel aspect of our research in the discussion of the role of negative emissions related to carbon budgets is that we take into account partially irreversible damages. Most, if not all, traditional IAMs assume that when temperature decreases, damages decrease accordingly [9]. However, some types of damages, such as disappearing glaciers and species extinction, are irreversible, and, therefore, will remain even when temperature declines. We propose a modelling framework including partially irreversible climate damages in an IAM setting. Economic impact of avoiding net negative emissions We analyse the economic impact of avoiding net negative emissions using a simple and transparent IAM similar to DICE [10] (see section 5). Gross GDP is calculated in this model using a production function based on technological progress (total factor productivity, TFP), capital and population. Both climate mitigation costs and damage costs resulting from climate change impacts are subtracted from the gross GDP. The resulting net GDP is divided in fixed shares between consumption and investments. Therefore, the mitigation and damage costs induce a direct loss of consumption and an indirect effect on economic growth by affecting investments. The model maximises the total discounted per capita utility, which is a concave function of per capita consumption, using pure rate of time preference (PRTP) values spanning the current literature range. The temperature is calculated as a linear function of cumulative emissions using the transient climate response to emissions (TCRE) relation [11]. We calibrated all factors in the model based on the literature (see section 5). For mitigation costs, the mitigation potential as a function of costs is calibrated to the literature range in the IPCC scenario database for AR5 and SR1.5 (underlying a range of mitigation options). In a scenario where net-negative emissions are allowed, the yearly CO2 emissions are limited to −20 GtCO2 yr−1, representing the limits due to biophysical, technical, economic and sustainability constraints. In the literature a wide range of values for the contribution of net negative emissions can be found, ranging from 0 to more than 40 GtCO2 yr−1 [3,12], similar to the literature range for high overshoot scenarios in the IPCC SR1.5 database (5-30 GtCO2 yr−1) [13,14]. Avoiding net-negative emissions sets this limit to 0 GtCO2 yr−1. Unless stated otherwise, the end-of-century carbon budget is set to 600 GtCO2, in line with a 1.5 °C target [13] (median climate temperature estimate).
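To make the structure of such a cost-benefit IAM concrete, the following is a minimal, illustrative Python sketch of a DICE-style loop of the kind described above: exogenous baseline GDP and emissions, a TCRE temperature response, quadratic damages, quadratic abatement costs, and discounted log-utility of consumption. All parameter values (growth rates, damage and abatement coefficients) are placeholders chosen for readability, not the calibrated values used in the paper, and the function names are hypothetical.

```python
import numpy as np

# Minimal DICE-like cost-benefit loop (illustrative parameters, not the paper's calibration).
YEARS = np.arange(2020, 2101, 5)
DT = 5.0

gdp_baseline = 85.0 * 1.02 ** (YEARS - 2020)      # trillion USD/yr, hypothetical ~2%/yr growth
emis_baseline = 40.0 * 0.995 ** (YEARS - 2020)    # GtCO2/yr, slowly declining baseline

TCRE = 0.62e-3        # degC per GtCO2 (median value quoted in the text)
T_2020 = 1.2          # assumed warming already realised by 2020
DAMAGE_COEF = 0.01    # fraction of GDP lost per degC^2 (Howard-like order of magnitude)
ABATE_COEF = 0.05     # fraction of GDP for full abatement of baseline emissions (placeholder)
PRTP = 0.015          # pure rate of time preference

def evaluate(mitigation, emis_floor):
    """Damages, abatement costs and a discounted-utility proxy for a mitigation-fraction path.

    mitigation > 1 corresponds to net negative gross emissions; emis_floor caps how negative
    yearly emissions may become (-20 GtCO2/yr when allowed, 0 when avoided)."""
    emissions = np.maximum(emis_baseline * (1.0 - mitigation), emis_floor)
    cumulative = np.cumsum(emissions * DT)
    temperature = T_2020 + TCRE * cumulative
    damages = DAMAGE_COEF * temperature ** 2 * gdp_baseline
    abatement = ABATE_COEF * np.clip(mitigation, 0.0, None) ** 2 * gdp_baseline
    consumption = gdp_baseline - damages - abatement
    discount = np.exp(-PRTP * (YEARS - 2020))
    return {"welfare": float(np.sum(np.log(consumption) * discount * DT)),
            "cumulative_emissions": float(cumulative[-1]),
            "peak_warming": float(temperature.max())}

# Compare a path that ends with net negative emissions against one that stops at zero.
path = np.linspace(0.1, 1.3, len(YEARS))            # mitigation exceeds 100% late in the century
print("with net negatives:   ", evaluate(path, emis_floor=-20.0))
print("without net negatives:", evaluate(np.minimum(path, 1.0), emis_floor=0.0))
```

In a full model the mitigation path itself would be chosen by an optimiser to maximise the welfare term; here the path is fixed so the sketch stays short.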
Finally, for damage costs, we use a stylised function that can be scaled (using a damage coefficient) to mimic the entire range from the DICE damage function [10] to the long-run damage function from Burke et al [15], with, as the default medium damage estimate, the metamodel damage estimate from Howard et al [9]. For baseline assumptions, we use the SSP2 scenario (covering medium estimates for GDP, population and emission growth, see section 5). The economically optimal emission paths and associated macroeconomic costs of a scenario with and without net negative emissions are shown in figure 1. These results are created using a medium mitigation cost level, medium damage function (i.e. Howard Total, see section 5), medium TCRE and the three PRTPs spanning the current literature range: 0.1% yr−1, as used in the Stern review [16], 1.5% yr−1, as used in DICE-2007 and following versions [17], and 3% yr−1, as used in the original DICE model [18]. In a scenario where net negative emissions are avoided, strong emission reductions need to occur in the first half of the century to stay within the carbon budget (figure 1(a); dotted versus solid lines). In the scenarios that allow for net negative emissions, some mitigation effort is delayed to the second half of the century, reaching net zero around 2075 instead of 2050. While the net negative emissions have a higher marginal cost, the fact that they occur later in combination with discounting makes their use economically attractive. This also means that a lower PRTP significantly reduces the amount of net negative emissions, from 469 GtCO2 with a 3% PRTP to 115 GtCO2 for a 0.1% PRTP (figure 1(b); see also figure 1(a) for the time profile). This corresponds to a temperature overshoot of 0.29 °C and 0.07 °C respectively (similar results were found in previous studies [19]). Avoiding net negative emissions leads to a reduction in damage costs varying from 10% to 34%, caused by a combination of avoiding overshoot and earlier mitigation effort (figure 1(c)). Simultaneously, the mitigation costs increase between 9% for low discount rates and 37% for the highest discount rate assumed, leading to an increase in total costs (sum of damage and mitigation costs) of 5% to 14%. Both damage and mitigation costs are calculated using their net present value (NPV) (2020-2100) with a fixed 4% social discount rate, regardless of the PRTP value (see section 5). In other words, for medium parameter settings for mitigation and damage costs, allowing for some negative emissions is in all cases attractive from an economic perspective (even if damages are accounted for). The level of this preference, however, depends on the discount rate. An important aspect to consider is the timing of mitigation effort and incurred damages. In figure 2, we show the abatement costs and damage costs over time. For medium parameter values, the peak of total costs (abatement plus damage costs) occurs towards the end of the century when allowing net negative emissions (2%, 5% and 8% of GDP for 2030, 2060 and 2090 respectively). When net negative emissions are not allowed, the peak in total costs is much earlier, albeit slightly lower (4%, 6.5% and 4% of GDP for 2030, 2060 and 2090). Once the minimum emission level is attained, the relative mitigation costs decrease due to technological learning and the increasing baseline GDP of SSP2.
The corresponding global carbon prices are shown in supplementary figure 7 (available online at stacks.iop.org/ERL/16/064071/mmedia) and reach a maximum of 800-1000 USD/tCO2 when avoiding net negative emissions and 810-1250 USD/tCO2 when net negative emissions are allowed (as a comparison, the European Trading System carbon prices are around 40 €/tCO2 in 2021). Besides discounting, the assumed level of climate damages plays an important role in determining the economic attractiveness of net negative emissions as well. In figure 2, we perform a sensitivity analysis on the damage function (specifically, the damage coefficient, see section 5). (Figure 2 caption: Timing of abatement costs (light shade) and damage costs (dark shade) for scenarios without (yellow) and with (purple) net negative emissions, as a percentage of GDP. The columns represent three levels of damage functions (low, medium and high), the rows represent three values of the PRTP. The grey bars give the relative change in NPV (2020-2100) of total costs (abatement plus damage costs) when avoiding net negative emissions. The NPVs are calculated using a fixed social discount rate of 4% yr−1. When this change is negative, the economic benefits of allowing net negative emissions only happen after 2100.) We use a low damage function (DICE, giving 2% GDP loss at 3 °C warming), a medium one (Howard Total, 9% GDP loss at 3 °C) and a high damage function (Burke LR, 22% GDP loss at 3 °C). For the low damage function, the extra mitigation effort early in the century when avoiding net negative emissions leads to much higher total costs (19% to 29% increase in NPV of total costs). However, when the damage function is high, the early emission reduction leads to significantly lower damages, making the total cost difference smaller. For the Burke damage function with low PRTP, the total costs are minimal when no net negative emissions are used. For such a high damage function, the economically optimal emission path is to reduce as much as possible at any point in time. Allowing net negative emissions allows for deeper reductions throughout the century, with corresponding higher mitigation costs but lower damages. The effect of these lower damages increases further after 2100. Since in figure 2 we report the NPV from 2020 to 2100, but we optimise discounted utility until 2150, it is possible to obtain higher total costs until 2100 in a scenario with no net negative emissions than in a scenario with negative emissions. The damage function (specifically, the damage coefficient, see section 5) and the mitigation cost level have an equally strong influence on the difference in total costs. We perform a sensitivity analysis on these three factors: PRTP, damage coefficient and mitigation cost level. For each combination of parameter values, we run a scenario with and one without net negative emissions and calculate the increase in total costs between the two (supplementary figure 11). The extra costs, from low mitigation costs to high mitigation costs, range from +0% to +24% (with medium values for the other parameters). For damage cost uncertainty, the extra costs range from +0% to +28% from low damages (DICE) to high damages (Burke), again with all other values medium. Higher mitigation costs always lead to higher additional costs of avoiding negative emissions, as depicted by the differences between the panels in supplementary figure 11.
The impact of damage cost uncertainty is similar to the impact of mitigation cost uncertainty: the higher the damage coefficient, the earlier the mitigation effort occurs to avoid high climate damages later in the century. Early abatement action leads to a decrease in total net negative emissions (supplementary figure 8). In fact, the emission paths, and the associated cost differences between allowing and avoiding net negative emissions, of a scenario with low mitigation costs and a medium damage function are very similar to those of a scenario with medium mitigation costs and high damages. The total costs, relative to GDP, are, of course, significantly higher in the latter scenarios. Interesting interactions between these parameters can be observed. First, the influence of damage cost uncertainty on timing increases with lower mitigation costs, simply because the relative importance of damages in total costs increases. As a result, in the case of low mitigation costs and high damages, avoiding negative emissions hardly leads to additional total costs. The additional costs even become slightly negative, as was already shown for high damages and low discounting in figure 2, which is possible as utility until 2150 instead of total costs until 2100 is optimised. It can also be noted that the impact of higher damage estimates becomes non-linear for the combination of low mitigation cost levels and low PRTP: in that case, the optimal emission path stays significantly below the set carbon budget (see supplementary figure 9). For this set of parameters, a higher damage coefficient leads to more net negative emissions to keep climate-related damages at a minimum. The cost differences become significantly lower when using a less stringent carbon budget. When using a carbon budget reaching 2 °C instead of 1.5 °C, avoiding net negative emissions only leads to extra costs when mitigation costs are high, or damages low (supplementary figure 18). The effect of using the low or high instead of the median value of the TCRE is only significant for high damage coefficients. A high TCRE accentuates the effect of climate impacts, resulting in more negative emissions if allowed in the scenario (supplementary figure 8). Partially irreversible climate damages We have shown that if climate damages are reversible, it is in most cases economically optimal to allow some net negative emissions (and thus exceed the peak budget). However, not all damages are likely to be fully reversible. While climate impacts like reduced yields, health impacts and extra energy consumption for air conditioning are likely to be reversible, disappearing glaciers, species extinction and biodiversity loss are clearly irreversible processes. For other impacts, reversibility is more uncertain: while sea-level rise could be considered an irreversible process due to ice melting, the slow timescale at which it occurs also makes it relatively insensitive to a limited period of temperature overshoot. Here, we investigate the consequences of assuming that a share of the damages is irreversible. The implementation details are discussed in SI 1.1. However, to properly assess the impact of (ir)reversibility of climate impacts, the carbon budget constraint must be changed. The reason is that the damages (and thus the optimal pathways) no longer depend on the cumulative net emissions. In fact, enforcing a carbon budget goal could be so restrictive that the model shows negative emissions even without any reversibility, which does not make any economic sense.
We therefore translate the carbon budget to a maximum damage target for 2100 (see section 5). Such a maximum damage target inevitably depends on the assumed damage function. The 600 GtCO 2 carbon budget translates to maximum damage costs in 2100 of respectively 0.25%, 1% and 2.7% of GDP for the DICE, Howard Total and Burke (LR) damage functions. Figure 3(a) shows that the amount of economically optimal net negative emissions is strongly dependent on the percentage of irreversible damages. For low discounting, net negative emissions are almost entirely unattractive when 30% of damages are irreversible. This happens at around 70% of irreversibility for medium discounting-but the use of net negative emissions is already lower by a factor of 2 if 50% of damages are irreversible. For high discounting, the irreversibility of damages only becomes significant beyond a share of 50%. As a consequence of the irreversibility of damages, net negative emissions need to be compensated by extra mitigation effort to reach the maximum damage target ( figure 3(b)). When damages are (almost) fully reversible, the cumulative emissions are close to the original carbon budget from which the damage target was derived, even when using a high amount of net negative emissions (left part of figure 3(b)). When damages are partially irreversible, it becomes economically attractive to have some overshoot (155 GtCO 2 for medium assumptions), even at the cost of extra mitigation effort (85 GtCO 2 for medium assumptions, middle part of figure 3(b)). When damages are even more irreversible, net negative emissions become less attractive, leading again to cumulative emissions close to the original carbon budget (right part of figure 3(b)). An exception for this is the combination of high damage function and low discounting (dotted blue line in figure 3(b)): the damage target constraint is not economically optimal anymore, resulting in lower cumulative emissions than prescribed by the maximum damage target. Time evolution of GDP Avoiding an emission overshoot requires earlier mitigation effort, leading to increased total discounted costs ( figure 1(c)). This influences the GDP growth path. As shown in supplementary figure 6, the mitigation costs in 2030 are twice as high when avoiding the overshoot, while the damages are still the same with and without net negative emissions. By 2070, the total costs (mitigation and damage costs) reach the same level in both scenarios. At the end of the century, the absolute GDP level of the scenario avoiding overshoot is significantly higher since the mitigation costs for the negative emissions start to increase after 2070. However, since we optimise on cumulative discounted utility and not on final GDP, the overshoot scenario is still economically favourable. Moreover, since the net negative emission costs are assumed to be phased out after 2100 to keep the same carbon budget, the GDP paths of both scenarios will gradually converge. Non-monetary aspects In this paper, we only consider the macroeconomic effects of different emission paths: the increased monetary cost of climate policy (abatement costs) and the reduction of climate change damage due to earlier abatement effort. 
However, as mentioned in the introduction, this does not include the extra pressure on ecosystems and biodiversity due to the increased use of land-use related negative emission options such as BECCS and afforestation [3,7,8], the massive logistical and political bottlenecks associated with upscaling negative emission technology [20,21], or the risks of non-performance at any point in the future. While it seems that some amount of negative emissions can be achieved without too many negative side effects [3] (or that some technologies, like afforestation and soil carbon management, could even have some co-benefits), these other negative consequences should still be weighed against the economic results presented in this paper. Reversibility of climate damages We have shown that the amount of net negative emissions is strongly dependent on the extent to which climate damages are reversible. However, 'reversibility' in climate change is a broad concept. In the literature on reversibility and climate change, three distinct effects are described, mostly independently of each other. First, climate reversibility, describing how temperature behaves under decreasing concentrations of atmospheric CO2. Second, impact reversibility, which analyses how, and if, climate damages decrease when temperature decreases. Third, economic persistence, which treats the long-term economic effects of a shock due to climate change. Regarding the first topic of climate reversibility, our model assumes that temperature is directly proportional to cumulative emissions. Previous research has shown [22][23][24] that the assumption of a fixed temperature/concentration relation might not fully hold: under decreasing atmospheric CO2 concentrations, temperature decreases at a slower rate than when concentrations are rising. The impact is relatively small for a relatively small overshoot, and the discrepancy with our modelling method, which focuses on the reversibility of damages, is expected to be small. The second concept, impact reversibility, is what we consider in this paper as irreversible climate damages. As already described, due to irreversible processes in biodiversity loss, melting glaciers and socio-economic tipping points, not all damages will decrease when temperature decreases. The third concept is economic persistence. Empirical economic research has shown that climate change does not only induce direct monetary losses (like destroyed real estate after a flood) but also impacts economic growth [15,25,26]. The latter has a much longer-term effect. This paper considers this indirectly by using the Burke et al [15] damage function at the high end of our sensitivity range on climate damages. While we have translated the growth effects of Burke et al to a direct temperature-GDP loss relation (therefore not affecting the growth rate), the underlying calibration still uses growth impacts (see section 5). While the second and third concepts (impact reversibility and economic persistence) might be related, the exact relationship is still unclear. In fact, economic persistence also happens when temperatures are increasing, whereas impact reversibility is only relevant for decreasing temperatures. Comparison to other literature The increased mitigation costs when avoiding net negative emissions have already been assessed by Hilaire et al [6]. They analysed recent IAM mitigation scenarios reaching 1.5 °C and 2 °C with varying levels of negative emissions.
For the 1.5 °C target, mitigation costs go from 2.26% of GDP for unconstrained BECCS to 4.1% of GDP with limited BECCS (both cost values are NPV 2010-2100, 5% yr−1), an increase of over 80%. In this study, we find an increase in mitigation costs of 9% to 37% for medium parameter values. This large discrepancy comes from two reasons. First, we calculate the NPV using a smaller discount rate of 4% instead of 5%, giving more weight to future generations (if we used 5%, the cost increase would be up to 53% for medium values). Second, and most importantly, we take damages into account when calculating the economically optimal emission trajectory, whereas most traditional IAMs under a carbon budget calculate the cost-effective path, ignoring climate damages. Conclusions and implications Our results suggest that, economically, some form of overshoot is attractive, even when considering the extra damages in the optimisation process. The choice to avoid negative emissions, and thereby interpret the Paris Agreement target as a 'no overshoot' target, will lead to a sum of abatement costs and damage costs that is around 13% higher than without the restriction when using a PRTP of 1.5% and the medium damage function. Still, the cost differences are much smaller if mitigation costs are assumed to be relatively small (compared to the literature median), damages high, or when a low discount rate is used. Moreover, assuming that climate damages are not fully reversible significantly reduces the attractiveness of net negative emissions. Assuming that 50% of damages are irreversible leads to 50% lower total net negative emissions, since extra mitigation effort is required to reach the same maximum damage target when using net negative emissions. Under a wide range of assumptions on damages, mitigation costs, time preference and reversibility of damages, we find that the attractiveness of negative emissions is much lower than often shown in scenarios based on optimisation of mitigation costs only. Methods In this paper, we use a simple and transparent IAM described in detail in the SI. The model is similar to DICE [10]. Gross GDP is calculated using a production function based on technological progress (TFP), capital and population. The mitigation costs and the damage costs resulting from climate change impacts are subtracted from the gross GDP. The net GDP is divided into a fixed share (21%) for investments, with the rest going to consumption. The model maximises the total discounted per capita utility, which is a concave increasing function of per capita consumption. Greenhouse gas emissions are calculated by multiplying economic activity with an emission factor. At each timestep, the emissions are added to the cumulative emissions. The cumulative CO2 emissions cause a change in global mean temperature, modelled through the instantaneous and linear TCRE relation [11]. This relation includes a linear relation between non-CO2 and CO2 emissions. The global mean temperature, in turn, determines the damage costs. In response, the model can choose to mitigate emissions. The mitigation level (or equivalently the carbon price) over time is determined by maximising the NPV of utility. The mitigation costs are subtracted from investments and consumption. Calibration The parameters are as much as possible calibrated against the existing literature. Population, baseline emission intensity and TFP are exogenous and calibrated to match the growth rates of the shared socioeconomic pathways (SSPs) [27].
We use the SSP2 ('Middle of the Road') scenario, which has medium assumptions about population growth, emissions, GDP, technological growth and lifestyle. For details, see Riahi et al [27], and for the exact implementation in our model see SI 1.2. Emission reductions are quantified through a quadratic marginal abatement cost (MAC) curve. The area under the MAC gives the mitigation costs. The resulting mitigation costs are calibrated using the consumption loss range of the 5th Assessment Report of the IPCC [1]. To consider the wide range in mitigation costs, we perform a quantile regression on the AR5 data points for the 5th, 50th and 95th percentiles to represent the low, medium and high end of the mitigation cost range. The 5th percentile leads to mitigation costs 2.5 times smaller than the median costs, the 95th percentile 2.5 times larger. The damage function is defined as a quadratic function of global mean temperature T: D(T) = c T², where D(T) is the fraction of GDP loss due to climate impacts. The damage coefficient c is calibrated to capture the full literature range. At the low end, we choose the DICE-2013R damage function [10] with c = 0.00267. The medium estimate is based on the results from a meta-analysis of literature damage functions by Howard et al [9], with a damage coefficient of c = 0.01004. The high end of the range is parametrised by the long-run empirical damage from Burke, Hsiang and Miguel [15]. While their damage estimates are quantified as impacts on growth rates and not directly on GDP, we use the iterative strategy from recent literature [28] to create a damage function usable by IAMs like our model. The idea of this method is to calculate which direct GDP losses would result in the same GDP path as when Burke's growth impacts are used. Iteratively, a damage curve (as a function of temperature change) is created giving the same damages as the growth-impact definition [29]. A quadratic function is then fitted to the resulting approximation (R² = 0.99), leading to c = 0.02835, about ten times higher than the DICE damage function. The utility discount rate, called throughout this paper the PRTP, is chosen to be 0.1% yr−1, as used in the Stern review [16], 1.5% yr−1 and 3% yr−1, as used in DICE-1999, DICE-2007 and following versions [17,30]. The elasticity of marginal utility is 1.001. The combination of PRTP and elasticity of marginal utility is in line with the expert elicitation by Drupp et al [31]. The minimum yearly emission level in the scenarios without net negative emissions is, by definition, set at 0 GtCO2. The potential for net negative emissions is limited by biophysical, technical, economic and sustainability constraints. In the literature a wide range of values for the contribution of net negative emissions can be found, ranging from 0 to more than 40 GtCO2 yr−1. For instance, Fuss et al [3] estimated a maximum sustainable supply of about 5 GtCO2 yr−1 for individual CDR options in 2050, but the combination of these options could be higher, while Hanssen et al [12] showed a maximum potential of 40 GtCO2 yr−1 in 2100. The literature range for 1.5 °C scenarios in the IPCC Special Report on 1.5 °C is around 5 GtCO2 to 30 GtCO2 yr−1 for overshoot scenarios. Here, we limit the contribution of net negative emissions to a maximum of 20 GtCO2 yr−1.
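The quadratic damage function and the quadratic MAC curve just described can be illustrated with a few lines of code. The damage coefficients below are the ones quoted in the text; the MAC coefficient is a placeholder rather than the AR5-calibrated value, and the GDP losses printed at 3 °C will only approximately match the rounded figures quoted earlier.

```python
def damage_fraction(T, c):
    """Quadratic damage function D(T) = c * T**2: fraction of GDP lost at warming T (degC)."""
    return c * T ** 2

def mitigation_cost_fraction(a, k=0.06):
    """Mitigation cost as the area under a quadratic MAC curve MAC(x) = k * x**2,
    integrated over the abated fraction a of baseline emissions (k is a placeholder)."""
    return k * a ** 3 / 3

damage_coefficients = {
    "DICE-2013R (low)": 0.00267,
    "Howard Total (medium)": 0.01004,
    "Burke LR (high)": 0.02835,
}
for name, c in damage_coefficients.items():
    print(f"{name}: {100 * damage_fraction(3.0, c):.1f}% of GDP lost at 3 degC")
print(f"Illustrative mitigation cost at 80% abatement: {100 * mitigation_cost_fraction(0.8):.2f}% of GDP")
```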
Moreover, to account for technological and political inertia, we assume that the emissions cannot be mitigated faster than 2.2 GtCO2 yr−1 (based on the maximum reduction speed in the IPCC 1.5 °C database [14]) for each scenario. Finally, from the year 2100 onwards, the cumulative emissions from 2020 cannot exceed a carbon budget. Unless stated otherwise, the carbon budget is set to 600 GtCO2, in line with a 1.5 °C target [13]. Finally, the TCRE determines the increase in global mean temperature per unit of extra CO2 emissions [11]. Using the method from van Vuuren (2020) [32], the TCRE used here is calibrated to key results from Working Group I of the IPCC AR5 report [33]. In this paper, three values are considered, corresponding to the 5th, 50th and 95th percentiles of the uncertainty range. Unless mentioned differently, we use the median value for the TCRE, equal to 0.62 °C per 1000 GtCO2. The percentage of climate damages which is irreversible has, to the best of our knowledge, not been fully estimated in the current literature. While several studies have shown that impacts like decreased precipitation [34] and sea level rise [35,36] can continue to increase after atmospheric CO2 concentrations have stabilised, there is notably less literature quantifying how these impacts behave when emissions become net negative. For this reason, we cover the full range from 0% (fully reversible) to 100% (fully irreversible), even though neither of these extremes is realistic. Cost comparison The abatement and damage costs in this paper are presented as NPV relative to baseline GDP: relative costs = NPV(abatement costs) / NPV(baseline GDP), and similarly for the damage costs, where the NPV is calculated as the discounted sum until timestep T: NPV(x) = Σ_{t=0}^{T} e^{−rt} x(t). A fixed social discount rate of 4% yr−1 is used, in line with our medium PRTP value and elasticity of marginal utility (see SI 1.2). In order to compare the macroeconomic costs of a scenario with and without net negative emissions, the ratio of their NPV GDP losses is calculated: cost diff. = (relative costs with net negative emissions) / (relative costs without) − 1.
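A direct, hedged translation of these cost-comparison formulas into code is shown below. The discount rate is the 4% yr−1 mentioned in the text, while the GDP path and the two cost paths are made-up placeholders used only to exercise the formulas; the sign convention of the final ratio simply follows the formula as written above.

```python
import numpy as np

def npv(series, years, rate=0.04, base_year=2020):
    """Net present value as a discounted sum: sum over t of exp(-r*t) * x(t)."""
    t = np.asarray(years) - base_year
    return float(np.sum(np.exp(-rate * t) * np.asarray(series)))

years = np.arange(2020, 2101, 5)
baseline_gdp = 85.0 * 1.02 ** (years - 2020)                      # hypothetical baseline GDP path

# Hypothetical total cost paths (abatement plus damages), for illustration only:
# back-loaded when net negatives are allowed, front-loaded when they are avoided.
costs_with_netneg = baseline_gdp * np.linspace(0.010, 0.040, len(years))
costs_without_netneg = baseline_gdp * np.linspace(0.030, 0.020, len(years))

rel_with = npv(costs_with_netneg, years) / npv(baseline_gdp, years)
rel_without = npv(costs_without_netneg, years) / npv(baseline_gdp, years)

cost_diff = rel_with / rel_without - 1.0   # ratio of relative NPV losses, minus one
print(f"relative costs with net negatives:    {rel_with:.4f}")
print(f"relative costs without net negatives: {rel_without:.4f}")
print(f"cost diff.: {100 * cost_diff:.1f}%")
```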
7,362.6
2021-06-01T00:00:00.000
[ "Economics" ]
Non-Identifiability of Simultaneous Spatial Autoregressive Model and Singularity of Fisher Information Matrix Yuuki Rikimaru1,2 & Ritei Shibata3 1 School of International Liberal Studies, Waseda University, Tokyo, Japan 2 School of Fundamental Science and Technology, Keio University, Kanagawa, Japan 3 Department of Mathematics, Keio University, Kanagawa, Japan Correspondence: Yuuki Rikimaru, School of International Liberal Studies, Waseda University, 1-6-1 Nishi-Waseda, Shinjuku-ku, Tokyo, Japan. E-mail:<EMAIL_ADDRESS>Introduction A simultaneous spatial autoregressive model for a weakly stationary random field {X_v; v ∈ Z^n} with mean 0 and autocovariance function γ_h = E(X_v X_{v+h}), h ∈ Z^n, is the model which satisfies the equation where {ε_v; v ∈ Z^n} is a set of uncorrelated random variables with mean 0 and variance σ², σ > 0. Here the operator is an n-dimensional transfer function with real coefficients β_k, k ∈ K, where K is a finite set of points k = (k_1, k_2, ..., k_n) on Z^n containing 0, and β_0 = 1. We denote the number of elements of K by m, so that the number of regression parameters is m − 1. The operators T_j, j = 1, ..., n, are shift operators such that T_j X_{(v_1, ..., v_j, ..., v_n)} = X_{(v_1, ..., v_j+1, ..., v_n)}. We assume the following for the weak stationarity of the simultaneous spatial autoregressive model throughout this paper. Non-identifiability of Simultaneous Spatial Autoregressive Model We first note that any polynomial P(z_1, ..., z_n) is decomposable into a product of prime factors h_k(z_1, ..., z_n), k = 1, ..., p. Therefore, there exist 2^p choices in selecting h_k(z_1, ..., z_n) or h_k(z_1^{-1}, ..., z_n^{-1}) for k = 1, ..., p to have a transfer function P(z_1, ..., z_n) which leads us to the spectral density. There is also freedom to add a factor of the form c z_1^{ℓ_1} z_2^{ℓ_2} ··· z_n^{ℓ_n} to the transfer function P(z_1, ..., z_n) for any constant c and integers ℓ_1, ℓ_2, ..., ℓ_n, since the constant c can be absorbed into the parameter σ². Example 1. Let us consider a simple one-dimensional autoregressive model (3). Then there exist 2² = 4 different choices of transfer function for the spectral density, where z = e^{iω} and α_1, α_2 ∈ C are the roots of the polynomial P(z) = z + β_1 z² + β_{−1}. In fact, there exist the following four different transfer functions for the spectral density (4). It is easy to show that each transfer function has real coefficients, providing us with a model (3) with different coefficients. The variance parameter σ² varies from transfer function to transfer function: σ² = σ_0²/|α_1 + α_2|² for P_1(z) and P_2(z), and σ² = σ_0²/|1 + α_1 ᾱ_2|² for P_3(z) and P_4(z). It is easy to see that P_1(z) and P_2(z) become identical if and only if α_1 and α_2 are real and α_1 α_2 = 1, and that P_3(z) and P_4(z) become identical if and only if α_1 and α_2 are real and α_1 = α_2. By noting Assumption 1, we see that such conditions are summarized as β_1 = β_{−1} with β_1² < 1/4, that is, a time-reversible simultaneous spatial autoregressive model. However, this does not mean a unique transfer function for the spectral density of the time-reversible model. The conditions α_1 = α_2 and α_1 α_2 = 1 are not compatible because of Assumption 1. Only two of the four transfer functions become identical and the two others are not time reversible. We now see that there is no unique model for the given spectral density (4).
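The non-identifiability illustrated in Example 1 can be checked numerically: starting from one admissible set of coefficients, replacing a root of the associated polynomial by its reciprocal and renormalising yields a second set of real coefficients whose spectral density differs only by a constant factor, which can be absorbed into σ². The sketch below performs this check for illustrative values of β1 and β−1; it is an informal verification, not code from the original paper.

```python
import numpy as np

# Spectral density of the 1-D model X_v + b1*X_{v+1} + bm1*X_{v-1} = eps_v is
# f(w) = sigma^2 / (2*pi*|A(e^{iw})|^2) with transfer function A(z) = 1 + b1*z + bm1/z.
def abs_A_sq(omega, b1, bm1):
    z = np.exp(1j * omega)
    return np.abs(1.0 + b1 * z + bm1 / z) ** 2

b1, bm1 = 0.3, 0.1                           # illustrative coefficients (not time reversible)
roots = np.roots([b1, 1.0, bm1])              # roots of z*A(z) = b1*z^2 + z + bm1

# Alternative transfer function: replace one root by its reciprocal, then renormalise
# so that the coefficient of X_v is again 1 (beta_0 = 1).
alt_roots = np.array([1.0 / roots[0], roots[1]])
s, p = alt_roots.sum(), alt_roots.prod()
b1_alt, bm1_alt = float(np.real(-1.0 / s)), float(np.real(-p / s))

omega = np.linspace(-np.pi, np.pi, 2001)
ratio = abs_A_sq(omega, b1, bm1) / abs_A_sq(omega, b1_alt, bm1_alt)
print("alternative coefficients:", round(b1_alt, 4), round(bm1_alt, 4))
print("|A|^2 ratio is constant in frequency:", np.allclose(ratio, ratio[0]))
print("implied rescaling factor for sigma^2:", round(1.0 / ratio[0], 4))
```

The constant ratio confirms that the two coefficient sets, with suitably rescaled σ², produce exactly the same spectral density.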
Maximum Likelihood Estimate It is well known that the exact likelihood of the simultaneous spatial autoregressive model has no closed form in terms of the parameters even if Gaussianity is assumed. Historically, a lot of approximations of the log-likelihood have been proposed. One such approximation is that based on a modified periodogram, proposed by Guyon (1982). However, the estimation procedure is not only expensive in computation but also inaccurate because it requires multiple integration of the spectral density for each parameter value. In this respect, the approximation recently proposed by Rikimaru & Shibata (2016) is stronger and more straightforward, and is closed in the time domain. They also proved that the parameter estimate which maximises the approximation L_A in the following is asymptotically efficient. Let us assume that the observations {x_v, v ∈ N} are on a rectangular lattice and that the observations are arranged to make a vector x in lexicographic order. By combining the (m − 1)-dimensional regression parameter vector β, whose elements are arranged in lexicographic order of k ≠ 0 in K, with σ, we have the whole parameter vector θ. An approximation of the log-likelihood of θ proposed by Rikimaru & Shibata (2016) is then L_A, where the symbol ⊗ is the Kronecker product and α_{n_j} = 1 + 1/n_j, j = 1, ..., n, are shrinkage factors to retain √N consistency. The matrix W_n is an n × n circulant matrix such that the off-diagonal elements (W_n)_{j,j+1} are all 1 for j = 1, ..., n − 1, (W_n)_{n,1} = 1, and the other elements are all 0. The asymptotic efficiency proved is that the covariance matrix of the parameter estimate converges to the lower bound given by the inverse of the Fisher information matrix I(θ), whose elements are given by (5) (Whittle, 1954; Guyon, 1982; Robinson & Vidal Sanz, 2006), provided that I(θ) is non-singular, which is a key assumption for the proof. In ordinary statistical theory it is rather unusual for the Fisher information matrix to be singular, but it often happens in the case of the simultaneous spatial autoregressive model. Before investigating when and why it happens, we will see other problems caused by non-identifiability of the model in maximum likelihood estimation through the following example. This suggests that the Gaussian likelihood function has the same value for such four sets of parameter values, since they share the same covariance structure. Therefore the likelihood function always has four maximum points in the parameter space unless some of the four transfer functions are identical. In fact, the following result of a numerical experiment demonstrates this. In the experiment, N = 1000 random numbers are generated for {X_v} by using the transfer function P_1(z). Then, the following four maximum likelihood estimates are obtained by maximising L_A in this experiment. Therefore, although the maximum likelihood estimate is consistent and asymptotically efficient as is proved, there is no globally unique solution. This implies that we always have several different estimates of the parameters, which may depend on the initial values of the parameters for the optimisation algorithm. There would be no good way to avoid such a problem in practice, because the problem is not over-parametrisation but non-identifiability of the transfer function for a given spectral density or covariances. The only possible remedy would be to restrict our attention to a specific region of the parameter space, which is meaningful for the underlying problem and effective for restricting the transfer function to a unique one. We
might have to search for all possible solutions anyway since it would not be so easy to restrict the region beforehand. Singularity of the Fisher Information Matrix I(θ) We have seen that several different parameters, θ_1, θ_2, ..., θ_{2^p}, are mapped from a given simultaneous spatial autoregressive spectral density. The problem of the simultaneous spatial autoregressive model lies not only in such non-identifiability but also in the singularity of the Fisher information matrix, which is closely related to the non-identifiability. We will concentrate our attention on the singularity of the Fisher information matrix I(θ) in (5). The following theorem states that the Fisher information matrix becomes singular if some of the parameters are duplicated. Theorem 1. The Fisher information matrix I(θ_0) becomes singular when some of the parameters are duplicated for the spectral density identified by θ_0. The following example illustrates what happens if the Fisher information matrix is singular. This becomes clear if we note that the Hessian matrix of the log-likelihood (6) is likely to be singular if this happens. Example 3. Let us consider the same model as in Example 1. As already seen, if β_1 = β_{−1} and β_1² < 1/4, then the transfer functions P_1 and P_2, or P_3 and P_4, are identical and the Fisher information matrix becomes singular. From the maximum likelihood equation, the estimate of β_1 + β_{−1} is asymptotically normally distributed, so that we can only estimate β_1 + β_{−1} and σ but not the individual β_1 or β_{−1}. Conditions for Non-Singularity of I(θ) As is seen from Example 3, singularity of the Fisher information matrix I(θ) causes a more serious problem, the non-estimability of individual parameters. It is worth investigating what kind of condition is necessary for the singularity of I(θ), because the Fisher information matrix is a complicated function of the parameters and it is not feasible to check it as it is. We first derive a simple necessary and sufficient condition directly from the quadratic form of I(θ). Clearly, a necessary and sufficient condition for the non-singularity is that the vector y = (y_1, y_2, ..., y_m) is zero whenever the quadratic form vanishes. Theorem 2. A necessary and sufficient condition for I(θ) to be non-singular is that ∂f/∂θ_j, j = 1, 2, ..., m, are linearly independent. Corollary 1. A sufficient condition for the non-singularity of I(θ) is that ∂γ_k/∂θ_j, j = 1, 2, ..., m, are linearly independent for a k. Proof. We see that, following Rikimaru & Shibata (2016), by noting that ∂Σ^{−1}/∂θ_p = −Σ^{−1}(∂Σ/∂θ_p)Σ^{−1}. Since the eigenvalues of Σ^{−1} are bounded away from 0, we have the required trace bound. It is enough to note that at most N elements of the matrix Σ_{j=1}^{m} y_j ∂Σ/∂θ_j are involved, and that the same argument applies on the domain D. Example 4. Consider a 2-dimensional model and its spectral density. There exist only two transfer functions, P(z_1, z_2) and P(z_1^{−1}, z_2^{−1}), for this spectral density. This is because P(z_1, z_2) is a prime factor. In fact, there exist no polynomials Q_1(z_1, z_2) and Q_2(z_1, z_2) of at most order 2 with respect to z_1 and z_2 such that P(z_1, z_2) = Q_1(z_1, z_2) Q_2(z_1, z_2). Therefore, P(z_1, z_2) is not decomposable into a product of transfer functions which are compatible with the underlying model. It is clear that P(z_1, z_2) and P(z_1^{−1}, z_2^{−1}) are identical if and only if β_{10} = β_{−10} and β_{01} = β_{0−1}. The singularity of I(θ) follows from Theorem 1 as well as from Corollary 2 in this case.
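Whether the information matrix really degenerates in the time-reversible case of Example 3 can be checked numerically from a Whittle-type expression, by integrating products of the derivatives of the log spectral density over frequency. The sketch below (with numerical derivatives and an unimportant overall constant) is an illustration under these assumptions, not code from the paper; it shows the smallest eigenvalue collapsing when β1 = β−1, with null direction proportional to (1, −1, 0), so that only β1 + β−1 and σ remain estimable.

```python
import numpy as np

def log_f(theta, omega):
    """Log spectral density of X_v + b1*X_{v+1} + bm1*X_{v-1} = eps_v (constants dropped)."""
    b1, bm1, sigma = theta
    z = np.exp(1j * omega)
    return 2.0 * np.log(sigma) - np.log(np.abs(1.0 + b1 * z + bm1 / z) ** 2)

def whittle_information(theta, n_grid=4000, eps=1e-6):
    """Whittle-type information matrix: I_jk proportional to the frequency integral of
    (d log f / d theta_j) * (d log f / d theta_k), computed with central differences."""
    omega = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    grads = []
    for j in range(3):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[j] += eps
        tm[j] -= eps
        grads.append((log_f(tp, omega) - log_f(tm, omega)) / (2.0 * eps))
    G = np.stack(grads)                                   # 3 x n_grid matrix of derivatives
    return (G @ G.T) * (2.0 * np.pi / n_grid) / (4.0 * np.pi)

for theta in [(0.3, 0.1, 1.0), (0.2, 0.2, 1.0)]:          # generic case vs time-reversible case
    eigvals, eigvecs = np.linalg.eigh(whittle_information(theta))
    print(theta, "eigenvalues:", np.round(eigvals, 6),
          "direction of smallest eigenvalue:", np.round(eigvecs[:, 0], 3))
```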
A practical procedure to check whether the Fisher information matrix is singular would be through the matrix B. We should choose either ℓ_j or −ℓ_j in L since β_{ℓ_j} = β_{−ℓ_j}. Let k_i, i = 1, ..., m, be the indices arranged in lexicographic order in K, and set β_k = 0 for k ∉ K as a convention. Note that it is always true that L > m − 1. Theorem 3. A necessary and sufficient condition for the non-singularity of I(θ) is that B is of full rank. Proof. We may restrict our attention to the non-singularity of the first (m − 1) × (m − 1) submatrix of I(θ), since the off-diagonal elements of the last row and column are all 0 and the diagonal element is (2/σ)². By setting y_m = 0 in (8) and introducing the corresponding notation, a necessary and sufficient condition for the non-singularity of I(θ) is now that the vanishing of the sum implies y_j = 0, j = 1, 2, ..., m − 1. This completes the proof. Example 5. The matrix B for the model in Example 1 is derived accordingly; its determinant is zero if and only if β_1 = β_{−1}, since 1 + β_1 + β_{−1} ≠ 0 and 1 − β_1 − β_{−1} ≠ 0 from Assumption 1. Thus, we see that the condition β_1 = β_{−1} with β_1² < 1/4 is not only a necessary and sufficient condition for some of the transfer functions being identical, but also for the singularity of the Fisher information matrix in this example. Unilateral Simultaneous Spatial Autoregressive Model It is taken for granted that a unilateral simultaneous spatial autoregressive model, including the AR model in time series, is always identifiable and that the Fisher information matrix I(θ) is non-singular. However, it is worth proving this in the framework of the simultaneous spatial autoregressive model. It then becomes clearer that the problems we have discussed are due to the lack of unilaterality of the general simultaneous spatial autoregressive model. Theorem 4. A unilateral simultaneous spatial autoregressive model is always identifiable and the Fisher information matrix I(θ) is always non-singular. Proof. It is only possible to choose h_k(z_1, z_2, ..., z_n), k = 1, 2, ..., p, to obtain a transfer function P(z_1, z_2, ..., z_n) for the spectral density (2). Any other choice contradicts the unilaterality of the model. Therefore a unilateral simultaneous spatial autoregressive model is always unique for a given spectral density. On the other hand, the quadratic form (8) is then rewritten so that it is zero if and only if Y(z_1, ..., z_n) = 0 and y_m = 0. This proves the non-singularity of I(θ). Concluding Remarks We have shown that the simultaneous spatial autoregressive model is non-identifiable from the covariance structure or the spectral density. Several different sets of regression parameters, with different standard deviations of the disturbance, are mapped from a single spectral density. Therefore, we have to be careful about estimation of the parameters based on second moments, for example estimation by the Gaussian maximum likelihood principle. There could be many other estimates even if an estimate has been obtained by giving an initial value to an optimisation algorithm. A practical procedure would be to find all the estimates and pick the one which is most meaningful for the underlying phenomena. This non-identifiability of the model has already been mentioned in the context of the two-sided moving average model (Rosenblatt, 1980). A cure he proposed is to employ the bispectrum, which can be applied to this model too. But we leave it for future investigation, together with an investigation of the type of parameter mapping from the spectral density.
Another problem we have investigated in this paper is the possible singularity of the Fisher information matrix, in which case not all parameters are estimable. Theorem 1 demonstrates that this happens when some of the parameters mapped from a spectral density are duplicated. Non-identifiability of the simultaneous spatial autoregressive model leads not only to multiple estimates of the parameters but also to non-estimable parameters. We need to check for such singularity before estimation. Otherwise, we may face non-convergence of the optimisation algorithm or instability of the estimate. The several types of conditions given in Section 4 would be useful for this check. There are many open problems left, for example, the converse of Theorem 1, or any other type of necessary and sufficient condition for non-singularity than that given in Theorem 3.
3,756.2
2017-06-15T00:00:00.000
[ "Mathematics" ]
Initialization of a spin qubit in a site-controlled nanowire quantum dot A fault-tolerant quantum repeater or quantum computer using solid-state spin-based quantum bits will likely require a physical implementation with many spins arranged in a grid. Self-assembled quantum dots (QDs) have been established as attractive candidates for building spin-based quantum information processing devices, but such QDs are randomly positioned, which makes them unsuitable for constructing large-scale processors. Recent efforts have shown that QDs embedded in nanowires can be deterministically positioned in regular arrays, can store single charges, and have excellent optical properties, but so far there have been no demonstrations of spin qubit operations using nanowire QDs. Here we demonstrate optical pumping of individual spins trapped in site-controlled nanowire QDs, resulting in high-fidelity spin-qubit initialization. This represents the next step towards establishing spins in nanowire QDs as quantum memories suitable for use in a large-scale, fault-tolerant quantum computer or repeater based on all-optical control of the spin qubits. Introduction The development of site-controlled quantum dots (QDs), and demonstration of their suitability for hosting spin-based qubits, is a key objective in the roadmap towards a scalable quantum information processing system implemented with QDs [1][2][3]. There has been considerable recent effort in exploring different techniques for fabricating site-controlled QDs, including lithographic patterning of growth substrates [4,5], stress-induced positioning within micropillars [6], and the growth of QDs within seeded nanowires [7]. QDs within nanowires have been shown to have both high photon-extraction efficiencies [8,9], and good single-photon source characteristics [7][8][9]. Furthermore, magneto-photoluminescence spectroscopy studies [10,11] of InAsP QDs in InP nanowires have shown that QDs in nanowires may be a promising platform for hosting spin qubits, but to our knowledge, thus far there have been no demonstrations of the fundamental spin manipulation operations [12][13][14][15][16][17][18][19][20] on spins trapped in nanowire-hosted QDs, nor in any other site-controlled QD devices. Nanowire QDs are a substantially different platform, with respect to both material and structural characteristics, than self-assembled QDs in a bulk host (the system with which the majority of spin qubit studies using optically active QDs have been performed to date). Nanowire QDs have high brightness due to the waveguiding effect of the needle-like structure in which each QD is embedded. Brightness is an advantage that nanowire QDs share with QDs embedded in micropillars, but they currently offer the additional advantage of being deterministically positionable without compromising optical quality. Here we demonstrate all-optical initialization of spin qubits embedded in several deterministically positioned InP nanowire QDs, which is a first step towards realizing more complex spin experiments with nanowire QDs, including coherent spin control [3] and spin-photon entanglement generation [2]. Methods We studied a sample with InAsP QDs embedded in InP nanowires that was grown using vapor-liquid-solid epitaxy on a (111)B InP substrate; the growth details can be found in [7].
The substrate was covered with a SiO 2 mask containing a grid of apertures, which were produced using e-beam nanopatterning. The growth of each nanowire was seeded by placing a gold nanoparticle in the center of each aperture [7,22,23] and consisted of a two-step process that involves growth of a core nanowire containing the InAsP QD followed by growth of a shell, which results in the needle-like shape of the nanowires. Photoluminescence spectra were measured using a custom double-grating spectrometer setup with ∼10 μeV resolution, which is necessary in order to spectrally select just a single emission line from the QD, and measure its signal on a single-photon counter, while rejecting light from a CW laser that is used for the spin initialization and has its center frequency near the QD emission line that is being collected. For all the experiments that required application of a magnetic field, we used the Voigt geometry, since this is the configuration that is used for spin control with optical pulses [15,16,18], and for generation of spin-photon entanglement [24][25][26][27]. Results In this article we present results from a typical quantum-dot-nanowire device in our sample; the QD exhibited emission under CW above-band (780 nm) excitation that was both bright and spectrally narrow. Figure 1(a) shows the polarization-resolved spectra from the QD. At saturation the QD exhibited a linewidth of ∼60 μeV full-width-at-half-maximum, and an energy difference of δE=1.6 μeV between the two orthogonal linear polarization components (measured between the fitted peak centers; see figure 1(a). The emission intensity of both H-and V-polarized lines have a linear dependence on the above-band laser power, until saturation is reached at approximately 550 nW ( figure 1(b)). The linear power dependence is consistent with these lines corresponding to single exciton emission, as opposed to biexciton emission (which would exhibit a quadratic power dependence). To determine that the QD was charged, we used magneto-photoluminescence spectroscopy in the Voigt configuration (magnetic field perpendicular to the nanowire growth direction): figure 1(c) shows the photoluminescence signal from the QD as a function of its emission energy and polarization. The photoluminescence spectra clearly show a four-fold splitting, which is consistent with emission from a charged dot [28]. We note that in this figure and the remainder of the paper, we define H as the linear polarization that is θ=50°relative to the magnetic field and V as the linear polarization that is at θ=140°relative to the magnetic field, as shown in the illustration in figure 1(d). We obtained further evidence that the QD was charged by recording photoluminescence spectra as a function of the applied magnetic field strength. Figure 2(a) shows the photoluminescence spectra of the QD for varying magnetic fields in the range from 0 to 5 T for both the H and V polarizations; the splitting of the emission into four lines is clearly evident. This is expected for a charged dot, for which there are two contributions to the transition energies that depend on the magnetic field: a linear dependence due to the Zeeman effect [28] (figure 2(b)), and a quadratic dependence resulting from the diamagnetic shift [29,30], (figure 2(c)). 
The four-fold splitting of the spectral lines, their linear dependence on the B-field (after the diamagnetic shift has been subtracted), their polarization properties, and their approximately equal brightness, indicate that the QD has both the level structure and the selection rules of a charged QD. Figure 2(d) shows the relevant energy-level diagram for a charged QD in a Voigt-configuration magnetic field [28], and the polarization selection rules for the optical transitions. Figure 3 illustrates our spin-pumping experiments, and shows the main results. We performed a set of four different experiments, to demonstrate that we can perform spin pumping into either the |↑⟩ state or the |↓⟩ state (the two ground-state spin levels), in each case using one of the two (per spin state) available optical transitions. The inset of figure 3(a) illustrates the QD optical transitions we used for pumping and detection in one of the four experiments, and we use it here as an example case to describe the experiment in detail. The principle of the experiment is as follows [31,32]. A fixed-wavelength above-band laser is used to randomize the state of the spin; it does this by incoherently exciting states at far higher energies than the QD trion levels, and through a series of decay processes, some of which are non-spin-preserving, the QD trion levels are randomly populated, and these levels in turn decay and randomly populate the QD ground-state spin levels. The action of the spin-randomization laser is depicted as violet upward wavy arrows. A tunable laser is used to resonantly excite one of the trion states via a vertical transition (|↑⟩-|⇑⟩). If the system is initially in the state |↑⟩, then this laser will cause the trion state |⇑⟩ to be populated. This trion state will then decay to either the |↑⟩ or |↓⟩ state with equal probability (gray downward wavy arrows in the inset of figure 3(a)). If the decay is to the state |↑⟩, then the tunable laser will re-excite the trion state. If, on the other hand, the decay is to the |↓⟩ state, then the tunable laser will no longer be resonant with any transition and the system will be initialized in the |↓⟩ state, until the spin is randomized again. The emission from the |⇑⟩-|↓⟩ transition is spectrally filtered and sent to a single-photon counter, providing a measurement of the spin state [3]. Figure 3(a) shows the collected photon counts as a function of the wavelength of the spin-pumping laser, as it was tuned over the |↑⟩-|⇑⟩ transition. Two traces are shown: one when only the spin-pumping laser was on (in blue), and one when both the randomization laser and the spin-pumping laser were on (in red). When both lasers were on, the data (red points) show a clear resonance, corresponding to spin-pumping-laser photons being absorbed by the |↑⟩-|⇑⟩ transition, and being emitted by the |⇑⟩-|↓⟩ transition. The reason that photons can be absorbed by the |↑⟩-|⇑⟩ transition is that the |↑⟩ state is continually being populated as a result of the randomization laser being on. However, when the randomization laser is turned off, the data (blue points) show no resonance as the spin-pumping laser passed over the |↑⟩-|⇑⟩ transition. This serves as strong evidence that the QD spin has in this case been optically initialized in the |↓⟩ state.
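The pumping cycle just described (resonant excitation out of one spin state, spontaneous decay into either ground state, and shelving in the state that is no longer resonant) can be captured by a minimal three-level rate-equation sketch. The rates below are illustrative placeholders rather than fitted experimental parameters, and the sketch is only in the spirit of, not identical to, the rate-equation model mentioned later in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rates in 1/ns (placeholders, not values fitted to the experiment)
GAMMA = 1.0      # trion radiative decay rate, assumed to branch 50/50 into |up> and |down>
PUMP = 2.0       # pumping rate on the |up> -> trion transition
RANDOMIZE = 0.0  # spin randomization rate from the above-band laser (0 = laser off)

def rates(t, populations):
    p_up, p_down, p_trion = populations
    dp_up = -PUMP * p_up + 0.5 * GAMMA * p_trion + 0.5 * RANDOMIZE * (p_down - p_up)
    dp_down = 0.5 * GAMMA * p_trion + 0.5 * RANDOMIZE * (p_up - p_down)
    dp_trion = PUMP * p_up - GAMMA * p_trion
    return [dp_up, dp_down, dp_trion]

# Start with the spin unpolarised and the resonant pump laser on
sol = solve_ivp(rates, (0.0, 10.0), [0.5, 0.5, 0.0], dense_output=True)
p_up, p_down, p_trion = sol.sol(10.0)
print(f"population shelved in |down> after 10 ns: {p_down:.4f}")
```

With these assumed rates the population is almost entirely shelved in the target spin state within roughly 10 ns, consistent in spirit with the initialization fidelity and timescale quoted later for the actual device.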
(The intuitive reasoning behind the claim of optical initialization is that if the spin is initialized in the |↓⟩ state, then no further photons from the spin-pumping laser can be absorbed, so in the absence of the randomization laser, there will be no excitation of the QD, and hence no photons emitted. In contrast, if the spin was being imperfectly initialized, or the spin relaxation time was very short, then one should expect to observe photons being emitted when the spin-pumping laser is on resonance with the |↑⟩-|⇑⟩ transition. Our data are therefore consistent with our having successfully initialized the spin.) In an analogous manner, figure 3(b) shows how the QD spin can be optically pumped into the |↑⟩ state via the other vertical optical transition (|↓⟩-|⇓⟩), and figures 3(c) and (d) show how the QD spin can be optically pumped using the two available diagonal optical transitions. To demonstrate the robustness and repeatability of this spin qubit initialization technique in nanowire QDs, we performed the same type of optical pumping experiments on several nanowire QDs on the same sample, which yielded essentially identical results to those shown in figure 3 (see supplementary data). To further characterize the spin pumping process, we also studied the spin-pumping signal as a function of the applied spin-pumping-laser power. Figure 4 shows the dependence of the peak spin-pumping signal on the spin-pumping-laser power in the experiment in figure 3(a) (where the spin is pumped into the |↓⟩ state using the |↑⟩-|⇑⟩ transition). In particular, the red data points in figure 4(a) show that the peak of the spin-pumping signal initially increases rapidly with applied laser power, and saturates once the spin-pumping-laser power reaches approximately 350 nW. The blue data points are from the same experiment, except the randomization laser had been turned off. We note that even with a power well above the value that is sufficient to saturate the |↑⟩-|⇑⟩ transition (and hence cause maximal spin pumping), when the randomization laser is off, there is no increase in the counts as a function of power, which is consistent with high-fidelity optical pumping and excellent spectral-filtering-based rejection of the scattered spin-pumping-laser light. We also studied the effect of the applied laser power on the width of the transition resonance. Figure 4(b) shows the full-width-at-half-maximum linewidth of the resonance when both the randomization laser and the spin-pumping laser were on, as a function of the power of the spin-pumping laser. At lower powers (P_spin-pump < 0.15 μW), we observed a linewidth of less than 20 μeV, with substantial broadening as the power was increased to be above the saturation limit of the transition. The power-dependence measurements in figures 1(b) (above-band excitation photoluminescence power dependence) and 4(a) (spin-pumping laser power dependence) provide valuable information for quantitatively assessing the efficacy of the spin pumping process, since the saturation values can be used to infer the relative rates of spin pumping and spin randomization.

(Figure 3 caption: Collected photons as a function of the spin-pumping-laser energy, for the specific experimental protocols illustrated in the respective insets. Red data points show the collected photon counts when both lasers were on, whereas the blue data points show the photon counts when only the spin-pumping laser was on. For all four spin pumping schemes the powers used were P_spin-pump = 500 nW (spin-pumping laser) and P_rand = 60 nW (above-band spin randomization laser). The insets show the relevant levels of the quantum dot, the applied laser fields, and the photon collection. The net effect of the spin randomization laser (above-band excitation that decays randomly into the two trion states; see main text for details) is shown as upward violet wavy arrows. Spontaneous decay is depicted as downward gray wavy arrows. A tunable CW laser ('Pumping') is scanned across one of the optical resonances. Photons emitted by the yellow-shaded transition ('Detection') are collected and measured by a single-photon counter.)

(Figure 4 caption: Spin-pumping peak amplitude and width as a function of the power of the spin-pumping laser, in the experiment described in figure 3. The randomization laser power was kept constant at P_rand = 60 nW. (a) Red: photon counts at the peak of the spin-pumping resonance (when both lasers were on), as a function of the spin-pumping laser power. Blue: photon counts measured when the randomization laser was off. (b) The full-width-at-half-maximum linewidth of the resonance shown in figure 3(a), as a function of the spin-pumping laser power.)

We have used a rate-equation model (described in detail in the supplementary data) of the spin pumping experiment to analyze our experimental results; it shows that our data are consistent with optical spin pumping causing spin qubit initialization with a fidelity of 99% in less than 10 ns. This is similar to the reported performance of spin pumping in self-assembled InAs QDs [13,16,18]. We used our model to infer a lower bound on the spin lifetime of T_1 > 3 μs. These values suggest promise for the use of charges in nanowire QDs as spin qubits. Conclusion As is the case with micropillar QDs, the structural characteristics of nanowire QDs render them incompatible with most scalable two-qubit gate proposals for spin qubits, due to the lack of a direct way for the trapped electrons (or holes) to interact. Spin qubits in nanowire QDs (and micropillar QDs) are thus perhaps better suited to quantum computing or repeater architectures in which the stationary qubits are entangled indirectly, by interfering and detecting photons that are entangled with the spins [2], which is also one of the leading approaches for scaling free-space trapped-ion qubits [21]. Our experiments make a contribution towards the intermediate-term goal of entangling two spatially separate QD spins on a single chip by showing that one of the steps of the entanglement-generation protocol - spin initialization - can be performed with QDs that are deterministically positioned and have high brightness. In this paper we have demonstrated that in the InAsP-QD/InP-nanowire system a charged QD in a Voigt magnetic field does yield two optical Λ-systems that can be manipulated, and we have demonstrated optical spin pumping using independent experiments on both transitions in both Λ-systems. Using several different nanowires, we were able to show that spin measurement as part of the optical pumping process is possible in the InAsP-QD/InP-nanowire system. These experiments were all performed on site-controlled nanowires, making this the first demonstration, to our knowledge, of optical pumping of site-controlled QD spins. Funding sources We acknowledge financial support from the Air Force Office of Scientific Research, the MURI Center for Multi-functional Light-Matter Interfaces based on Atoms and Solids, and from the Army Research Office (grant number W911NF1310309).
This research was also supported by the Cabinet Office, Government of Japan, and the Japan Society for the Promotion of Science (JSPS) through the Funding Program for World-Leading Innovative R&D on Science and Technology (FIRST Program). KGL acknowledges support by the Swiss National Science Foundation. PLM was supported by a David Cheriton Stanford Graduate Fellowship.
3,873.8
2014-09-16T00:00:00.000
[ "Physics" ]
Hyers–Ulam Stability of Additive Functional Equation Using Direct and Fixed-Point Methods In this present work, we obtain the solution of the generalized additive functional equation and also establish Hyers–Ulam stability results, by using the fixed-point alternative, for the generalized additive functional equation χ(∑_{g=1}^{l} v_g) = ∑_{1≤g<h<i≤l} χ(v_g + v_h + v_i) − ∑_{1≤g<h≤l} χ(v_g + v_h) − ((l^2 − 5l + 2)/2) ∑_{g=1}^{l} (χ(v_g) − χ(−v_g))/2, where l is a nonnegative integer with l ∈ ℕ − {0, 1, 2, 3, 4}, in Banach spaces. Introduction The problem of Ulam-Hyers stability concerns determining circumstances under which, given an approximate solution of a functional equation, one may locate an exact solution that is close to it in some sense. The investigation of the stability problem for functional equations goes back to a question of Ulam [1] about the stability of group homomorphisms, which was answered affirmatively for Banach spaces by Hyers [2,3]. It was further generalized, and interesting results were obtained, by a number of authors [4][5][6]. In 2019, Park et al. [33] introduced an additive s-functional inequality. Using the fixed-point method and the direct method, they established the Hyers-Ulam stability of this inequality in complex Banach spaces. They also examined the Hyers-Ulam stability of homomorphisms and derivations in complex Banach algebras. In 2018, Almahalebi [34] investigated the quadratic functional equation in Banach spaces and established a hyperstability result for the same equation through the fixed-point approach. Radu [35] obtained various results on the stability problem by using the fixed-point alternative. He applied the fixed-point method to examine the stability of the Cauchy functional equation and of Jensen's functional equation. After his work, numerous authors used the fixed-point method to investigate several functional equations [36][37][38][39][40][41]. The functional equation f(x + y) = f(x) + f(y) is called the Cauchy additive functional equation, and it is the most famous functional equation. As f(x) = cx is a solution of (1), every solution of the additive equation is called an additive function. In this present work, we derive the solution of the generalized additive functional equation and establish Hyers-Ulam stability results by using the direct and fixed-point methods for the generalized additive functional equation (2), where l ≥ 5 is an integer, in Banach spaces. General Solution of the Functional Equation (2) In this section, we derive the general solution of the generalized additive functional equation (2). Here, we consider Φ and Ω to be real vector spaces. Proof. Suppose a mapping χ: Φ ⟶ Ω satisfies the functional equation (2). Substituting into (2) and using the property of odd functions, we have (3) for all v ∈ Φ. Replacing v by 2v in (3), we obtain (5) for all v ∈ Φ. Again, replacing v by 2v in (5) and using (3), we have the corresponding relation for all v ∈ Φ, which we can generalize to any nonnegative integer n; from (2), we then obtain our desired result, equation (1). Remark 1. Let Ω be a linear space and let a function χ: Φ ⟶ Ω satisfy the functional equation (2). Then, the following claims hold: In Sections 3 and 4, we take Φ to be a normed space and Ω to be a Banach space. For convenience, we define a function Θ: Φ ⟶ Ω as follows. Hyers-Ulam Stability of the Functional Equation (2): Direct Method In this section, we investigate the Hyers-Ulam stability of the generalized additive functional equation (2) in Banach spaces by using the direct method. Suppose the inequality (10) holds for all v_1, v_2, . . .
, v_l ∈ Φ; then there exists a unique additive mapping Ψ: Φ ⟶ Ω satisfying equation (2) and (11) for all v ∈ Φ. From equality (12), we get (13) for all v ∈ Φ. Replacing v by 2v in (13), we obtain (14) for all v ∈ Φ. From (14), we achieve (15) for all v ∈ Φ. Adding together (13) and (15), we get the outcome (16) for all v ∈ Φ. It follows from (13), (15), and (16) that we can generalize this as in (17), for all v ∈ Φ. In order to establish the convergence of the sequence χ(2^w v)/2^w, replace v by 2^s v and also divide by 2^s in (17). We conclude that, for some w, s > 0, the corresponding estimate holds for all v ∈ Φ. Therefore, the sequence χ(2^w v)/2^w is a Cauchy sequence. As Ω is complete, there exists Ψ: Φ ⟶ Ω such that Ψ(v) = lim_{w⟶∞} χ(2^w v)/2^w for all v ∈ Φ. Taking the limit w ⟶ ∞ in (17), we obtain that result (11) holds for all v ∈ Φ. To prove that the function Ψ satisfies equation (2), replace (v_1, v_2, . . . , v_l) by (2^w v_1, 2^w v_2, . . . , 2^w v_l) and also divide by 2^w in (10); we then get the corresponding inequality for all v_1, v_2, . . . , v_l ∈ Φ. Taking the limit w ⟶ ∞ in this inequality and using the definition of Ψ(v), we see that the function Ψ satisfies equation (2). To prove that the function Ψ is unique, let φ: Φ ⟶ Ω be another additive mapping satisfying the functional equation (2) and (11). Hence, Ψ is unique. Now, replacing v by v/2 in (12), we have the corresponding relation for all v ∈ Φ. The rest of the proof is similar to the case ζ = 1, so for ζ = −1 we can prove the results in a similar manner. Hence, the proof is completed. Corollary 1. Let ϕ and ϑ be positive real numbers. If there exists a mapping Θ: Φ ⟶ Ω satisfying the inequality for all v_1, v_2, . . . , v_l ∈ Φ, then there exists a unique additive mapping Ψ: Φ ⟶ Ω such that, for all v ∈ Φ, the corresponding estimate holds. Hyers-Ulam Stability of the Functional Equation (2): Fixed-Point Method In this section, we examine the Hyers-Ulam stability of the generalized additive functional equation (2) in Banach spaces by using the fixed-point method. Theorem 3. Let Ψ: Φ ⟶ Ω be a mapping for which there exists a mapping ξ: Φ^l ⟶ [0, ∞) such that it satisfies the inequality and has the property for all v ∈ Φ. Then, there exists a unique additive mapping Ψ: Φ ⟶ Ω satisfying equation (2) and the corresponding estimate for all v ∈ Φ.
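Although the displayed inequalities are not reproduced above, a short numerical sketch can illustrate the two ingredients of the argument: every additive map χ(v) = cv satisfies the generalized functional equation as reconstructed in the abstract, and the direct-method sequence χ(2^w v)/2^w converges when χ is additive up to a bounded perturbation. The perturbation used below is an arbitrary assumption chosen for illustration.

```python
# Numerical sketch: (i) an additive map chi(v) = c*v satisfies the generalized
# additive functional equation; (ii) the direct-method sequence chi(2^w v)/2^w
# converges when chi is additive up to a bounded perturbation.
import itertools
import math
import random

L = 6  # any integer l >= 5

def residual(chi, v):
    """LHS minus RHS of the generalized additive functional equation."""
    lhs = chi(sum(v))
    rhs = sum(chi(v[g] + v[h] + v[i])
              for g, h, i in itertools.combinations(range(L), 3))
    rhs -= sum(chi(v[g] + v[h])
               for g, h in itertools.combinations(range(L), 2))
    rhs -= (L**2 - 5 * L + 2) / 2 * sum((chi(x) - chi(-x)) / 2 for x in v)
    return lhs - rhs

random.seed(0)
v = [random.uniform(-5.0, 5.0) for _ in range(L)]

additive = lambda x: 2.5 * x                        # exact additive solution
print("residual for chi(v) = 2.5 v:", residual(additive, v))   # ~0 up to rounding

perturbed = lambda x: 2.5 * x + 0.3 * math.sin(x)   # additive up to a bounded term
for w in (1, 5, 10, 20):
    print(w, perturbed(2**w * 1.7) / 2**w)          # approaches 2.5 * 1.7 = 4.25
```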
1,505
2020-12-12T00:00:00.000
[ "Mathematics" ]
Viable Inflationary Models in a Ghost-free Gauss-Bonnet Theory of Gravity In this work we investigate the inflationary phenomenological implications of a recently developed ghost-free Gauss-Bonnet theory of gravity. The resulting theory can be viewed as a scalar Einstein-Gauss-Bonnet theory of gravity, so by employing the formalism for cosmological perturbations for the latter theory, we calculate the slow-roll indices and the observational indices, and we compare these with the latest observational data. Due to the presence of a freely chosen function in the model, in principle any cosmological evolution can be realized, so we specify the Hubble rate and the freely chosen function and we examine the phenomenology of the model. Specifically we focus on de Sitter, quasi-de Sitter and a cosmological evolution in which the Hubble rate evolves exponentially, with the last two being more realistic choices for describing inflation. As we demonstrate, the ghost-free model can produce inflationary phenomenology compatible with the observational data. We also briefly address the stability of first order scalar and tensor cosmological perturbations, for the exponential Hubble rate, and as we demonstrate, stability is achieved for the same range of values of the free parameters that guarantee the phenomenological viability of the models. Nearly forty years ago, three of the major problems in contemporary cosmology, namely the Horizon Problem, the Flatness Problem and the Magnetic-Monopoles Problem, were given a successful solution in the context of the inflationary scenario. This scenario was first proposed in Ref. [1] and was further developed in Refs. [2,3]. According to the inflationary scenario, mere fractions of a second after the Big Bang, the spatial coordinates of the Universe expanded exponentially. An expansion of this sort is supposed to last from about 10^-36 sec to 10^-15 sec, and the size of the Universe is increased by a factor of 10^26. The nature of this scenario is rather bizarre for classical cosmology, since traditional Big Bang Friedmann-Robertson-Walker (FRW) models do not match such a fast evolution of the Universe [4][5][6]. The first approximation is to consider the expansion as a de Sitter phase of the Universe.
The standard approach to achieve the de Sitter inflationary phase in cosmology is to use scalar fields, and many of the initial models of inflation made use of the scalar field formalism. However, it is also possible to produce an inflationary phase of the Universe in the context of modified gravity, see Refs. [7][8][9][10][11][12][13] for reviews on this. In fact, the first model of f (R) which remains viable up to date is the Starobinsky model [14], and ever since many models have been developed in various forms of modified gravity [7][8][9][10][11][12][13]. In all the modified gravities the key element is that geometric terms are included in the gravitational Lagrangian, which are absent in the Einstein-Hilbert gravity. These terms may dominate the Universe's evolution at early times or even at late times. Such models may include additional curvature terms, namely the f (R) theories, torsional terms namely the teleparallel f (T ) theories, or the Gauss-Bonnet modified gravities f (G) theories, as well as the generalized f (R, G) theories (see [7][8][9][10][11][12][13]). Such theoretical formulations of gravity are able to model both the early-time expansion and In this section we shall recall the essential features of the ghost free f (G) gravity developed in Ref. [16]. The whole ghost-free construction scheme is based on introducing a Lagrange multiplier λ in the standard f (G) gravity action, so the ghost-free action is the following, where µ is a mass-dimension one constant. Upon variation with respect to the Lagrange multiplier λ, we obtain the following constraint equation, Effectively, the kinetic term is a constant, so it can be absorbed in the scalar potential in the following way, and in effect, the action of Eq. (1) is rewritten as, The equations of motion for the action (4), are (2) and the following, Upon multiplication of Eq. (6) with g µν , we get, By solving Eq. (7) with respect to λ, we get, Let us now see how the equations of motion become if the metric background is a flat Friedmann-Robertson-Walker (FRW), with line element, Assuming that the functions λ and χ are only cosmic time dependent, and also that no matter fluids are present, that is, T matter µν = 0, Eq. (2) has the following simple solution, Hence, the (t, t) and (i, j) components of Eq. (6) can be written, and in addition from Eq. (5) we get, 0 = µ 2λ + 3µ 2 Hλ + 24H 2 Ḣ + H 2 h ′ µ 2 t −Ṽ ′ µ 2 t . By solving Eq. (11) with respect to λ we get, It is easy to see that by combining Eqs. (14) and (13), we easily obtain Eq. (12). Also by solving Eq. (12) with respect to the scalar potentialṼ µ 2 t , we get, Hence, for an arbitrarily chosen function h(χ(t)), and with the potentialṼ (χ) being equal to, then we can realize an arbitrary cosmology corresponding to a given Hubble rate H(t). Finally, the functional form of the Lagrange multiplier is equal to, The resulting theory with Lagrangian (4) is a form of the scalar Einstein-Gauss-Bonnet gravity and in the next section we shall extensively discuss the inflationary dynamics of this model. The presence of the arbitrary function h(χ) provides us with the freedom of realizing several viable cosmologies. III. INFLATIONARY DYNAMICS OF THE GHOST-FREE f (G) MODEL As we already mentioned, the ghost-free f (G) model of Eq. (4) is a sort of scalar Einstein-Gauss-Bonnet model [19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36], the cosmological perturbations of which were studied in Ref. [37]. 
In this section we shall use the formalism, notation and results of Ref. [37], and we shall calculate the spectral index of primordial curvature perturbations and the tensor-to-scalar ratio for the model (4), by specifying the functional form of h(χ) and the Hubble rate. Then, by replacing the cosmic time with the e-foldings number, we shall express all the observational and slow-roll indices as functions of the e-foldings number, and we shall put the phenomenology of the model into test by confronting the resulting theory with the latest observational data. We begin by defining the functions Q i (χ) (see [37] for more details), as follows, where H is the Hubble rate, H ≡ȧ/a. In addition, the wave speeds c A and c T become, , ∂ 2 f ∂X 2 = 0 and F = 1 in our case. Note that c A is the wave speed of the perturbed field in the context of the perturbed FRW metric, and c T is the sound speed. For more details on this we refer the reader to [37]. The definition of the wave speeds is for the general Gauss-Bonnet corrected f (R, χ) theory with F = ∂f ∂R , but in our case f (R, χ) = R and F = 1. Also the waves speeds are affected from the Gauss-Bonnet coupling via the functions Q f and Q b which in our case have the form (18). As a result, the two wave speeds are further simplified with the wave speed of the perturbed field c A being trivial as in the classical case, while the wave speed of the gravitational waves in non-trivial, In order to calculate the slow-roll parameters, we first need to determine the function E(R, χ, X) which is defined as follows [37], The slow-roll parameters are defined as follows [37] (R, χ, X) HE(R, χ, X) , The two spectral indices, for scalar and for tensor perturbations in the inflationary era respectively, are defined using the slow-roll parameters [37], Finally, the tensor-to-scalar ratio is equal to [37], The above expressions of the parameters for the slow-roll inflationary dynamics, are in fact functions of the cosmic time, t. However, such a description is not sufficient for our study, since the preferable variable to perfectly quantify the evolution during the inflationary era is the e-foldings number, N . So we need to transform the above relations with respect to the e-foldings numbers. At first, we consider a given Hubble expansion rate for the inflationary era, as a function of time, H = H(t). The e-foldings number is defined as where t i is the initial and t f the final moments of inflation. Considering a given initial moment for inflation, t i ∈ [0, 10 −36 ], and an unspecified final moment, t, the e-foldings number is obtained via Eq. 25 as a function of time, N = N (t). Supposing this function is reversible, time is also given as a function of the e-foldings number, t = t(N ). Consequently, the first-and the second-order derivatives with respect to time, are transformed into first-and secondorder derivatives with respect to the e-foldings number, as follows, Since the scalar field, χ = χ(t) is a function of time, its potential,Ṽ (χ) =Ṽ (χ(t)), and the Lagrange multiplier, λ = λ(t), the coupling function, h(χ) = h(χ(t)), as well as the Ricci scalar, the Gauss-Bonnet invariant and the function E(R, χ) = E (R(t), χ(t)) are also functions of the cosmic time. As a result, they can all be rewritten with respect to the e-foldings number. Furthermore, the functions Q i (χ) are also transformed, taking the following forms, where the prime denotes differentiation with respect to the e-foldings number. 
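A brief numerical sketch of this change of variables may be helpful. Assuming, purely for illustration, the quasi-de Sitter rate H(t) = H0 − H1 t considered later in the text, with H0 and H1 chosen inside the ranges quoted there, the snippet below builds N(t) by integrating H, inverts it to obtain t(N), and checks the chain rule d/dt = H d/dN on a test function.

```python
# Sketch: converting between cosmic time and e-foldings for a given Hubble rate.
# Illustrative parameter values only; H(t) = H0 - H1*t is the quasi-de Sitter case.
import numpy as np

H0, H1 = 1.0e14, 1.0e26          # assumed values in sec^-1 and sec^-2
t = np.linspace(0.0, 1.0e-12, 200001)
H = H0 - H1 * t                   # quasi-de Sitter Hubble rate

# e-foldings N(t) = integral of H dt from t_i = 0 (trapezoidal rule)
N = np.concatenate(([0.0], np.cumsum(0.5 * (H[1:] + H[:-1]) * np.diff(t))))

def t_of_N(n):
    """Invert N(t) -> t(N) by interpolation (N is monotonic while H > 0)."""
    return np.interp(n, N, t)

# chain rule check: df/dt = H * df/dN for a test function f(t) = sin(H1 * t^2)
f = np.sin(H1 * t**2)
df_dt = np.gradient(f, t)
df_dN = np.gradient(f, N)
i = 100000                        # an interior sample point
print("t(N = 30) =", t_of_N(30.0))
print("df/dt =", df_dt[i], "  H * df/dN =", H[i] * df_dN[i])
```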
In the same manner, we may redefine the wave speed for the gravitational waves, The next step is to express the slow-roll parameters, ǫ i , with respect to the e-foldings number, and the resulting expressions are, Through these, the spectral indices and the tensor-to-scalar ratio are directly calculated with respect to the e-foldings number, using Eqs. (23) and (24). What remains is to define a specific coupling function, h(χ), as well as the Hubble rate for the cosmological FRW background, and also to calculate the spectral indices and the tensor-to-scalar ratio and compare our results with that of the latest Planck [17] and BICEP2/Keck-Array [18] observations. With regard to the coupling function, we shall assume that it has either exponential or power-law forms, while with regard to the Hubble rate, we shall firstly assume the de Sitter evolution for a warm up study, and finally we shall assume the quasi-de Sitter evolution. IV. THE CASE OF DE SITTER BACKGROUND EVOLUTION In the de Sitter case, the Hubble rate is constant as a function of the cosmic time, therefore the e-foldings number and the cosmic time are related as follows, As a result, the Ricci scalar and the Gauss-Bonnet invariant are both constant, Finally, the scalar field given by Eq. (10), takes the following form, Using, Eqs. (30), (31) and (33) and in addition a specific form for the function h(χ), we can calculate the slow-roll indices and the observational indices for the de Sitter evolution cosmology. A. A power-law coupling function, h(χ) = γχ b Let us assume that the coupling function is a simple power law, where γ and b are real constants, to be used as free parameters later. Using Eqs. (33) and (31), we can write the coupling function first as function of time, and then as a function of the e-foldings number, Using Eq. (16), we may derive the potential as a function of the e-foldings number, as well as the Lagrange multiplier, From the equations in (27), we can write the Q i functions with respect to the e-foldings number, as follows, while the wave-speeds appearing in Eqs. (19) and (28) are, The function E(R, χ) is written with respect to the e-foldings number in the following way, Using Eqs. (29), (30), (36) and (41), we obtain the slow-roll parameters of the de Sitter evolution case, which are, Using the above results, we can proceed in calculating the spectral indices, from Eqs. (23), and the tensor-to-scalar ratio, from Eq. (24), r = 16 Having these at hand, we can compare them directly to the Planck [17] and the BICEP2/Keck-Array data [18], which indicate that n S = 0.9649 ± 0.0042 and r < 0.064. It can be shown that the viability of the theory is achieved for a restricted range of values of the free parameters. Actually, if we set N = 50 (or N = 60) to indicate the end of the inflationary era, it is easy to see that the values of H 0 , γ and µ do not affect the resulting values. In effect, we choose γ = 1 and µ = 1 sec −1 for simplicity and H 0 = 10 26 sec −1 (or H 0 = 10 27 sec −1 ). The tensor-to-scalar ratio is constantly close to zero, while the spectral index coincides with the Planck data only for µ ∼ 4 sec −1 . Namely, n S = 0.9644 only for b = 3.78 when N = 50, or b = 4.136 for N = 60 for the same values, r ∈ [10 −50 , 10 −20 ]. In Fig. 1 we present the plots of the spectral index and of the tensor-to-scalar ratio as a function of b. 
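The viability test applied here, and repeatedly in the following sections, is the same comparison against the Planck 2018 and BICEP2/Keck-Array bounds. A trivial helper of the following kind, with the bounds hard-coded from the values quoted in the text, is all that is needed once n_S and r have been computed for a given parameter set; the example inputs are arbitrary.

```python
# Sketch: the viability test used throughout, with the Planck 2018 bounds
# n_s = 0.9649 +/- 0.0042 and r < 0.064 quoted in the text.
def planck_viable(n_s: float, r: float) -> bool:
    """Return True if (n_s, r) lies inside the quoted observational bounds."""
    return (0.9649 - 0.0042 <= n_s <= 0.9649 + 0.0042) and (r < 0.064)

# arbitrary example values, e.g. from a scan over the exponent b at N = 50
print(planck_viable(0.9644, 1.0e-30))   # True: inside both bounds
print(planck_viable(0.9644, 0.08))      # False: tensor-to-scalar ratio too large
```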
As a result, a power-law coupling function for the de Sitter background evolution, may generate a viable inflationary model, only under the strict assumption of h(χ) ∼ χ 4 . B. An Exponential Coupling Function, h(χ) = γe bχ In this case, we assume that the coupling function h(χ) has the following exponential form, where γ and b are real constants, to be used as free parameters later. Using Eqs. (33) and (31), we can write the coupling function first as function of time, and then as a function of the e-foldings number, At this point, by using Eq. (16), we may derive the potential as a function of the e-foldings number, as well as the Lagrange multiplier, Accordingly from Eqs. (27), we derive the Q i functions with respect to the e-foldings number, as follows, while the wave-speeds appearing in Eqs. (19) and (28), take the following form, The function E(R, χ) is written with respect to the e-foldings number as follows, Using Eqs. (29), (30), (47) and (52), we obtain the slow-roll parameters for the de Sitter evolution case with an exponential coupling function, which is, By using the above results, we can proceed in calculating the spectral indices, from Eqs. (23), and the tensor-to-scalar ratio, from Eq. (24), In order to examine the viability of the model, we need to calculate the numerical values for the spectral index n S and the tensor-to-scalar ratio r, for various values of the parameters H 0 , γ, b and µ at the end of inflation (for N ∈ [50, 60]) and compare these values to the observational results of the Planck collaboration [17] and the BICEP2/Keck-Array [18]. However in this case, no simultaneous compatibility with the observations can be obtained, and more specifically, the values of n S and r do not depend on the choice of γ, so we set it equal to one for simplicity. They also do not depend on the number of e-foldings, so N = 50 and N = 60 are used in the same manner. They depend on H 0 , b and µ, though, thus assuming that H 0 ∼ 10 27 sec −1 and setting µ = 10 12 sec −1 , we get b = 35.6 so that n S = 0.9644 (Planck's previous result) however, the resulting value of the tensor-to-scalar ratio is excluded. This can also be seen in Fig. 2. V. A FLAT QUASI-DE SITTER VACUUM AS BACKGROUND Now we assume that the Universe's evolution is described by the quasi-de Sitter Hubble rate, Integrating Eq. (56) with respect to the cosmic time, we obtain and solving with respect to time, we may write the latter with respect to the e-foldings number as follows, As a result, the Hubble rate with respect to the e-foldings number becomes, while the Ricci scalar and the Gauss-Bonnet scalar are equal to, Finally, we also express the scalar field of Eq. (10) with respect to the e-foldings number as follows, As in section IV, the Eqs. (30), (31) and (33) and a coupling function allow us to reveal the phenomenological implications of the model by calculating the observational indices of inflation. A. An exponential coupling function, h(χ) = γe bχ At first, we shall assume that the function h(χ) has the functional form given in Eq. (45), which in the case at hand is written in terms of the e-foldings number as follows, By using Eq. (16), we may derive the potential as a function of the e-foldings number, as well as the Lagrange multiplier, The functions Q i with respect to the e-foldings number are derived from the Eqs. (27), while the wave-speeds are Interestingly, both the Q i functions and the wave-speeds have a trivial form in the case of the quasi-de Sitter expansion. 
This triviality is independent of the coupling function, as we see later on, and should be attributed to this specific FRW background. The function E(R, χ) with respect to the e-foldings number takes the form, Using Eqs. (29), (30), (47) and (52), we obtain the slow-roll parameters of the flat quasi-de Sitter case with an exponential coupling function. Interestingly, the five of them take the following trivial form, that seems independent of the coupling function, while the fourth has a long and complex form depending on the coupling function, where ε exp is some notation for the complicated functional form of the slow-roll index ǫ 4 . Similarly, the spectral indices and the tensor-to-scalar ratio are also long and complex functions of the e-foldings number, the mass µ and the model parameters, H 0 and H 1 due to the expansion rate and γ and b due to the coupling function, thus we do not present them in close form. What is interesting to note is that the spectral indices and the tensor-to-scalar ratio yield the same values independently of which case of Eq. (57) we will use. Again, we perform comparisons using the observable values for n S and r obtained by the Planck with their latest data [17], along with [18]. As we stated before, the spectral index of the scalar modes must be within the interval [0.9607, 0.9691] and mean n S = 0.9649; the tensor-to-scalar mode, on the other hand, is restricted below 0.1 by [18], while [17] restricts further as r < 0.064. In our case, the parameters γ and b, as well as the mass µ of the scalar field seem not to affect the numerical values of the spectral index or the tensor-to-scalar ratio. As a result, we consider them equal to unity (γ = b = 1 and µ = 1 sec −1 ), so that the analysis is simplified and focused on the rest of the parameters. The e-foldings number is chosen N = 50 and N = 60, so as to indicate the end of inflation, but this also does not alter the results. As for the expansion rate, given that H 0 ≥ 10 14 sec −1 for H 1 ≈ 10 26 sec −2 (or that H 0 ≥ 5 × 10 14 sec −1 for H 1 ≈ 10 27 sec −2 ), the spectral index approaches unity, restricting our choices. We consider H 0 to be in the interval [10 12 , 10 15 ] sec −1 and H 1 in the respective interval [10 26 , 10 29 ] sec −2 , where the spectral index of our model equals to the observable value, as we can see in Fig. 3. For the majority of these cases, the tensor-to-scalar ratio is close to zero, as we can see in Fig. 4. As an example, choosing N = 50 (or N = 60) and H 1 = 10 27 sec −2 , then for H 0 = 4.91375 × 10 14 sec −1 , we have n S = 0.9644 and r = 0.0282787, which comply with the latest data of the Planck collaboration. What we need to notice is that these two parameters (H 0 and H 1 ) need careful fine-tuning and cannot differ significantly for the set of values we gave, otherwise the model collapses before the data. Now let us assume that the function h(χ) takes the form given in Eq. (34), which in terms of the e-foldings number is expressed as follows, From here, using Eq. (16), we may derive the potential as a function of the e-foldings number, as well as the Lagrange multiplier, The Q i functions with respect to the e-foldings number have the same trivial form given in Eqs. (64) and (65). The function E(R, χ) with respect to the e-foldings number takes the form, Using Eqs. (29), (30), (68) and (71), we obtain the slow-roll parameters of the flat quasi-de Sitter case with an exponential coupling function. 
Except from the fourth one, which has a long and complex expression, (N, H 0 , H 1 , a, b, µ) , the rest are given in Eqs. (67). The spectral indices and the tensor-to-scalar ratio have the same form as in the case of the exponential coupling function, presented above. Again, we perform comparisons using the observable values for n S and r obtained by the Planck with their latest data [17], along with [18]. We assume that the parameters γ and b, as well as the mass µ are equal to unity (γ = b = 1 and µ = 1 sec −1 ), so that the analysis is simplified and focused on the rest of the parameters. The e-foldings number is chosen N = 50 and N = 60, and as for the expansion rate, given that H 0 ≥ 10 14 sec −1 for H 1 ≈ 10 26 sec −2 (or that H 0 ≥ 5 × 10 14 sec −1 for H 1 ≈ 10 27 sec −2 ), the spectral index approaches unity, restricting our choices. We consider H 0 to be in the interval [10 12 , 10 15 ]sec −1 and H 1 in the respective interval [10 26 , 10 29 ]sec −2 , where the spectral index value of our model becomes equal to the observable value, as we can see in Fig. 5. For the majority of these cases, the tensor-to-scalar ratio is close to zero, as we can see we have n S = 0.9644 and r = 0.0400002 (or r = 0.0333335), that match the latest data of the Planck collaboration. Again, these two parameters (H 0 and H 1 ) need careful fine-tuning and cannot differ significantly from the above values. VI. THE CASE OF AN EXPONENTIAL HUBBLE EVOLUTION Finally, let us assume that the evolution of our Universe is described by the following Hubble rate, where H 0 and Ω are model parameters with both having mass dimension [+1]. The Hubble rate of Eq. (73) becomes approximately a quasi de-Sitter like evolution at early times, when t → 0, that is, and also the exit from the inflationary epoch occurs at a finite time t f , which is, Moreover such exponential type Hubble parameter has been used in previous works, in the context of f (R) gravity [38,39] as well as in different theoretical frameworks [40,41]. Motivated by such properties of H = H 0 e −Ωt , here we use it in the context of ghost free f (G) gravity to describe the inflationary phase of our Universe we will test the viability of the model by confronting it with the Planck 2018 constraints. Also, we can express the cosmic time as a function of the e-foldings number N , by using the definition of the latter, where t h is the horizon crossing time instance. Inverting Eq. (76), we get t h in terms of N as follows, This expression of t h is important, since the inflationary parameters will be calculated at the horizon crossing time instance. In the following we will calculate the slow-roll indices and the observational indices of inflation by specifying the function h(χ). A. Exponential coupling : h(χ) = e −αχ Let us assume that h(χ) = e −αχ where α is a model parameter having mass dimension [-1]. For this exponential function h(χ), and also for the Hubble rate chosen as in Eq. (73), the scalar potential is equal to, while the Lagrange multiplier is equal to, Accordingly, the function E defined in Eq. (21) evaluated at the horizon crossing time instance, so by expressing it in terms of the e-foldings number, this reads, We also need to evaluate the expression ofĖ (= dE/dt) as it will be needed for the calculation of the slow-roll indices. In terms of the e-foldings number, this reads, with T = Ω(1+N ) H0 . Furthermore, by using Eq. 
(18), we explicitly determine the functions Q i in terms of e-foldings number, Having the above expressions at hand, we can easily calculate the spectral index n s and tensor-to-scalar ratio, which are, and where A 1 and B 1 are defined as follows, It may be noticed that n s and r depend on the parameters Ω/H 0 , αµ 2 /Ω, κH 0 and N . We can now directly confront the spectral index and the tensor-to-scalar ratio with the Planck 2018 constraints and the BICEP-2 Keck-Array data, which recall that constraint the observational indices as: n s = 0.9649 ± 0.0042 and r < 0.064, as shown earlier. For the model at hand, n s and r lie within the Planck constraints for the following ranges of parameter values: 0 Ω/H 0 ≤ 0.035 , 0 αµ 2 /Ω ≤ 1.5 with κH 0 ∼ 0.01 and N = 60 and this behavior is depicted in Fig. 7. At this stage it deserves mentioning that an exponential coupling function in a scalar GB theory (without scalar field potential) admits, at early times, slowly expanding solutions of the form a(t) = (At + B) 1/5 (see [42]) and thus exhibits an epoch of deceleration. However here, we show that in the presence of ghost free f (G) gravity, the exponential coupling function may be considered as a "good inflationary" model, which allows an early acceleration and also it is compatible with observations. Before closing, we can also notice that if some sort of slow-roll conditions are employed in the model, viability with the observational data can also be achieved. The slow-roll conditions in the ghost free Gauss-Bonnet scenario are the following, The first condition carries the information about the slow-evolution of the Hubble rate, while the last two demand a slowly evolving of the function h(χ). These conditions, and especially the last two can significantly constrain the and the Lagrange multiplier is, Furthermore, the function E(R) and consequently its derivative, evaluated initially at the horizon crossing time instance, and expressed eventually in terms of the e-foldings number, are equal to, with S = ln H0 Ω (1+N ) . Accordingly, the functions Q i , in terms of e-folding number, are equal to, Hence, the spectral index becomes in this case, and the tensor-to-scalar ratio is equal to, respectively, with A 2 and B 2 being defined as follows, and B 2 (Ω/H 0 ,µ 2 /(ΩM ), n, κH 0 , N ) Now we shall confront the resulting theory with the observational constraints, by assuming two different values for the parameter n, namely, n = 2 and n = 3. For n = 2, the tensor-to-scalar ratio acquires a minimum value r min = − 4 (1+N ) which is equal to r min = 0.065 for N = 60 ( and to r min = 0.078 for N = 50 ). The behavior of the tensor-to-scalar ratio as a function of the free parameters, is given in Fig. 8. As it can be seen in Fig. 8 the Fig. 9 we can see that the spectral index and the tensor-to-scalar ratio can be simultaneously compatible with the observational data, for a wide range of values of the free parameters. Before closing this subsection, we need to comment that it was shown in [42] that a quadratic coupling function in a scalar GB theory (without scalar field potential) gives either a pure de-Sitter evolution of our Universe or a de-Sitter solution at early times connected by a Milne phase at late times, while the cubic and higher order coupling functions describe contracting cosmological solutions with a final singularity at asymptotically infinite time. 
Thus none of the power law coupling function corresponding to n ∈ [2, 3] admits a successful inflationary model in scalar GB theory in the absence of scalar potential. However in the context of ghost free f (G) gravity, we demonstrated that h(χ) ∼ χ n with n ∈ [2,3] can realize an accelerating Universe at early times, although only the cubic coupling function h(χ) ∼ χ 3 produces a viable inflationary phenomenology, in contrast to the models studied in [42]. C. A Different Reconstruction Approach In this subsection we shall consider an alternative approach in comparison to the previous cases, by providing the scalar potential and the Hubble rate, and we seek for the function h(χ) that may realize the cosmology with Hubble rate (73). We shall consider two types of potentials, namely exponential and power law potentials and we shall confront the resulting theories with the observational data. and the Lagrange multiplier is, With the above expressions of h(χ) and λ, we get the function E(R) as well as its derivative, which are, respectively, with T = Ω(N +1) H0 , defined earlier. In addition, the functions Q i as functions of the e-foldings number become in this case, Having the above expressions in hand, we determine the explicit expressions of the spectral index and of the tensorto-scalar ratio, which are, respectively, where we took V 0 = Ω 4 . Moreover C 1 , D 1 appearing in Eq. (97)) are defined as follows, From Eqs. (97) and (98), it easily follows that the spectral index of scalar perturbation and the tensor-to-scalar ratio depend on the dimensionless parameters : Ω/H 0 , βµ 2 /Ω, κH 0 and N . These theoretical expressions of n s and r should be confronted with the latest Planck constraints in order to check the viability of the model. As a consequence, it is found that the compatibility with the observational data occurs for a narrow range of values of the free parameters, and particularly for 0.001 ≤ Ω/H 0 ≤ 0.02, 82 ≤ βµ 2 /Ω ≤ 83, κH 0 ∼ 0.01 and N = 60. This can also be seen in Fig. 10 where we present the parametric plot of n s and r. With regard to the exponential potential, the classical single scalar theory has no inherent mechanism to trigger the graceful exit from inflation, since the slow-roll indices are constant and field-independent. However the ghost free f (G) theory has the slow-roll index ǫ 4 which is field dependent, and thus the slow-roll phase ends when this index becomes of order O(1). Moreover we have already shown that the model with V = V 0 e −βχ in f (G) gravity, is also in agreement with Planck observational constraints. Hence the ghost free f (G) gravity can make the exponential scalar potential a phenomenologically appealing model for inflation, in contrast to the single scalar canonical exponential theory. Power law scalar potential As a final consideration, we shall assume that the scalar field potential has the form, where n is a positive integer. For such power law potential, the function h(χ) and Lagrange multiplier are equal to, and respectively. Accordingly the function E(R) expressed in terms of the e-foldings number is equal to, and also its derivative is, For the Hubble rate given in Eq. (73) and with the expression of h(χ) we found above, we can easily find the Q i functions expressed in terms of the e-foldings number, Let us use the above results in order to investigate the viability of a power-law class of potentials. 
According to the latest Planck data, the cubic and quartic potentials are not compatible with the Planck data, so let us investigate whether compatibility with the observations is obtained if the ghost free f (G) theory is used. Let us first assume that n = 3 so we consider the cubic potential first. Using V (χ) = V 0 χ 3 along with the explicit expressions of Q i functions (see the equations in 103, we determine the spectral index and tensor to scalar ratio in terms of the model parameters as follows, and r = 3N (N + 1) where we assumed that V 0 = H 0 (for the cubic potential, V 0 has mass dimension [+1]). Moreover C 2 and D 2 have the following form, parameters, and in particular for 0.001 ≤ Ω H0 ≤ 0.003, 50 µ 2 Ω 2 (κH 0 ) 2/3 52 and N = 60, as shown in Fig. 11. We should note that the single canonical scalar field model with cubic potential without the Gauss-Bonnet coupling yields n s ≃ 0.9089 and r ≃ 0.01, so the spectral index is not compatible with the Planck data. Hence, the presence of the ghost free f (G) gravity can make the cubic potential scalar field class of models to be compatible with the observations. This kind of result is also shown in a different context [36]. Let us now consider the n = 4 case, in which case the potential is V = V 0 χ 4 . In this case, the spectral index of the primordial scalar curvature perturbations and the tensor-to-scalar ratio are equal to, inflationary observational indices lie within 0.960 ≤ n s ≤ 0.970 and 0.049 ≤ r ≤ 0.065 respectively. Thus the model becomes viable (with respect to the Planck 2018 constraints) for such narrow parameter space. However as may be noticed that µ 2 Ω 2 (κH 0 ) 1/2 must be fine-tuned within the values 109 and 110.9 to keep the model compatible with Planck constraints. The simultaneous compatibility of n s and r is illustrated in Fig. 12. However the single canonical scalar field theory with a quartic potential yields n s ≃ 0.8677 and r ≃ 0.066 for 60 e-foldings, so the spectral index of the corresponding canonical scalar field theory is excluded by the latest observational data. Hence, the presence of ghost free f (G) theory modifies the quartic scalar field theory and enhances the phenomenological viability of the model. D. Stability of First Order Perturbations for the Exponential Cosmological Evolution In this subsection we shall study stability of the first order perturbation cosmological perturbations, following the work of [43][44][45][46][47][48] where scalar, vector and tensor perturbations are calculated in the context of Gauss-Bonnet theory. Scalar, vector and tensor perturbations are decoupled, as in general relativity, so that we can focus our attention to tensor and scalar perturbations separately, as discussed in what follows. Let us consider first tensor perturbations, which the flat FRW perturbed line element has the form, where f ij (t, x) is the tensorial perturbation satisfying f i i = f ij ,j = 0. Plugging back the above metric into the original action and expanding, keeping terms up to O(f 2 ) (to obtain the first order equations), we get the following perturbed action [43,47,48], where we use the background equations of motion. With the Fourier decomposition as f ij (t, x) = dkf ij (t)e i k. x , the above perturbed action takes the following form, Thereby, the tensor perturbation is ghost free and stable if the following two conditions hold true, 1 + 8κ 2ḣ H >0 , and are satisfied simultaneously. If we assume that the slow-roll conditions of Eq. 
(85) hold true, the coupling function h(χ) rolls slowly if it obeys ḣ H ≪ 1/κ 2 and ḧ ≪ 1/κ 2 . We have shown in previous sections that a phenomenologically viable cosmological evolution also satisfies these constraints if the free parameters are chosen appropriately, so in view of Eq. (111) we may conclude that the tensor perturbations are ghost free and stable, at least at first order. So the theory is compatible with the observational data and stable up to first order cosmological tensor perturbations. Now let us turn our focus to scalar perturbations on the FRW background spacetime, in which case the line element is, with Ψ(t, x) being the scalar perturbation. Following [43], the perturbed action up to order O(Ψ 2 ) is equal to, where Z 1 and Z 2 are defined as follows, Clearly the scalar perturbation is ghost free and stable if Z 1 and Z 2 are both positive. With the slow-roll criteria taken into account, and the corresponding field equations, the positivity of Z 1 , Z 2 is guaranteed if the following two conditions hold true,Ḣ Now let us proceed to explore whether, for our considered choice of coupling or potential function, the above two conditions are in agreement with the Planck 2018 constraints. The first condition is satisfied for Ω > 0 ( asḢ = −ΩH 0 e −Ωt ) and simply gives the information that the Hubble parameter must decrease with cosmic time at the early universe, which is also expected in an inflationary scenario. On the other hand, it is shown that all the previous four cases (see Sections from [VI A] to [VI C 2]) need Ω > 0 in order to be compatible with Planck constraints and thus one of the stability condition of scalar perturbation is ensured. Now let us investigate the second condition case by case: for the exponential coupling i.e. h(χ) = e −αχ ,ḧ −ḣH becomes positive for α > 0 which is also needed to make the model observationally viable (as shown in Section [VI A]). To investigate what happens in the power-law case of h(χ), we provide the plot ofḣ Ḧ h as a function of the e-foldings number in Fig. 13. As it can be seen in Fig. 13, the ratioḣ Ḧ h remains less than unity for all the parameter values that render the theory compatible with the latest Planck data. Thus this ensures numerically the stability of the scalar perturbations. E. Reheating mechanism for the exponential cosmological evolution Before moving to the conclusion section, here we discuss the phenomenological implications of the theory we studied in the reheating era, and the possible effects of Gauss-Bonnet coupling on it. Needless to say that reheating describes the production of Standard Model matter after the period of accelerated expansion. For this purpose, we assume that the inflaton field (i.e the field χ) is coupled to another scalar field ζ, given by the interaction Lagrangian, where g is a dimensionless coupling constant and λ is a mass scale. The scalar field ζ quantifies Standard Model particles in our case study. With this interaction Lagrangian, the decay rate of the inflaton into ζ particles becomes, where m denotes the mass of the inflaton field and can be obtained from the effective potential V ef f (χ) = V (χ) − 24H 2 (H 2 +Ḣ)h(χ) through which the Gauss-Bonnet coupling function (h(χ)) affects indirectly the reheating mechanism. Moreover, the presence of Gauss-Bonnet term also affects the self-potential functionṼ (χ) as may be noticed in Eq. (16) ( see the terms dependent on h(χ) in the right hand side of Eq. (16)). 
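A compact numerical sketch of this reheating estimate is given below. Since the explicit forms of the potential and of the decay rate are not reproduced here, the snippet uses a simple placeholder potential, assumes the standard tree-level decay rate Γ = g²λ²/(8πm) (consistent with the reheating-time expression derived in the text below), builds the effective potential V_eff(χ) = V(χ) − 24H²(H² + Ḣ)h(χ) at a fixed background moment during inflation, and then solves H(t_h) = Γ for the reheating time; all parameter values are illustrative.

```python
# Sketch of the reheating estimate: stable point and mass of the inflaton from
# V_eff(chi) = V(chi) - 24 H^2 (H^2 + Hdot) h(chi), then the reheating time from
# H(t_h) = Gamma with H(t) = H0 exp(-Omega t). Placeholder potential, assumed
# tree-level decay rate Gamma = g^2 lam^2 / (8 pi m), illustrative values only.
import numpy as np
from scipy.optimize import minimize_scalar

H0, Omega = 1.0, 0.02            # Hubble scale and decay constant (arbitrary units)
h0, alpha = 0.1, 0.5             # exponential coupling h(chi) = h0 * exp(-alpha*chi)
g, lam = 1.0e-3, 1.0e-3          # couplings of the inflaton to the zeta field

V = lambda chi: 0.5 * chi**2                 # placeholder inflaton potential
h = lambda chi: h0 * np.exp(-alpha * chi)

# background evaluated at a fixed moment during inflation, Hdot = -Omega * H
H_bg = 0.5 * H0
Hdot = -Omega * H_bg
V_eff = lambda chi: V(chi) - 24.0 * H_bg**2 * (H_bg**2 + Hdot) * h(chi)

chi_min = minimize_scalar(V_eff, bounds=(-10.0, 10.0), method="bounded").x
eps = 1.0e-4                                 # finite-difference second derivative
m2 = (V_eff(chi_min + eps) - 2.0 * V_eff(chi_min) + V_eff(chi_min - eps)) / eps**2
m = np.sqrt(m2)

Gamma = g**2 * lam**2 / (8.0 * np.pi * m)    # assumed tree-level decay rate
t_reheat = np.log(H0 / Gamma) / Omega        # from H0 * exp(-Omega*t) = Gamma
print(f"stable point {chi_min:.4f}, inflaton mass {m:.4f}, reheating time {t_reheat:.1f}")
```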
Generally during the reheating epoch, the inflaton losses energy due to the expansion of the Universe, and due to transfer of energy to the ζ particles, controlled by the Hubble parameter and the decay rate respectively. As a result the production of ζ particles becomes effective when the Hubble parameter becomes less or comparable to Γ, otherwise the energy loss into particles is negligible compared to the energy loss due to the expansion of space as occurred during the early phases of the inflation. Therefore, the time scale t h (let us call it the reheating time) after when the production of ζ becomes effective is given by, where we used the form of the functionṼ (χ) as obtained in Eq. (78). Consequently the stable point (< χ > (ec) , where the notation "ec" stands for "exponential coupling") of V ef f can be determined by the following algebraic equation, The presence of h 0 in the above expression entails that the Gauss-Bonnet coupling indeed affects the stability of the inflaton. In order to understand the effect of the GB coupling more clearly, we write < χ > (ec) =< χ > 0 + < δχ > (ec) , where < χ > 0 is the stable point of χ in absence of GB term (h 0 = 0) i.e., Thus < δχ > (ec) is the deviation of stable point from χ > 0 solely due to the presence of the Gauss-Bonnet term. Expanding Eq. (120) in terms of < χ > (ec) =< χ > 0 + < δχ > (ec) , we get the following expression for < δχ > (ec) , where we kept terms up to first order in < δχ > (ec) and we also assumed Ω αµ 2 > 1, which is also consistent with the Planck observations, as mentioned earlier in Section[VI A]. Clearly < δχ > (ec) becomes zero as h 0 → 0, as expected. Eqns. (121) and (122) immediately lead to the stable point of V ef f in presence of Gauss-Bonnet coupling, which is, Using the above expression for < χ > (ec) , we determine the mass squared of the inflaton (m 2 (ec) ) for the case of exponential coupling function, which is, Thus in the absence of the Gauss-Bonnet term (i.e for h 0 = 0), m 2 (ec) becomes m 2 (ec) = 2Ω 4 µ 4 κ 2 which is also consistent with Eq. (121). However the presence of exponential coupling affects the inflaton mass by the factor proportional to h 0 , as is evident from Eq. (124), in particular the mass increases due to the presence of the Gauss-Bonnet term, compared to the case where h 0 = 0. Having the explicit expression of m 2 (ec) (see Eq. (124)) at hand, now we can determine the reheating time by using Eq. (118), which is, Thus we can argue that the presence of the exponential GB coupling function, enhances the mass of the inflaton which in turn makes the reheating time larger compared to the situation where the Gauss-Bonnet term is absent. For the cubic coupling (h(χ) = h 0 χ/M 3 ), the effective potential of the inflaton is equal to, Following the same procedure as above, we determine the stable point of the effective potential and the mass of the inflaton field, in the case of cubic coupling, which are, < χ > (cc) = µ 2 Ω ln 3H 0 /Ω − respectively, where x 0 = ln 3H 0 /Ω and the notation 'cc' stands for "cubic coupling". Thereby, the presence of cubic GB coupling function makes the inflaton mass larger relative to the situation where the GB term is absent. As a consequence, the reheating time t (cc) h = 1 Ω ln 8πm (cc) H0 g 2 λ 2 also increases due to the effect of the cubic coupling function, similar to the case of the exponential coupling we discussed earlier. Before closing, let us comment on an interesting issue, related to previous works in the field. In Ref. 
[49], the authors calculated the observational indices of inflation for a generalized Galileon theories, however these theories are quantitatively different to a great extent from the theory we developed in this paper. Particularly, the theory at hand with action (4) can be treated at a quantitative level as an generalized Einstein-Gauss-Bonnet theory of gravity, which is entirely different from the Galileon models studied in Ref. [49]. At a quantitative level, the theories developed in Ref. [49], allow the derivation of general forms of the observational indices, however in our case, and in Einstein Gauss-Bonnet models, it is hard to derive general relations for the observational indices. This is because the latter depend strongly on the choice of the Gauss-Bonnet scalar coupling function h(φ). Thus the results are strongly model dependent, as we evinced in the previous sections, for example in Section VI A with h(χ) = e −αχ or in Section VI B with h(χ) = χ/M n . As we have shown, the quadratic coupling is not viable although the exponential one in section VI A is viable. We have further extended our discussion to investigate the possible effects of GB coupling function on reheating mechanism, unlike to [49]. VII. CONCLUSIONS In this work we studied the inflationary phenomenology of a recently developed ghost-free f (G) model of gravity. Particularly, the form of the model mimics the scalar Einstein-Gauss-Bonnet theory, so we employed the formalism of cosmological perturbations for the latter theory, in order to calculate the slow-roll indices and the corresponding observational indices for the theory at hand. The model has rich phenomenology due to the presence of a freely chosen function h(χ), in which case by choosing this function and the Hubble rate, the observational indices can be calculated easily. We examined three types of inflationary cosmic evolution and functional forms of the function h(χ), and as we demonstrated it is possible to have a viable inflationary era, compatible with the latest observational data. Particularly we used de Sitter, quasi-de Sitter and exponential cosmological evolutions, and also exponential and power-law functional forms for the function h(χ). The simple de Sitter evolution leads in some cases to problematic phenomenology, however no realistic cosmology gives the exact de Sitter evolution, so we investigated the quasi-de Sitter case, in which case the viability of the theory with the observational data comes more easily. The same applies for the exponential cosmological evolution. For the exponential Hubble rate case, we also tested the stability of the first order scalar and tensor cosmological perturbations, and as we demonstrated these are stable for the same range of values of the free parameters, for which the phenomenological viability of the model is ensured. Finally we explore the reheating mechanism and the possible effects of Gauss-Bonnet term on it for the case of exponential Hubble rate. As a result we found that the presence of GB coupling, in particular the exponential and cubic coupling function, enhance the mass of the inflaton which in turn makes the reheating time larger compared to the situation where the Gauss-Bonnet term is absent. In this work we mainly focused on realizing inflationary evolutions, however it is also possible to realize non-singular cosmological evolutions, such as cosmological bounces, however we defer this task to future work.
11,233.6
2019-06-30T00:00:00.000
[ "Physics" ]
Topological Anderson Insulator in Cation-Disordered Cu2ZnSnS4 We present the first candidate for the realization of a disorder-induced Topological Anderson Insulator in a real material system. High-energy reactive mechanical alloying produces a polymorph of Cu2ZnSnS4 with high cation disorder. Density functional theory calculations show an inverted ordering of bands at the Brillouin zone center for this polymorph, which is in contrast to its ordered phase. Adiabatic continuity arguments establish that this disordered Cu2ZnSnS4 can be connected to the closely related Cu2ZnSnSe4, which was previously predicted to be a 3D topological insulator, while band structure calculations with a slab geometry reveal the presence of robust surface states. This evidence makes a strong case in favor of a novel topological phase. As such, the study opens up a window to understanding and potentially exploiting topological behavior in a rich class of easily-synthesized multinary, disordered compounds. Introduction Topologically non-trivial materials present a novel and exciting field of research in condensed matter [1]. They are valued both for their importance to fundamental science as exotic states of quantum matter as well as their inherent potential for application in new and future technologies including thermoelectrics [2][3][4], spintronics [5][6][7], and quantum computation [5,6,8]. Starting with the discovery of the Quantum Hall Effect (QHE) by von Klitzing et al. [9], this class of materials has grown to include many candidates in 2-, 3-, and higher dimensional systems, a growing (albeit still small) fraction of which have been experimentally realized. Three-dimensional (3D) topological insulators (TIs) present a sub-class of these exotic materials. They may generally be described as hosting insulating bands in the bulk with band inversion at high-symmetry points, coupled with symmetry-protected gapless surface states [10]. In the absence of symmetry breaking, these surface states support high-mobility electron transport along specific directions on the surface, without backscattering. Large spin-orbit coupling (SOC) was originally understood to be driving the topologically non-trivial behavior [11][12][13][14][15]. Subsequently, Fu [16] demonstrated that topological surface states can also be protected by crystalline symmetries in the absence of SOC (topological crystalline insulators). This allows for topologically non-trivial materials with a weak SOC [17]. The possibility of TIs in the quaternary chalcogenide class has been investigated by Chen et al. [18], using density functional theory (DFT) band structures. They showed that HgTe, a 3D semimetal with the zinc-blende structure, may be transformed into a TI by introducing a strong crystal field splitting (∆ CF ). This can be achieved either by epitaxial straining or by substituting two group-II Hg ions with one group-I ion and one group-III ion. The latter approach results in ordered I-III-VI 2 chalcopyrites. Subsequently, by replacing two group-III cations with one group-I and one group-II cation, the I 2 -II-IV-VI 4 chalcogenides are obtained. The non-trivial band gap of these materials could be increased further via a simultaneous increase in ∆ CF and the band-inversion strength (BIS). In a contemporaneous study, Wang et al. [19] performed a DFT-based screening of several ternary famatinite and quaternary chalcogenides for TIs and were able to identify several naturally occurring, Cu-based 3D TIs. 
Unsurprisingly given its weak SOC, Cu 2 ZnSnS 4 (CZTS) was found to be topologically trivial, although the authors showed that it could be 'transformed' into a TI by changing the atomic number of the cations, which was manifested as a doping effect evolving toward the ternary TI Cu 3 SbS 4 . Topological insulators, including the multinary compounds mentioned above, are generally known to host a bulk band gap coupled to gapless surface states, which are robust to weak levels of disorder. Several recent studies have highlighted how TI behavior can exist in aperiodic systems such as quasicrystals [20] and can persist in systems with bulk defects such as grain boundaries and vacancies below a certain threshold [21,22]. Nevertheless, sufficiently strong disorder is expected to close the bulk gap and destroy all topological features [10,22]. In light of this, a surprising prediction was made by Li et al. [23], who claimed that adding disorder to otherwise trivial systems can lead to the emergence of topological behavior. The authors showed that disorder-induced Anderson localization may lead to a renormalization of the topological mass of the charge carriers via the band structure, causing a transition from a topologically trivial phase to a TI. This gives rise to the so-called Topological Anderson Insulator (TAI) phase. TAIs have been theoretically shown to be feasible by introducing disorder into trivial 3D systems close to a topological phase [24,25]. TAI behavior was first demonstrated by Meier et al. [26] using quantum simulations in a metamaterial consisting of a 1D chain of ultracold rubidium atoms. Subsequently, in a recent article by Nakajima et al. [27], using a Thouless pump realized with ultracold ytterbium atoms on a dynamical optical lattice, the authors demonstrate a disorder-induced pumping, with a topologically trivial phase in the clean limit driven to a non-trivial phase due to quasi-periodic disorder. However, to date, evidence of TAI phases in a real material remains conspicuously absent. Crucially, the quaternary chalcogenides screened in the aforementioned studies including CZTS are all ordered tetragonal structures. However, it is known that CZTS crystallizes in multiple polymorphs. In a recent study, Isotta et al. [28] demonstrated remarkably improved thermoelectric properties in a cubic polymorph of CZTS with complete cation disorder. This polymorph, which was synthesized using high-energy reactive mechanical alloying (ball-milling), shows a simultaneous improvement in both Seebeck coefficient and electrical conductivity, as well as a lower thermal conductivity compared to the ordered tetragonal polymorph. Using the DFT band structure calculations, we argue that introducing full cation disorder in CZTS drives it into a TAI phase; experimental measurements of electrical resistivity and carrier mobility are in agreement with a surface contribution to transport, which such a phase is expected to host. As such, we present the first concrete prediction of a TAI in a material, opening up myriad possibilities to investigate topologically non-trivial behavior in disordered quaternary compounds and their potential effect on thermoelectric performance. Computational Methodology The ab initio electronic structure calculations have been performed using the plane wave basis set implemented in the Vienna ab initio simulation package (VASP), version 5.4.4, Vasp Software GmBH [29,30]. 
The electron-exchange correlation functional was approximated using the Perdew–Burke–Ernzerhof (PBE) [31] form of the generalized gradient approximation (GGA). It should be noted that the GGA tends to underestimate the band gap for most compounds, which may be corrected using computationally expensive hybrid functionals [32]. However, a hybrid Hartree-Fock/DFT study [33] has established that the band topology for both computational schemes (hybrid and PBE) is very similar for CZTS. The hybrid functional only shifts the conduction bands to a higher energy, justifying our use of the standard, computationally inexpensive PBE functional. In order to preclude a spurious negative band gap, we have also performed a single-point calculation with the HSE06 functional, with a 25% contribution from the exact Fock exchange energy. All calculations were performed with an energy cutoff of 450 eV and Gaussian charge smearing on the order of 0.01 eV. Calculations for CZTS were performed both with and without spin-orbit coupling, while calculations for CZTSe were made only with SOC. The geometry was optimized with a 2 × 2 × 2 Monkhorst-Pack (MP) Γ-centered k-mesh for 64-atom supercells until the Hellmann-Feynman forces on each atom were converged to below 0.01 eV/Å. SCF calculations were made with a similar 4 × 4 × 4 k-mesh, with electronic degrees of freedom relaxed until the change in the total free energy and energy eigenvalues were both smaller than 10^−6 eV. Calculations for surface states were performed within a surface slab geometry, with a 10 Å vacuum layer in the Z-direction to minimize the interaction between periodic copies. Only the top three layers of the slab were allowed to relax, with lower layers held fixed. Geometry optimization for the surface slab was made with a 2 × 2 × 1 MP k-mesh, while SCF calculations used a 4 × 4 × 2 mesh. For band structure calculations, the high-symmetry path in the Brillouin zone was obtained using SeekPath [34]. VESTA [35] was used for visualizing atomic geometries. The disordered geometries are constructed by generating a pseudorandom number between 1 and 32 and assigning each cation to the corresponding serially numbered cation site.

Band Inversion in the Bulk CZTS is a quaternary chalcogenide compound extensively investigated for its potential applications, primarily in photovoltaics [36][37][38][39][40][41], and more recently thermoelectrics [28,[42][43][44][45][46][47]. The most ubiquitous polymorph of CZTS is the kesterite structure (Figure 1a), which crystallizes in the tetragonal I-4 space group. The structure may be described as alternating layers of cations and sulfurs, with a further alternation in the cation layers, which are either composed of Cu and Zn or of Cu and Sn. Above 533 K, this I-4 structure undergoes the so-called order-disorder transition [48] into a tetragonal I-42m phase. In this structure, the disorder is manifested through an in-plane randomization of the cations in the Cu-Zn layer (Figure 1b). This disorder induces a narrowing of the band gap compared to the ordered tetragonal polymorph (Figure 1d,e), while the increase in global symmetry introduces a three-fold degeneracy at the center of the irreducible Brillouin zone (Γ-point) in the valence band maximum (Figure 1e). In a recent article, Isotta et al. [28] have presented the synthesis of a novel polymorph of CZTS, this time with full cation disorder. This is manifested as a complete randomization of atoms in the cation positions (Figure 1c).
This polymorph was found to crystallize in the cubic zinc-blende/sphalerite structure with space group F-43m and to remain stable up to 673 K, when it transitions to the tetragonal polymorph. Electronic structure calculations revealed the presence of significant inhomogeneous bonding. This removes the three-fold degeneracy present in the bands (Figure 1f) of the disordered tetragonal polymorph, via strong crystal field splitting, while opening up the band gap somewhat (see Supplementary Note 1).

Figure 1. Orange balls refer to copper ions, red balls refer to tin, gray balls refer to zinc, and yellow balls refer to sulfur; green arrows identify the ordered layers, while red arrows show the layers with cation randomization; (d) the bands for ordered tetragonal CZTS; (e) the bands for disordered tetragonal CZTS; and (f) the bands for cubic CZTS; green circles correspond to the dominant contribution from sulfur-p orbitals, while red circles correspond to the contribution from copper-d orbitals; the blue box highlights the region of band inversion in cubic CZTS.

A common feature of all three polymorphs of CZTS is that the states in the valence band maximum (VBM) are dominated by the Cu-d electrons, while those in the conduction band minimum (CBM) are mainly derived from S-p orbitals (Figure 1d-f). However, upon closer inspection of the projected bands for cubic CZTS, we observe that the order of the bands is reversed at and around the Γ-point, with an inversion in the Cu-d and S-p orbitals (Figure 1f) (see Supplementary Note 1, Figure S1 for a discussion of the band gap).

True disorder in ionic positions is of course rather difficult to simulate within the size constraints of a DFT supercell with periodic boundary conditions, which impose a long-range order on the system. In order to ensure that the band inversion is not an accidental artifact but rather a property of the system, we have calculated the band structure for a further nine different configurations of cubic CZTS (Supplementary Figure S2), with Cu, Zn, and Sn ions randomly assigned to each cation position, while maintaining the Cu2ZnSnS4 stoichiometry (see Table S1 in the Supplementary Materials File for the energies of each configuration). The lowest energy configuration is shown in Figure 1c,f. Additionally, in order to discard the possibility that an underestimation of the band gap by the PBE functional leads to a spurious band inversion, a single-point calculation was performed using the computationally expensive HSE06 hybrid functional. This confirms a negative band gap on the order of −0.12 eV.

The features of the bands are necessarily somewhat different from each other. This is because each configuration of disorder generates a different kind of inhomogeneity in the charge distribution, leading to different levels of crystal field splitting. Crucially, however, band inversion is present in every case, which is coupled in most cases with an anti-crossing (camel's back) feature at the VBM and CBM. In fact, the band-inversion strength, as defined by the energy difference between the lowest inverted S-p level in the valence band and the highest Cu-d level, is found to be reasonably positively correlated with ∆CF, with a Pearson's r value of 0.84 (Figure 2a).

While SOC is known to play a driving role in most topologically non-trivial systems, previous studies [49] have shown that it is negligible for tetragonal CZTS. We confirm that this remains the case in the cubic polymorph. Including SOC in the calculation does not significantly alter the nature of the bands (Supplementary Figure S3b) at the valence and conduction band extrema compared to the bands obtained without SOC (Supplementary Figure S3a). Band inversion remains intact in both cases, suggesting that SOC might not be the main feature driving the system into a TI phase.

Figure 2. (a) Correlation between band inversion strength and crystal field splitting; (b) the local potential in ordered tetragonal (black line) and cubic (red line) CZTS along the x-direction (above) and the difference between the two (below); (c) the local potential in ordered tetragonal (black line) and cubic (red line) CZTS along the y-direction (above) and the difference between the two (below).
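To make the configuration-sampling step concrete, the following is a minimal Python sketch of how randomized cation arrangements of this kind could be generated, in the spirit of the procedure described in the Computational Methodology. The 32-site cation sublattice of a 64-atom supercell, the 2:1:1 Cu:Zn:Sn ratio and all function names are illustrative assumptions; this is not the authors' actual tooling.

```python
import random

def random_cation_assignment(n_sites=32, seed=None):
    """Randomly assign Cu, Zn and Sn to serially numbered cation sites while
    preserving the Cu2ZnSnS4 stoichiometry (2:1:1 Cu:Zn:Sn).

    n_sites = 32 corresponds to the cation sublattice of a 64-atom supercell;
    both numbers are illustrative assumptions."""
    assert n_sites % 4 == 0, "the 2:1:1 stoichiometry needs a multiple of 4 sites"
    pool = ["Cu"] * (n_sites // 2) + ["Zn"] * (n_sites // 4) + ["Sn"] * (n_sites // 4)
    rng = random.Random(seed)
    rng.shuffle(pool)   # pseudorandom permutation over the numbered sites
    return pool         # pool[i] is the species placed on cation site i + 1

# e.g. ten independent configurations, to be relaxed and checked for band inversion
configs = [random_cation_assignment(seed=s) for s in range(10)]
print(configs[0][:8])
```

Each such list would then be mapped onto the cation sublattice of the supercell (for example, when writing a POSCAR-style structure file) before relaxation and band-structure calculation.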
Instead, we assert that it is the large ∆CF that causes the inversion and opens up a non-trivial band gap. This is in line with the arguments proposed by Chen et al. [18] in the case of strained HgTe and tetragonal ternary chalcopyrites and quaternary chalcogenides. Here, the ∆CF is a result of inhomogeneous bonding, arising from the full cation disorder in cubic CZTS. In light of this disorder-induced topological transition, we propose cubic CZTS as a candidate Topological Anderson Insulator.

It is well known [50] that an inverted band structure corresponds to a negative (topological) effective fermion mass, m. In the TAI phase, Groth et al. [24] have demonstrated that this inversion is obtained as a result of elastic scattering from a disorder potential, which leads to states with a definite momentum decaying exponentially as a function of space and time. When the effective Hamiltonian of the disordered system acts on the exponentially decaying state, it adds a negative correction δm to the effective mass. This renormalized topological mass m' = m + δm can have a sign that is opposite to that of the bare mass m, corresponding to a band inversion. The low energy Hamiltonian H of a general 3D topological Anderson insulator was written by Guo et al. [25] as H = H_0 + Σ_j Ψ_j† U_j Ψ_j (Equation (1)), where H_0 is the Hamiltonian of the ordered (trivial) system, Ψ_j is the overall wave function at the j-th lattice site, and U_j is the on-site disorder or Anderson potential.
By definition, the Anderson potential must vary randomly within the crystal lattice, and it will correspond to a random component in the local potential energy in addition to the periodic component due to the crystal lattice. In order to compare the differences in the potential for the different polymorphs, we compute the local potential energy along the X-, Y-, and Z-directions in the CZTS supercells. It is evident from Figure 2b,c (and Supplementary Figure S4a) that the potential in the ordered tetragonal polymorph (black curve) exhibits a highly periodic nature. Instead, the cubic structure (red curve) deviates significantly from this periodicity. This is quite similar to the maximum quasi-periodic disorder of the Thouless pump reported by Nakajima et al. [27], which drives a trivial phase into a non-trivial one. As such, we assert that the potential in the disordered polymorphs can be safely approximated to be the potential of the ordered structure plus a modifying potential due to disorder, which is in the spirit of Equation (1). Then, this modifying term is given by the difference between the ordered and disordered potentials, as seen in Figure 2b,c (and Supplementary Figure S4b).

Critically, it has been demonstrated [51] that bond disorder, which adds random hopping terms to the Hamiltonian and is present in many material systems, cannot drive a system into the TAI phase. As such, the bonding inhomogeneity prevalent in disordered CZTS cannot be held responsible for the non-trivial nature of the system. Instead, it is an independent by-product of the same random on-site cation disorder potential, which also gives rise to the TAI behavior. This is evident from Figure 2b,c, which show the random disorder potential in the x- and y-directions, respectively. These directions are not in fact the bonding directions in CZTS, thus putting our results in agreement with those of Song et al. [51]. Meanwhile, Girschik et al. [52] have suggested that any long-range correlations in the disorder potential might lead to a strong suppression of the TAI phase. Such correlations can be reasonably precluded from our system by considering the global nature of the disorder in CZTS. This constitutes a total randomization of atomic species in the cation lattice sites. Thus, the nature of the disorder prevents long-range correlations, instead promoting short-range, random variations of the local potential and allowing for the TAI phase to manifest. Then, it is clear that the modifying potential is a highly random and aperiodic short-range on-site potential. This makes it a suitable candidate for the on-site disorder potential in the theory of TAIs. Given the previous predictions of the closely related Cu2ZnSnSe4 as a TI [19], and the presence of a strong Anderson potential in the cubic polymorph, we put forward that ordered CZTS is driven into the disordered TAI phase. This is the result of introducing a high level of cation disorder, such as can be achieved through high-energy ball milling.

Adiabatic Continuity The presence of band inversion in the bulk is considered a necessary but not sufficient condition for the existence of a topological insulator phase [53]. To this end, adiabatic continuity arguments have emerged as a powerful tool to characterize the topological nature of materials through ab initio calculations, and they have been used to predict new TI phases [54][55][56][57][58]. The process involves connecting a known topological material to a new structure through a series of adiabatic changes.
These include straining the crystalline lattice, tuning the strength of the SOC, and modifying the nuclear charge of constituent atoms within the constraint of overall charge neutrality [53]. If the Hamiltonian of this new system can be adiabatically connected to that of the known TI via some combination of the aforementioned ways without inducing a band inversion or a closing of the gap, the new material can be considered to be topologically equivalent to the known material and thus also a TI. Previous studies [18,19] have adiabatically connected quaternary chalcogenides to the known TI HgTe (strained) via both ternary famatinites and chalcopyrites. Of these compounds, the closest to our present case is the proposed TI [19] Cu2ZnSnSe4 (CZTSe) with an I-42m tetragonal stannite structure. Starting from this structure, we are able to transition to a fully disordered CZTSe with a cubic F-43m lattice by introducing randomization. This is done by interchanging the coordinates of a single pair of cations at a time. Given that both stoichiometry and charge remain conserved overall, such a transition corresponds to an adiabatic change of the total Hamiltonian of the system (see Table S2 in the Supplementary Materials File for the energies of the intermediate configurations in the transition). Subsequently, we replace selenium ions with sulfur in the anion position, thereby transitioning into our cubic CZTS. This, once more, is an adiabatic transition given that both S and Se are group VI elements with identical s2p4 outer-shell electronic configurations. The extra contribution of Se is only through fully occupied core levels that lie well away from the Fermi energy. As Figure 3 demonstrates, this entire transition can be made without closing the inverted band gap at the Γ-point. Thus, we can conclude that the disordered cubic polymorph of CZTS is in fact topologically connected to the previously predicted TI CZTSe in the adiabatic limit (see Supplementary Figure S5 for band structures of the entire transition).
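The stepwise randomization invoked in this adiabatic argument can likewise be sketched in a few lines of Python: starting from an ordered cation arrangement, a single pair of unlike cations is interchanged at each step, so that stoichiometry and charge are preserved throughout. The starting sequence, the number of steps and the helper name below are purely illustrative assumptions.

```python
import random

def adiabatic_swap_path(ordered, n_steps=12, seed=0):
    """Generate intermediate cation configurations by interchanging the
    positions of a single pair of unlike cations at each step, so that
    stoichiometry and overall charge are preserved throughout."""
    rng = random.Random(seed)
    current = list(ordered)
    path = [list(current)]
    for _ in range(n_steps):
        i, j = rng.sample(range(len(current)), 2)
        while current[i] == current[j]:          # swap only unlike species
            i, j = rng.sample(range(len(current)), 2)
        current[i], current[j] = current[j], current[i]
        path.append(list(current))
    return path

# Illustrative ordered stannite-like cation sequence for 32 cation sites
ordered = ["Cu", "Cu", "Zn", "Sn"] * 8
steps = adiabatic_swap_path(ordered)
print(len(steps), steps[-1][:8])
```

In the actual calculation, each intermediate configuration would be relaxed and its band structure inspected to verify that the inverted gap at the Γ-point never closes along the path.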
It should be noted that the high degree of cation disorder implicitly increases the global symmetry of CZTS from tetragonal to cubic with two interpenetrating sub-lattices of cations and sulfur anions. This structure lies in the same F-43m space group as HgTe, which is the parent compound of this family of adiabatically connected 3D TIs.

Topological Surface States Further evidence to verify the topologically non-trivial nature of a material can be obtained by characterizing the topologically protected gapless surface states, which are guaranteed through bulk-boundary correspondence. The calculation of these states from first principles is computationally demanding, and the results can be distorted by spurious gaps due to interactions between periodic copies [53]. In order to reduce these artefacts, we calculate the surface states for a 001 surface with sulfur termination within a slab geometry, with a large (10 Å) vacuum introduced in the Z-direction (see Supplementary Figure S6 for the simulation cell). Other than the expected quasi-gapless state at the Γ-point, our calculations point to a further two states at the high-symmetry R and V points (Figure 4a). This is in agreement with the requirement for an odd number of Dirac points. It must be mentioned that these states tend more toward a parabolic curvature rather than the well-known linearly-dispersing Dirac cones. This "quadratic band touching" was predicted by Fu [16] using a tight-binding model for spinless fermions. In real materials, this corresponds to systems with weak SOC. These so-called "Schroedinger paraboloids" have recently been reported in novel topological semi-metals, namely the Weyl semi-metal candidate SrSi2 [59] and the so-called Schroedinger semimetal Be2P3N [60]. Both these materials, interestingly, show weak SOC similar to cubic CZTS.

Topological surface states can be sharply distinguished from well-known trivial surface states in semiconductors/insulators. The latter are less robust and can be removed via surface deformation [53].
Interestingly, we find that the topologically trivial ordered tetragonal CZTS hosts such surface states at the E and C2 high-symmetry points on the 001 surface (Figure 4d). In order to test for robustness, we have deformed the surface by applying a small (1%) in-plane expansive strain (Figure 4e) as well as simply removing a single S atom from the surface layer (Figure 4f). This leads to an opening of the gap in ordered tetragonal CZTS, causing the trivial gapless states to vanish. In comparison, the surface states in the disordered cubic polymorph are found to be significantly resilient (Figure 4c,d) to identical surface treatments, confirming that these states are in fact topologically protected. These robust surface states are expected to support quasi-metallic surface transport. This would contribute significantly to the improved conductivity observed in disordered cubic CZTS [28].

Discussion In general, the evidence presented above, while strongly suggesting that disordered cubic CZTS behaves as a TAI, cannot be considered to be conclusive. Further investigation, both theoretical and experimental, is part of ongoing research. Theoretical calculations using tight-binding models and effective Hamiltonians can be used to calculate the Berry phase and topological invariant, in order to provide a more fundamental understanding of the topologically non-trivial behavior of the material. On the experimental side, conclusive evidence for topological surface states can be obtained with angle resolved photoemission spectroscopy (ARPES). However, the nano-polycrystalline nature of our samples makes ARPES unfeasible. Preliminary electrical measurements indicate an inverse relation between grain size and carrier mobility, suggesting a strong surface contribution (see Supplementary Note 2, Figures S7 and S8). However, further transport measurements at ultra-low temperatures and in the presence of magnetic fields might be used to better characterize the nature of the surface states.

Conclusions In the present article, we propose a possible candidate for a disorder-induced TI material, the so-called Topological Anderson Insulator. High-energy reactive ball milling has recently been used to produce a low-temperature disordered cubic (F-43m) phase of the quaternary chalcogenide Cu2ZnSnS4, with complete randomization in the cation positions. DFT calculations show that this novel disordered polymorph has an inverted band order in the conduction and valence band extrema at and close to the Brillouin zone center, in contrast to the trivial bands of the ordered tetragonal (I-4) counterpart. Furthermore, the band structure of this phase can be connected adiabatically to the bands for ordered tetragonal (I-42m) Cu2ZnSnSe4, which is known to be a 3D TI, without closing the inverted band gap. Surface slab calculations reveal the presence of an odd number (three) of quasi-gapless surface states, which are remarkably robust to surface deformation such as strain and defects. This is in sharp contrast to the fragile surface states in ordered CZTS. The DFT calculations presented here offer a strong argument in favor of this novel topological phase in ball-milled cubic CZTS with full cation disorder. While not claiming conclusive proof, this work opens up a significant possibility for topological matter in the realm of multinary, disordered compounds, where topological behavior is only partially understood.
Such materials, which are easily and cheaply synthesized in comparison to perfect crystals for traditional TIs, open up diverse possibilities not just for fundamental research but also for the application of topological properties, particularly in the area of thermoelectrics.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/nano11102595/s1, Supplementary Note 1: Band gap in disordered CZTS, Figure S1: Band gap of disordered CZTS from (a) Tauc-plot from UV-Vis spectroscopy and (b) calculated density of states; Figure S2: Bands for multiple configurations of disordered cubic CZTS, Figure S3: Comparison of bands for disordered cubic CZTS (a) without and (b) with spin-orbit coupling, Figure S4: (a) Local potential in the Z-direction for ordered tetragonal (black) and disordered cubic (red) CZTS, (b) Difference between the potentials, Figure S5: Adiabatic transition from (a) stannite CZTSe through (k) cubic CZTS; Figure S6: Surface slab geometry for the 001 surface of cubic CZTS with S termination; Supplementary Note 2: Grain size dependence of mobility, Figure S7: (a&b) TEM images and (c) conductivity, (d) mobility and carrier concentration data for different grain sizes of cubic CZTS; Figure S8: XRD patterns for two different samples of cubic CZTS and domain sizes; Table S1: Total ground state energies of the multiple configurations of disordered cubic CZTS; Table S2: Total ground-state energies for the intermediate configurations in the adiabatic transition.

Author Contributions: B.M. was primarily responsible for making the DFT calculations, theoretical analysis, and writing and editing the manuscript. E.I. was primarily responsible for supplying the samples and making the resistivity measurements. C.F. was primarily responsible for making the Hall effect measurements. N.A. was primarily responsible for the experimental setup and participated in interpreting the results. P.S. was primarily responsible for supervising the research and participated in analyzing theoretical and experimental results and editing the manuscript.

Data Availability Statement: The data that support the findings of this study are available from the corresponding authors upon reasonable request.
7,991.2
2021-06-28T00:00:00.000
[ "Physics", "Materials Science" ]
CREAMINKA: An Intelligent Ecosystem for Supporting Management and Information Discovery in Research and Innovation Fields in Universities This chapter presents a new proposal for supporting the management of research processes in universities and higher education centers. To this end, the authors have developed a comprehensive ecosystem that implements a knowledge model addressing three innovative aspects of research: (i) acceleration of knowledge production, (ii) research valorization and (iii) discovery of improbable peers. The ecosystem relies on ontologies and intelligent modules and is able to automatically retrieve information from major scientific databases such as SCOPUS and Science Direct in order to infer new information. Currently, the system is able to provide guidelines to create improbable research peers as well as to automatically generate resilience graphics and reports from more than 17,000 tuples of the ontological database. In this work, the authors describe in detail an important aspect of support systems for research management in higher education: the development and valorization of the competences of students collaborating in research processes and StartUPS of universities. Furthermore, a knowledge model of entrepreneurship (StartUPS) as well as an analyzer of general and specific competences based on data mining processes is presented.

Introduction Entrepreneurial spirit is an old field, but it is continuously re-emerging, and it attracts the attention of scholars, politicians and professionals in different fields of economics, finance, management and sociology [1]. In the last decades, it has been studied as a driving force for development and a key factor in attaining economic growth, creating employment and increasing productivity [2,3]. Nowadays, the theory of entrepreneurship has extended to new concepts where entrepreneurial spirit is known not only for its business success and benefits, but also for the subjective and noneconomic welfare that people can obtain through their skills. Politicians seek to promote entrepreneurial spirit at a macrolevel through education, in the hope that a greater understanding will create more adept entrepreneurs [4]. In this regard, there is an ongoing debate in the academic field about whether it is really possible to teach students how to be entrepreneurs [5]. Besides the creativity and innovation needed to develop entrepreneurial projects and meet their goals, the current fast pace of change in society demands a wide range of skills and competences [6]. In the last few years, higher education institutions at an international level have introduced competences in their educational programs.
For example, during the last 5 years, Spain has produced significant advances in the treatment and evaluation of competences, especially in the field of language teaching [7]; Universidad Politecnica de Madrid (UPM) applied the project-based learning (PBL) approach and analyzed the acquisition of regional and global competences by having industrial engineering students complete a course on project management [8]; they even suggested a framework of learning and evaluation based on competences to facilitate the learning of skills in the development of projects [9]; a study to support skills and competences under the European Union Framework and the Bologna Agreement was also conducted, assisting their evolution through guidance documents that seek to integrate the European systems of higher education and improve the employability of European graduates [10]. Regarding tools to analyze competences, solutions have been suggested that combine questionnaires with information technologies. In [11], the COMET test is suggested; it was developed by the TVET research group at the University of Bremen and is based on a model of competence and measurement through open task tests that admit a variety of solutions, together with the evaluation of their results. As part of the TECH project, students from the universities of Seville and Malaga improved their competences in collaborative work, efficient use of time, management of online resources and others by carrying out collaborative work in mixed groups on an online learning platform [12]. ComProfits is a project financed by the EU that analyzes the concept of an adaptive competence profile platform, whose main objectives are to (i) strengthen the analysis of competences and (ii) improve the quality of staff selection and work performance in the field of IT [13]; another innovative concept for teaching competences with entrepreneurial spirit is open educational practices, which work jointly with a StartUp model and seek to identify the competences that a person has obtained by analyzing the open educational resources (OER) that have been used, through a recommendation system [14]. Another approach applies KIPSSE, a self-reporting instrument to be used in the evaluation of competences in projects developed by university students who take part in online learning projects; it tries to identify knowledge integration skills, project skills and self-efficacy based on the results of the qualitative and quantitative analysis of interviews with the project consultants [15]. In this context, the present work presents a computing strategy for the analysis of the competences and the networks that an individual (student/entrepreneur/professor) has developed. Such a strategy is based on previous work by Salgado et al. [16] for the evaluation of an individual's competences when developing a project, through a trifocal "auto/hetero/co-evaluation" model. The computing model is made up of an ontology that represents the knowledge base of the StartUPS ecosystem and makes it possible to generate inferences, and a schematic and mathematical model that approximates a qualitative and quantitative evaluation of the competences of the different individuals of the innovation ecosystem.
Related work Among the first studies on the computer-supported evaluation of individuals through competences, there is a debate on how to disseminate relevant information to users according to the knowledge generated within an institution or organization, an aspect highlighted in [17], which explains that "one of the challenges of knowledge management is the active and smart dissemination of knowledge to users, without bothering them with information unrelated to their competencies or fields of interest" and suggests a first approximation of a competence-based ontology system that intends to provide assistance in order to increase the productivity of users during their activities according to their profile. In [18], the objective is to design an ontological model, based on the competences of each enterprise, to support decisions at the time of creating collaborative networks within virtual environments called virtual breeding environments (VBEs), whose aim is to enhance the competences of employees. A similar case occurs in [19], where manufacturers and distributors need to cooperate and create production networks; therefore, the authors suggest an approach for the configuration of teams based on competence profiles, applying ontology management, context management and profile elaboration, with the aim of identifying the team members that are the most suitable to carry out a task. When it comes to finding a job, developing a project, implementing a business, etc., one of the concerns of employers, investors and project managers is to identify qualified and committed personnel. How to solve this enigma is also a concern for the educational model, which, besides teaching theory, should also assess the performance of students in life by addressing a new approach based on skills or competences, as mentioned above. Bodea and Dascălu [20,21] suggest e-learning as an appropriate activity for the development of competences. Based on the PM competence catalog, which is in turn based on the IPMA Competence Baseline (ICB), they defined an educational ontology for their SinPers e-learning platform, which is structured as a collection of different educational objects (EOs) that serve as elements for the supervision and evaluation of new competences. As analyzed previously, and as mentioned by Hochmeister and Daxböck [22], "Competence management systems are increasingly based on ontologies that represent competencies within a given domain," and as part of the SeCoMine project, they seek to value user competences based on their contributions and social interactions in online communities by developing a user interface for semantic competence profiles. Regarding work in the field of research, [23] suggests the use of the linked open data (LOD) format to describe the competences of researchers, developing the first workflow to generate semantic user profiles through the analysis of scientific articles with natural language processing, which makes it possible to carry out personalized searches for articles and for researchers competent in specific topics. Regarding competence "measurement" models, there are no generic standards or procedures for evaluation or valuation, and each proposed model is tailored to a specific context and can be extrapolated to others by making appropriate modifications.
As emphasized in [24], for the evaluation of leadership competences under a hypothetical hierarchical scheme, partial least squares (PLS) path models are used, where information is collected with questionnaires based mainly on Likert scales and weightings. As in the previous case, in [25] the procedures and tools used for the evaluation of the clinical competences of Erasmus nursing students (ENS) are made up of questionnaires, where each competence is valued on different scale metrics such as the Likert scale. Schelfhout et al. [26] rely on a model of levels that contemplates domains, subcompetences and scaled behavioral indicators as the basis for giving concrete feedback to students rather than using Likert-scale surveys. Therefore, a mixed study method that combines qualitative and quantitative research techniques (self-assessment/evaluation questionnaires) was used; the validity of this model was evaluated through a confirmatory factor analysis (CFA). According to what has been analyzed, it is possible and applicable to formulate an ontological system of the coworking UPS ecosystem and, based on it, to apply metrics to assess the competences of the agents that are actively involved in it.

StartUPS: an entrepreneurship background The culture of innovation at Universidad Politécnica Salesiana (UPS) seeks to develop a new, more complex concept, formulated in [27], which explains that "the university just like a jungle (ecosystem) takes inert and inorganic elements such as knowledge and science to create thriving ecosystems of living organisms whose interactions make up society." This innovative concept seeks to change the educational linearity that governs classrooms toward the productivity of innovation and creativity in spaces or associative groups that share common and multidisciplinary interests (cowork), that break with what is conventional, and that keep the center of interest in people, the basis of UPS's culture and a primary agent in the interaction and collaboration with diverse talents that seek to transcend social barriers in favor of connectivism [28][29][30], learning to learn [31][32][33] and the common good [34]. The ecosystem of innovation at UPS is intended to be something like a free zone, where the flow of ideas, talents and capital can be maximized in a network of collaborative work. Creating places within the institution to encourage this new university culture has been hard and fundamental work in order to "generate" a new educational model based on an individual's life project; therefore, one of the aims of the StartUPS project is that students and professors from the university integrate all the knowledge they have acquired into real-life projects and develop behavioral, contextual and technical competences [6] within spaces like the "coworks." The coworking UPS project is part of UPS's strategy to become a university of research and innovation, and the culture of entrepreneurship represents a fundamental factor in achieving these objectives in the short and long term. In 2015, a series of agreements to integrate the culture of "project work" were adopted in order to develop measures to promote innovation in UPS. This process of change has been accompanied by training for UPS agents (teachers and students) to develop a culture of entrepreneurship and their project management competences.
The idea of fostering entrepreneurship from project management competences was aimed at creating an Innovation and Entrepreneurship Ecosystem (the coworking StartUPS project). This strategy is part of the implementation processes of Research Groups and Educational Innovation Groups (EIG) at UPS, jointly promoting Research and Educational Innovation, based on the participation of students and teachers who are competent in project management. As mentioned in [6], "the methodology used within the ecosystem to generate the coworking experience is based on the Working with People (WWP) model, aimed at building dynamics of innovation and learning based on projects"; therefore, the executing and catalytic axis of the entire competence assessment within the coworking StartUPS ecosystem is the project. The main idea is to incubate and enhance the abilities of each individual based on the activities that he/she performs within a project or in different proposed events such as boot camps (RECREATE/RETHOS), mini-boot camps, hackathons, workshops, training courses, research groups and others. The components discussed in [6] to sustain the ecosystem are four: a socio-ethical component, a technical-business component, a political-contextual component and an integrating component, social learning, which is oriented to developing a network of entrepreneurship among the university's entrepreneurs through spaces of learning, discussion and reflection generated in different areas of the university with the participation of faculties and courses. This last component is mainly undertaken by the entrepreneurship centers, or coworking spaces, which serve as support to the entrepreneurs and allow their interaction. This way, entrepreneurs find the physical workspace and the necessary advice so that their ideas and learnings are connected with the national and international market. This connects the UPS entrepreneurship ecosystem with the local, national and international levels.

Ecosystem approach The computing model being suggested is part of a more complex system called CREAMINKA, which is a tool designed to support strategic decision-making regarding R + D + i (research + development + innovation) in the university. This component seeks to carry out a specific task: the analysis of the competences/skills of the agents that make up this ecosystem, by applying the corresponding metrics to these skills through indicators that are valued through a mixed evaluation mechanism. As shown in Figure 1, the structure of the ecosystem is organized in four clearly defined layers: (i) the transactional system for StartUPS, (ii) the microservices component, (iii) the triplet repository and (iv) the mobile/web application. The microservices component is the main layer that supports the entire subsystem; its function is to provide the necessary services so that the flows of information can be matched to the different components. The "StartUPS" transactional system stores information on the agents in the ecosystem, such as data on their competences, projects, evaluation/valuation questionnaires, etc. The triplet repository stores the knowledge model of the innovation ecosystem and previously treated data from the transactional system. The mobile/web application is in charge of the interaction with the different agents and of the mechanisms of information input and output.
The microservices component has five specific services:
• The "parser service" microservice is responsible for the translation/transformation of data obtained from transactional/nontransactional data sources into data for the ontological model of triplets.
• The "auth service" microservice has the necessary logic to support the processes of authorization and authentication.
• The "CRUD service" microservice has the task of creating, reading, updating and deleting information.
• The "report service" microservice is responsible for creating the different reports using the data provided by the "data service" microservice.
• The "data service" microservice provides all the information processed thanks to different inference mechanisms, data mining and artificial intelligence.

Competence evaluation model As mentioned in the related work section, there are several models for the analysis or "measurement" of competences. The suggested model is basically based on four "hierarchical" levels and their weighting relations. The levels are made up of: (i) the general (generic) competences, (ii) the specific competences [35][36][37][38], (iii) the indicators and (iv) the trifocal evaluation (auto-hetero-co). The competence evaluation diagram, as illustrated in Figure 2, starts by carrying out the "trifocal" evaluation of the competences of an agent in the ecosystem after having developed a project or completed a set of activities in an event, training course or workshop within the different innovation spaces created by the university. The evaluation model has two instances: it starts from a qualitative valuation, which is subjective, and moves toward an attempt at a quantitative valuation, which is objective, all through the use of weights in the relations that exist between the different levels of the competence diagram. The trifocal evaluation/valuation contains three concepts: (i) heteroevaluation, (ii) coevaluation and (iii) self-evaluation. To begin, there is a questionnaire that contains the battery of indicators to evaluate/value, either for a project or for a set of activities; it should be noted that these indicators already have a defined weighting that refers to their specific competence, in addition to their respective scale of measurement, whether a value scale, a Likert scale or others. The heteroevaluation is given by one or more valuators, who also have a weight when completing their questionnaire, with respect to the set of questionnaires that are generated or completed; a similar case occurs with the process of carrying out the coevaluation questionnaires. Since the self-assessment is filled in by the valued individual, it has its own respective weight. It is important to highlight that each type of evaluation has its respective weighting in the trifocal model; therefore, the heteroevaluation, coevaluation and self-evaluation each have their own weight. The partial results of this trifocal measurement of the indicators depend on the sum of their scaled values multiplied by their weights and by the weights given both to the three types of questionnaires and to the valuators or evaluators. Therefore, the weighted values of the indicators maintain different weighted relations or connections with the different specific competences of the model; in other words, an indicator can be related to one or more specific competences, and in turn, these specific competences, as in the previous case, have one or more connections with the general competences.
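As a way of fixing ideas, the weighted aggregation just described can be written down as a short numerical sketch: indicator scores from the self-, co- and hetero-evaluation questionnaires are combined with questionnaire and evaluator weights, and then propagated through indicator-to-specific and specific-to-general weights. All weights, scales, competence names and data structures below are hypothetical placeholders, not values from the actual system; in practice they would come from the questionnaires and the ontology.

```python
# Minimal sketch of the trifocal weighted aggregation (illustrative values only)

def weighted_mean(scores_and_weights):
    """Return sum(score * weight) / sum(weight) for a list of (score, weight) pairs."""
    total_w = sum(w for _, w in scores_and_weights)
    return sum(s * w for s, w in scores_and_weights) / total_w

# Indicator scores on a hypothetical 1-5 Likert scale; each evaluator has a weight
indicator_scores = {
    "ind_1": {"self": [(4, 1.0)], "co": [(3, 0.6), (4, 0.4)], "hetero": [(5, 1.0)]},
    "ind_2": {"self": [(3, 1.0)], "co": [(4, 0.5), (3, 0.5)], "hetero": [(4, 1.0)]},
}
trifocal_weights = {"self": 0.2, "co": 0.3, "hetero": 0.5}   # hypothetical weights

# fs: weighted indicator score across the three evaluation types
fs = {
    ind: sum(trifocal_weights[t] * weighted_mean(pairs) for t, pairs in by_type.items())
    for ind, by_type in indicator_scores.items()
}

# SCS: specific competence scores from indicator weights v (hypothetical links)
indicator_to_specific = {"teamwork": {"ind_1": 0.7, "ind_2": 0.3}}
scs = {c: sum(v * fs[i] for i, v in links.items()) for c, links in indicator_to_specific.items()}

# GCS: general competence scores from specific competence weights (hypothetical links)
specific_to_general = {"behavioural": {"teamwork": 1.0}}
gcs = {g: sum(w * scs[c] for c, w in links.items()) for g, links in specific_to_general.items()}

print(fs, scs, gcs)
```

In this sketch every set of weights is normalized, so the aggregated scores remain within the original measurement scale, which is what allows the qualitative valuations to be read back as quantitative ones.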
The final result obtained in each branch of the suggested competence evaluation/valuation model depends on the sum of the evaluated results found when applying the different mathematical operations. With the information mentioned above, it is suggested that "the sum of subjectivities (qualitative measurements) enables the attainment of objectivity (quantitative measurements)." Within the process of competence evaluation performed by the CREAMINKA subsystem, the skills that a person has can be qualified based on a scale. In Figure 3, it can be observed how a user of the system has a score for their general skills based on a scale represented by the measure scale (MS); and on the right side, we present how the calculation of the weighting for a general competence is performed. Starting from the right side, the assessment scores (fs) are related to the indicators, considering that the scale of each fs is within the MS elements. Each of the fs scores has a weight v for the calculation of the specific competence scores (SCS), which can also take a value within the MS scale. Finally, each specific competence score has a weight for the calculation of the general competence scores (GCS).

Ontology CREAMINKA's ontology, with its CO prefix, models ecosystems immersed in scientific research and coworking. It is an ecosystem where students, teachers and external collaborators interact within different internal and external processes and events, generating different types of scientific products. In the case of this ontology, all concepts related to the coworking ecosystem will be analyzed, a module that extends the functionalities raised in the preliminary phases of CREAMINKA's ontology, where only scientific research within the research groups was considered. Within the framework of the ontology development, it was decided to reuse ontologies such as FOAF [39], which describes various concepts related to individuals and groups; BIBO [40], which describes bibliographic information of the documents that will be generated; VIVO [41], which describes the research community model and extends some of the ontologies named above; and BFO [42], a high-level ontology for the categorization of concepts that is used very frequently in the ontology reuse phase, when combining ontologies. In the case of the CREAMINKA ontology, concepts such as processes and generic independent entities were used to have a grouping reference framework.

Definition of the ontology The discourse universe D, as seen in Eq. (1), contains all elements of the coworking ecosystem, including evaluation processes, events, classifications of knowledge, scalar measurement units, projects and participation roles. The main unary relations defined in the ontology are:
• Process: indicates entities that occur over time and refer to a material entity.
• Keyword: represents a keyword related to a concept.
• Research line: specific investigation topic of an area.
• Role: quality of a material entity that carries a special circumstance within a context.
• Entrepreneurship project: a process that takes place over time to carry out the entrepreneurship of an idea.
• Research project: a process that occurs over time to carry out an idea related to the research area.
• Prototyping: a subprocess of a project whose purpose is to obtain a subproduct to be valued.
• Assessment process: a process in which the assessment of different indicators is carried out, and which has as output the scores of the indicators in relation to a scale.
• Competence: represents the abilities that a person has to develop something. • Measurement weight: represents the weight relationship that exists between two concepts. The main binary relationships that were modeled are described below: • Has weight: indicates the weight relationship that exists in a class and its weight-class quantifier. • Evaluated: indicates the evaluation process that was carried out on another process. • Apply evaluation format: specifies the evaluation format on which the evaluation process is based. • Score for: specifies the score that an indicator or test has. • Has measurement unit: indicates the unit of measurement used as a reference in a score. • Has indicator: specifies the indicator to which a concept is linked. • Has subprocess: indicates the belonging of a process to a higher process. • obo: participates in: defines the relationship between continuous objects and occurring objects. • obo: barer of: specifies the relationship between a dependent entity and a dependent entity. The set of relations R is defined as seen in Eq. (2): Specification of the subconcepts of unary relationships in ontology as seen in Eq. (3): Specification of domains and ranges of binary relations as seen in Eq. (4): Conceptualization of competence assessment In order to analyze how the different concepts of the developed ontology for the CREAMINKA subsystem interact, we have to separate the several concepts associated at different levels, starting with the conceptualization of the weights that work as a complex relationship between concepts of the different levels of the competences evaluation model. Then, an analysis of how such levels are related within the evaluation model is addressed, in an evaluation process, and the actors involved. Finally, the approach is based on the analysis of how assessments take place within the different processes that normally take place within the ecosystem of a StartUPS. Within the competence assessment model, we intend to move from a qualitative assessment to a quantitative assessment attempt, as mentioned above, whereby the concept that links the different components between levels of the model that are represented as classes is referred to as weight measurement. This is a complex concept since it works as a link entity that qualifies the relationship between two classes, giving weight to the different associated concepts as it can be observed in Figure 4. When analyzing the domain of the relation has weight, we discovered concepts that were implicit in the scheme of the competence evaluation model, the ontology has to consider the evaluator role within the assessment process and link it to a weight. The "assessment process", as seen in Figure 5, includes both the "person" or "persons" who have been evaluated and the evaluator, distinguishing these persons by the role they have within the process. That is how the CO ontology extends the roles raised in VIVO ontology, adding the "Assessed Entity Role" and "Evaluator Role". Evaluator role is not directly related to assessment process, since, as we saw in the previous section, the relationship between these two concepts is complex and they have to quantify that relationship through "Weight Measurement". This evaluation process has to evaluate a process that, within the StartUPS ecosystem, is usually an entrepreneurial project or a subprocess of it, considering the members of the project. 
The evaluation process must "have outputs" that in this case are "scores" of the indicators or "tests" evaluated with reference to a "measurement unit". To classify directly if a score belongs to a partial score or total score, equivalence rules were made in the ontology since if the range that passes through the "score for" is an indicator, it is known that the entity must belong to the partial score; but if the rank entity is test, it is known that it is the total score of the test. The outputs of the evaluation process that are scalar measures have to be referenced with a scale as mentioned above; this role is fulfilled by the concept of measurement unit in which there can exist instances such as Likert scale. When performing an evaluation process, a test that links indicators through the relationship "has indicator" is always taken as reference. Previously, we discussed about the different types of evaluations that formed trifocal evaluation/assessment. Within the CO ontology, this knowledge is inferred through the definition of axioms within the equivalences, to distinguish between three types of processes that are subclasses of assessment process, these equivalence rules are: • Coevaluation: the person who is the bearer of an evaluator role participates in a process by means of a role and the process is evaluated by an assessment process that is linked to the evaluator role, and that person does not have a role that participates in the assessment process. • Self-evaluation: the person who is the bearer of an evaluator role participates in a process through a role and the process is evaluated by an assessment process that is linked to the evaluator role, and that person has a role that participates in the assessment process. • Heteroevaluation: the person who is the bearer of an evaluator role does not participate in a process through a role and the process is evaluated by an assessment process that is linked to the evaluator role, and that person does not have a role that participates in the assessment process. The relationships that the evaluations have within the coworking ecosystem were modeled on the ontology and can be observed in Figure 6, where it can be seen how people fulfill different roles within the ecosystem through a participation relationship within events that can be the different workshops, training courses, boot camps or other instances that match the different events in which the skills acquired through assessment process are evaluated. Added to this, within the processes, we can find the entrepreneurship projects in which people fulfill a role, from these projects subprocesses like prototyping can be broken down, where the entrepreneurship project as the prototyping process can be evaluated. As discussed in this section, each of the approaches from the relationship of weights to the different levels of the competence assessment model, the actors within the evaluation process and the relationship of the evaluation process with the different occurrences of which they are part of, the actors of the coworking ecosystem allow us to give an approximation of the competence assessment of an actor who participates in different events and entrepreneurship projects modeled on an ontology. 
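The three equivalence rules above can be restated as simple predicates over who participates in the evaluated process and who participates in the assessment process. The sketch below is a deliberate simplification of the ontology axioms (it ignores roles and OWL reasoning), and the data model and names are invented for illustration only.

```python
# Schematic restatement of the three equivalence rules as plain Python predicates.
# "evaluated_participants" are the people taking part in the evaluated process;
# "assessment_participants" are the people who also take part in the assessment process.

def classify_assessment(evaluator, evaluated_participants, assessment_participants):
    """Return the type of assessment process for a given evaluator."""
    in_evaluated = evaluator in evaluated_participants
    in_assessment = evaluator in assessment_participants
    if in_evaluated and in_assessment:
        return "self-evaluation"
    if in_evaluated and not in_assessment:
        return "coevaluation"
    if not in_evaluated and not in_assessment:
        return "heteroevaluation"
    return "undefined"

print(classify_assessment("ana", {"ana", "luis"}, {"ana"}))  # self-evaluation
print(classify_assessment("ana", {"ana", "luis"}, set()))    # coevaluation
print(classify_assessment("eva", {"ana", "luis"}, set()))    # heteroevaluation
```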
Experimentation and preliminary results In order to check the traceability of people within the different processes of the coworking ecosystem modeled in the CREAMINKA ontology, the SPARQL query shown in Figure 7 was run against the knowledge base. Table 1 lists, as a result, each person together with the role under which they participate in a process such as an entrepreneurship project, boot camp or training workshop. In order to provide a tool to analyze the development of both general and specific competences of students/participants involved in entrepreneurship and/or research processes, we have designed two metrics. The first metric determines the level of development that a student/participant achieves for a general competence, as seen in Eq. (5), where: • GC_s(St_i, GC_j) represents the score achieved by the i-th student St_i for the j-th general competence GC_j. The number of general competences is defined by the experts in higher education, entrepreneurship and research. • H1 represents the maximum score for each specific competence SC_j^k; a normalization factor based on H1 (and on the number of specific competences) is used to scale the sum of weighted scores. • SC_s(St_i, SC_j^k) is the score achieved by the i-th student St_i for the specific competence SC_j^k, whereas w_k is the k-th weight used to define the importance of this score. Each specific competence SC_j^k is related to the j-th general competence. • N is the total number of specific competences considered in the study. On the other hand, the second metric allows us to know the level of development that students/participants achieve for each of the specific competences that make up a general competence. For this, Eq. (6) is used, where: • f is the value assigned by the expert team according to the development level reached by the student/participant in the corresponding indicator. • H2 represents the maximum score for each specific indicator f. On this basis, we have used the metrics described above to create a module that performs clustering analysis (a schematic implementation of these metrics and of the clustering step is sketched below). This module allows system users to test different values of the weights as well as to generate dendrograms and cluster graphics. This information is useful in decision-making for managers and research/entrepreneurship group directors. In Figure 8, we can see an example of a dendrogram generated by the system from the specific competences and indicators retrieved from 20 participants in entrepreneurship projects, boot camps and training workshops. The information fed to the clustering analysis module is described below: • Three general competences for each participant ("creativity," "project management," "entrepreneurship and innovation"). • Nine specific competences per participant, with the following number of indicators for each competence: f = (3, 3, 3, 4, 4, 3, 3, 3, 2). The specific competences consider aspects such as "Design a work project without reaching its execution," "Find and propose new procedures and solutions to a given problem with forward thinking and leadership attitudes," etc. • The participants are enrolled in different careers such as systems engineering, electrical engineering, business administration, etc. As shown in Figure 8, if we cut the dendrogram at a distance of 1.33, four groups are formed.
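The sketch below gives one plausible reading of Eqs. (5) and (6) together with the hierarchical clustering step. Because the exact normalization factors are not reproduced here, the 1/(N·H1) and 1/H2 scalings are assumptions chosen only to keep scores in [0, 1], and the participant data are random toy values, not the study data.

```python
# Hedged sketch of the two competence metrics and the clustering step described above.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def specific_competence_score(f_values, h2):
    """Eq. (6)-like score: mean of expert-assigned indicator values, scaled by H2 (assumed)."""
    return float(np.mean(f_values)) / h2

def general_competence_score(sc_scores, weights, h1):
    """Eq. (5)-like score: weighted sum of specific-competence scores, normalized by N*H1 (assumed)."""
    sc_scores, weights = np.asarray(sc_scores, float), np.asarray(weights, float)
    n = len(sc_scores)
    return float(np.sum(weights * sc_scores)) / (n * h1)

# Toy data: 20 participants x 9 specific competences, scores on a 0-5 scale.
rng = np.random.default_rng(0)
profiles = rng.uniform(0, 5, size=(20, 9))

sc_example = specific_competence_score([3, 4, 5], h2=5.0)
gc_example = general_competence_score(profiles[0, :3], weights=[0.5, 0.3, 0.2], h1=5.0)
print("example SC score:", round(sc_example, 2), "| example GC score:", round(gc_example, 2))

Z = linkage(profiles, method="ward")                   # hierarchical clustering
groups = fcluster(Z, t=1.33, criterion="distance")     # cut the tree at a chosen distance
print(groups)
# scipy.cluster.hierarchy.dendrogram(Z) would draw a tree analogous to Figure 8.
```

Cutting the resulting tree at a fixed distance, as done in Figure 8, assigns each participant to a group.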
For example, with this information we can observe that participants 8 and 10 have broadly similar profiles in their specific competences, although they come from different careers (social communication and mechanical engineering). On the other hand, Figure 9 shows how new groups are formed when the specific competences are considered; three clearly defined groups emerge, in which characteristics such as leadership, vision and entrepreneurship can be identified. Conclusions We have presented a body of knowledge that describes a coworking ecosystem in which several actors participate in various processes intended to develop competences in the participants. In this way, it is possible to trace how an actor becomes involved, through different roles, in the coworking ecosystem, as described in the results section. More importantly, the competences at the different levels are developed and, at the same time, evaluated within the processes in which the actors participate. This assessment, embedded in the knowledge base of the ontology, links the concepts of competences to the processes that build these competences in the actors; the link incorporates the trifocal valuation approach, with weights on each of the arcs that join the concepts. The whole body of knowledge was built by reusing ontologies with different scopes and extending some of their concepts to the needs of the ecosystem to be modeled. On the other hand, it is important to mention that the development of competences by students/participants of entrepreneurship or research groups is an area that has not yet been adequately addressed. However, this area is very important for any organization conducting research and/or entrepreneurship processes, given that the participating human talent should develop competences that can substantially enrich performance and the production of knowledge. As lines of future work, we propose the following: • To develop a deep learning approach that suggests reinforcement strategies for specific competences related to leadership training. • To develop an intelligent module that combines the profiles of students and participants into work groups focused on solving problems that require different types of skills (both general and specific).
7,662.2
2018-01-30T00:00:00.000
[ "Computer Science", "Education" ]
Learning clinical networks from medical records based on information estimates in mixed-type data The precise diagnostics of complex diseases require to integrate a large amount of information from heterogeneous clinical and biomedical data, whose direct and indirect interdependences are notoriously difficult to assess. To this end, we propose an efficient computational approach to simultaneously compute and assess the significance of multivariate information between any combination of mixed-type (continuous/categorical) variables. The method is then used to uncover direct, indirect and possibly causal relationships between mixed-type data from medical records, by extending a recent machine learning method to reconstruct graphical models beyond simple categorical datasets. The method is shown to outperform existing tools on benchmark mixed-type datasets, before being applied to analyze the medical records of eldery patients with cognitive disorders from La Pitié-Salpêtrière Hospital, Paris. The resulting clinical network visually captures the global interdependences in these medical records and some facets of clinical diagnosis practice, without specific hypothesis nor prior knowledge on any clinically relevant information. In particular, it provides some physiological insights linking the consequence of cerebrovascular accidents to the atrophy of important brain structures associated to cognitive impairment. In the author summary and introduction an impression is built up that no methods exist for computing mutual information for mixed variables. The authors are clearly aware of these methods (references 15-17), however, the mention of these methods is pushed down deep into the benchmarking subsection of the results section. These must be brought to the forefront (be referenced in the introduction) as not to misrepresent the state of the art. Following the reviewer's suggestion, we now mention these recent methods for computing mutual information for mixed variables in the Introduction as well as in the benchmarking subsection of the Results section. 2. There's no explanation of the principle by which "latent variables" are suggested in the graphical model, i.e. what makes an edge suggest mediation by a latent variable vs a simple correlation/anticorrelation edge. If this is a post-hoc decision in light of expert knowledge the text needs to be explicit about that. We have now added a paragraph on the presence of latent variables in MIIC inferred networks within the new Methods section (see below). Latent variables, while unobserved in the available dataset, manifest themselves in the form of bidirected edges in MIIC inferred networks. The rationale to infer such bidirected edges is not based on post-hoc decision in light of expert knowledge but is actually learnt from the available data as reported with methodological details in our 2017 PLoS Comput Biol paper (Verny et al 2017). Reviewer 2 Review of the PLOS Computational Biology manuscript PCOMPBIOL-D-19-01535 "Learning clinical networks from medical records based on information estimates in mixed-type data" by V Cabeli, L Verny, N Sella, G Uguzzoni, M Verny, H Isambert Summary: This paper presents an extension of the MIIC network learning algorithm for mixed-type (i.e. both continuous and categorical) data. This new approach relies on a new estimation procedure for the (conditional) Mutual Information (MI) for such mixed-type data, also introduced in this manuscript. 
After introducing the need and relevance of such methods especially in the context of medical records, the authors present new methodological developments for estimating (conditional) MI, that is suitable for mixed-type data, and illustrate its good performance on benchmark synthetic. Then the authors outline their extension of the MIIC algorithm for mixed data, briefly benchmark it, and present an extensive application to medical records of elderly patients with cognitive disorders. Finally, a short discussion quickly highlights the conclusions from that application. General Comments: This manuscript presents an interesting and timely new method for estimating network from mixed-type data such as medical records. While the manuscript is well written, the structure is a bit confusing and impedes both its readability and assessment: first it lacks a materials and methods section which should contains the methodological developments that are currently being presented alongside simulations benchmarks and application in the Results section; secondly the Discussion section should be broader and better acknowledge the assumptions and limitations made by the proposed method. Besides, I have questions concerning the guarantees offered by the proposed method and the assumptions required, as those are not clearly outlined in the manuscript. In particular, I wonder how the authors deal with the scaling of the MI and how it impacts edge pruning and filtering in their network inference. My questions to the authors are detailed below. Major issues: 1. The MI is an unbounded positive quantity, therefore one of the difficulties of using MI for inferring networks from mixed-type data is the scaling of the MI that will usually varies depending on the variable type (binary, categorical, continuous...). This aspect should be discussed in the manuscript. In particular, the MI for categorical variables tends to increase with the number of categories. How do the proposed method deals with this when i) pruning (and filtering) the edges of the inferred network ? ii) representing the association strength such as in Figure 4 ? While the range of (conditional) mutual information indeed depends on the variable types, it is not a difficulty for our method. In fact, our approach exploits these quantitative differences in multivariate information to prioritize all its algorithmic decisions, based on Information Theory principles, while taking into account the finite size of the dataset. In particular, the assessment of variable independence or dependency integrates the number of categories for discrete variables and the number of optimized partitions for continuous variables through a normalized maximum likelihood (NML) complexity cost. Furthermore, as outlined in the Methods section of the revised manuscript (and detailed in Verny et al 2017), (conditional) mutual information estimates integrating NML complexity costs are related to i) the probability to remove the corresponding edges (which can be used to filter the initial skeleton) and represent ii) the association strength between variables (which is displayed through the width of individual edges in MIIC networks). 2. The manuscript lacks a method section. New methodological development should be in a specific Methods section, with a first subsections presenting the new approach for approximating partial MI in mixed-data and a second one presenting the extension of the MIIC algorithm. We now have a Methods section as requested by this reviewer. 3. 
Discussion section should discuss the whole scope of the manuscript, including assumptions and limitations of the proposed approach for learning networks from mixed data, as well as the synthetic benchmark results and the application. We now have a Discussion section covering the whole scope of the manuscript. 4. Page 4, lines 82-83, the authors seem to make an assumption on the partitioning cut-points that should be clarified, especially if it is required for their approximation to be accurate. There is actually no particular assumption on the partitioning cut-points of continuous variables, just the recognition that the number of cut points needs to be specified and thus encoded in the model within the frame of the Minimum Description Length (MDL) principle, as first argued in the Kontkanen et al. JMLR 2007 paper on MDL-optimal histograms for continuous variables. Hence, in the absence of specific priors for any partition with $r$ bins, the model index should be encoded with a uniform distribution over all partitions with the same number of bins. As there are $\binom{N-1}{r-1}$ ways to choose $r-1$ out of $N-1$ possible cut points, this leads to a codelength of $\log\binom{N-1}{r-1}$ to specify the partition of a continuous variable into $r$ bins, which corresponds to the additional term in the complexity cost for each continuous variable in Eq. 12 (previously Eq. 6). 5. The authors should detail a bit more how they derived equation 7 or provide a reference. We now provide more detailed insights into the dynamic programming scheme for mutual information optimisation and clarify the different terms of Eq. 13 (previously Eq. 7). 6. It is unclear whether there are guarantees for the convergence of the proposed optimization procedure presented at the bottom of page 4, or if this is more of a heuristic procedure that works in practice. As discussed in the revised manuscript, there is a guarantee of convergence towards a local maximum of information, although not necessarily the global maximum (unless there is only a single continuous variable). In this sense, the general optimization scheme can indeed be seen as a heuristic procedure that works in practice. 7. The authors should describe what the X and Y represented in Figure 2 are and how they are generated in the synthetic benchmark (this is somewhat explained in the SI but should be mentioned and clarified in the main manuscript). We now describe the data of Figure 2 in more detail in the main text as well as in the SI. 8. Page 6, the authors allude to the capacity of their approach to identify (conditional) independence. Could they clarify how they characterize independence from (partial) MI? In my experience this can be difficult in practice, even with resampling procedures. Independence or conditional independence is characterized by a negative or null (conditional) mutual information including finite size effects, i.e. $X \perp\!\!\!\perp Y \mid Z \iff I(X;Y|Z) \leq 0$, as first introduced in Affeldt et al. 2015. For continuous or mixed-type variables, however, the optimization scheme typically returns $I(X;Y|Z) = 0$ exactly for (conditional) independence, which corresponds to a single bin for $X$ and/or $Y$. 9. I commend the authors for making software available for their method in the form of the R package miic. However, I was unable to find (and so test) the mentioned discretizeMutual function, either in the CRAN version of the package or on GitHub. The authors should provide a url for the code of the proposed approach.
As mentioned on page 2 of the SI, we provide on our website all source codes, including the discretizeMutual function (https://miic.curie.fr/download/miic_mixed.tar.gz). We will update both the CRAN and GitHub versions of the R package MIIC as soon as we can include the reference to the present paper.
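To make the two quantities discussed in the responses to points 4 and 8 more concrete, the sketch below shows (i) the combinatorial codelength term $\log\binom{N-1}{r-1}$ for encoding an r-bin partition and (ii) a deliberately naive plug-in estimate of the mutual information between a continuous and a categorical variable using a fixed equal-frequency discretization. This is an illustration only; it does not reproduce the NML-penalized, dynamically optimized estimator implemented in the miic package.

```python
# Illustration only: naive plug-in MI for mixed-type (continuous vs. categorical) data
# with a fixed equal-frequency binning, plus the MDL partition-encoding cost log C(N-1, r-1).
import numpy as np
from math import comb, log

def partition_codelength(n_points, n_bins):
    """Nats needed to specify which r-1 of the N-1 possible cut points were used."""
    return log(comb(n_points - 1, n_bins - 1))

def naive_mixed_mi(x_cont, y_cat, n_bins=8):
    """Plug-in MI between a continuous x and a categorical y after binning x."""
    x_cont, y_cat = np.asarray(x_cont, float), np.asarray(y_cat)
    edges = np.quantile(x_cont, np.linspace(0, 1, n_bins + 1))
    x_disc = np.clip(np.searchsorted(edges, x_cont, side="right") - 1, 0, n_bins - 1)
    mi = 0.0
    for xv in np.unique(x_disc):
        px = np.mean(x_disc == xv)
        for yv in np.unique(y_cat):
            py = np.mean(y_cat == yv)
            pxy = np.mean((x_disc == xv) & (y_cat == yv))
            if pxy > 0:
                mi += pxy * log(pxy / (px * py))
    return mi

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=2000)
x = rng.normal(loc=y, scale=1.0)   # continuous X depends on categorical Y
print("naive MI (nats):", round(naive_mixed_mi(x, y), 3))
print("partition cost for r=8 bins (nats):", round(partition_codelength(2000, 8), 1))
```

Subtracting a complexity cost of this kind from the raw MI is what allows independence to manifest as a null or negative corrected information, as explained in the response to point 8.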
2,338
2020-05-01T00:00:00.000
[ "Computer Science" ]
Comparative lipid profiling dataset of the inflammation-induced optic nerve regeneration In adult mammals, retinal ganglion cells (RGCs) fail to regenerate following damage. As a result, RGCs die after acute injury and in progressive degenerative diseases such as glaucoma; this can lead to permanent vision loss and, eventually, blindness. Lipids are crucial for the development and maintenance of cell membranes, myelin sheaths, and cellular signaling pathways; however, little is known about their role in axon injury and repair. Studies examining changes to the lipidome during optic nerve (ON) regeneration could greatly inform treatment strategies, yet these are largely lacking. Experimental animal models of ON regeneration have facilitated the exploration of the molecular determinants that affect RGC axon regeneration. Here, we analyzed lipid profiles of the ON and retina in an ON crush rat model using liquid chromatography–mass spectrometry. Furthermore, we investigated lipidome changes after ON crush followed by intravitreal treatment with Zymosan, a yeast cell wall derivative known to enhance RGC regeneration. This data is available at the NIH Common Fund's Metabolomics Data Repository and Coordinating Center (supported by NIH grant U01-DK097430) website, the Metabolomics Workbench, http://www.metabolomicsworkbench.org, where it has been assigned Project ID PR000661. The data can be accessed directly via its Project DOI: 10.21228/M87D53. Data Lipid profiling was performed on retina and ON samples during Zymosan-induced retinal ganglion cell regeneration through extractive mass spectrometry-based lipidomics. The experimental groups were: intact control (control), optic nerve (ON) crush + vehicle (crush) and ON crush + Zymosan + CPT-cAMP (regeneration). Zymosan is a yeast cell wall preparation traditionally used to induce sterile inflammation experimentally.
The addition of a cell-permeable cAMP analog (CPT-cAMP) potentiates Zymosan's action but cannot induce ON regeneration when administered alone [1]. The ONs were collected 3, 7, 14 and the retinas 7, 14 days post-crush (Fig. 1a). Zymosan þ CPT-cAMP treatment potently increased the amount of axon regeneration (Fig. 1b). Time points were chosen according to our and others previous reports. After axotomy, most RGCs die within 2 weeks [2]. The intravitreal inflammatory response presents a hazy vitreous on day 3 post-crush and concomitant Zymosan injection [3]. On day 7, the ON crush site is densely occupied by Iba1 positive macrophages/ microglia [3]. Zymosan þ CPT-cAMP doubles the number of live RGCs in retina 2 weeks after ON crush Value of the data The dataset can serve to inform future functional studies on the involvement of lipids in the RGCs injury and regeneration response. The dataset provides the information of the expression of lipids present in the rat retina and ON at the baseline and over time during injury and repair. Additionally, the data can be used to create lipid spectral libraries for the targeted lipidomic experiments. [1]. After a chloroform-methanol based extraction, lipid samples were analyzed using highperformance liquid chromatography (HPLC) with C30 column coupled to a Q Exactive mass spectrometer operated in a data-dependent mode (Fig. 1c). Peak identification and relative quantification were performed in LipidSearch software. Lists of species and their relative abundances were uploaded to MetaboAnalyst [4] for statistical analysis. ON crush and intravitreal injections All animal procedures were performed in accordance with the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research and policies of the UCLA Animal Research Committee. A rat model of inflammation-induced ON regeneration was established with intravitreal injection of a yeast cell wall preparation (Zymosan A) [5] and a cell-permeant CPT-cAMP [1], immediately after ON crush. Ten-week-old male Fischer rats were deeply anesthetized by inhalation of isoflurane, and the eyes were treated with topical anesthetic (proparacaine HCl 0.5% ophthalmic) and a cycloplegic In the longitudinal sections of rat ON, Zymosan þ CPT-cAMP increases expression of a marker of axon regeneration, GAP43, distal to the crush site (*). (c) Following methanol-chloroform-based extraction, lipids were separated by C30 high-performance liquid chromatography (HPLC) system using Accela 600 pump and measured in (þ)/(À) heated electrospray (HESI) ionization mode using a Q Exactive mass spectrometer. Lipidome identification and relative quantification were performed in LipidSearch, followed by statistical analysis in MetaboAnalyst. (tropicamide 0.5% ophthalmic) to reduce pain and assist with visualization of intravitreal injections. The left ON was exposed by blunt dissection through a temporal, fornix-based conjunctival incision and crushed for 10 seconds with Dumoxel #N5 self-closing forceps (Dumont, Montignez, Switzerland). Post crush, an absence of injury to the retinal vascular supply was confirmed by funduscopic examination. Intravitreal injections (5 mL) of PBS vehicle or a suspension of finely ground, sterilized 4 Zymosan A (Z4250; Sigma-Aldrich, St. Louis, MO, USA; 12.5 mg/mL) plus CPT-cAMP (C3912; Sigma-Aldrich, St. Louis, MO, USA; 100 mM) were performed with a pulled glass pipette attached to a Hamilton syringe on a manual micromanipulator. 
Injections were made 2 mm posterior to the limbus, and care was taken to prevent lens injury, choroidal hemorrhage, or retinal detachment. Post intravitreal injection, an absence of lens injury, choroidal haemorrhage and retinal detachment was confirmed by fundoscopic examination. Conjunctival incisions were closed with 8e0 polyglactin sutures and petrolatum ophthalmic ointment was applied to the ocular surface. Schematic diagram of optic nerve crush and intravitreal injection is presented in Fig. 2. Immunohistochemistry Optic nerves were fixed in PBS plus 4% paraformaldehyde and cryoprotected overnight at 4 C in 30% sucrose. Cryoprotected tissue was embedded in optimal cutting temperature compound and flash frozen in liquid nitrogen. Longitudinal sections of nerve (14 mm) were cut with a cryostat, mounted on plus charged glass microscope slides, and permeabilized for 30 minutes at room temperature (RT) in TBS plus 0.25% Tween 20 (0.25%TBST). The sections were then blocked with 10% normal donkey serum in TBS for 1 hour at RT and incubated overnight at 4 C with gentle shaking in 0.1%TBST plus 2% BSA (2% BSA-0.1%TBST) and rabbit polyclonal anti-GAP43 (Abcam, Cambridge, MA, USA; ab16053; 1:250). The sections were rinsed with 0.1%TBST and incubated for 1 hour at RT with Hoechst 33,258 in 2%BSA-0.1% TBST plus Alexa Fluor-conjugated donkey secondary antibody. Finally, the sections were rinsed with 0.1%TBST and coverslipped with an aqueous mounting medium. Images of immunostained tissues were obtained with a Fluoview FV1000 confocal microscope (Olympus, Center Valley, PA, USA). Lipid extraction Specimens were stored at À80 C. 6 mL of methanol (LC-MS grade) and 3 mL of chloroform (LC-MS grade) were added to each sample. After 2 min of vigorous vortexing and 2 min of sonication in an ultrasonic bath, the samples were incubated at 48 C overnight (in borosilicate glass vials, PTFE-lined caps). The following day, 3 mL of water (LC-MS grade) and 1.5 mL of chloroform were added, samples vigorously vortexed for 2 min and centrifuged at 3000 RCF, 4 C for 15 min to obtain phase separation. Lower phases were collected and dried in a centrifugal vacuum concentrator. Samples were stored at À20 C until reconstituted in 60 mL of chloroform:methanol (1:1) prior to mass spectrometric analysis. Mass spectrometry The Q Exactive (Thermo) mass spectrometer was operated under heated electrospray ionization (HESI) in positive and negative modes separately for each sample. The spray voltage was 4.4 kV, the heated capillary was held at 310 C (negative mode) or 350 C (positive mode) and heater at 275 C (positive mode). The S-lens radio frequency (RF) level was 70. The sheath gas flow rate was 30 (negative mode) or 45 units (positive mode), and auxiliary gas was 14 (negative mode) or 15 units (positive mode). Full scan (m/z 150e1500) used resolution 70,000 at m/z 200 with automatic gain control (AGC) target of 1 Â 10 6 ions and maximum ion injection time (IT) of 100 ms. Data-dependent MS/MS (top 10) were acquired with the following parameters: resolution 17,500; AGC 2 Â 10 5 (negative mode) or 1 Â 10 5 (positive mode); maximum IT 100 ms (negative mode) or 75 ms (positive mode); 1.3 m/z isolation window. Normalized collision energy (NCE) settings were 40 ± 30% for the negative mode and 30, parallel with 19 ± 5% for the positive mode. Samples list is available in Table 1. Lipid identification and relative quantification Lipid identification and relative quantification were performed with LipidSearch 4.1 software (Thermo). 
The search criteria were as follows: product search; parent m/z tolerance 5 ppm; product m/z tolerance 10 ppm; product ion intensity threshold 1%; filters: toprank, main isomer peak, FA priority; quantification: m/z tolerance 5 ppm, retention time tolerance 1 min. The following adducts were allowed in positive mode: +H, +NH4, +H−H2O, +H−2H2O, +2H, and in negative mode: −H, +HCOO, +CH3COO, −2H. All classes were selected for the search. LipidSearch nomenclature is used (Table 2). Data processing Positive and negative mode identifications of the retina and ON samples were aligned in LipidSearch 4.1, allowing calculation of unassigned peaks. The following settings were applied: product search; alignment method max; retention time tolerance 0.1 min; filters: toprank, main isomer peak; M-score 5. Only peaks with molecular identification grade A-B (A: lipid class and fatty acid completely identified; B: lipid class and some fatty acids identified) were accepted. Only peaks appearing in all biological replicates were accepted. Peaks with the same annotated lipid species were merged. Lists of the annotated species and their relative abundances were exported as .csv files. Usage notes Files in .csv format can be directly input to MetaboAnalyst: Statistical Analysis. The user should select the following format: samples in columns (unpaired). For the following analysis example, we replaced missing values with a small number (half of the minimum positive value in the original data), and applied normalization to sum and log2 transformation (these steps are sketched schematically at the end of this section). Data were obtained from 3 to 4 biological replicates for each group. In Fig. 3, distributions of the average area (Fig. 3a and b) and CV% values (Fig. 3c and d) for the experimental groups are presented. Biological replicates showed Pearson correlation coefficients ranging from 0.918 to 0.998 (Fig. 3e). In line with this, within each time point the groups of samples were clearly distinguished from each other, with 86.7–97.7% of the variance accounted for by PC1 and PC2 (Fig. 3f and j). To identify features undergoing significant change between experimental groups, we used one-way ANOVA with Tukey's post-hoc test. We examined the number of significant features at different FDR-adjusted p values (Fig. 4a for species and Fig. 4b for classes). Next, we performed hierarchical clustering and heatmap visualization of the dysregulated species (FDR-adjusted p values < 0.05). The heatmaps of significant species in the retina and ON 14 days post-crush are presented in Fig. 4c and d, respectively. Transparency document The transparency document associated with this article can be found in the online version at https://doi.org/10.1016/j.dib.2019.103950.
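The preprocessing and univariate testing described in the usage notes can be approximated outside MetaboAnalyst as sketched below. The toy data, the group labels and the omission of Tukey's post-hoc test are simplifications introduced only for illustration; MetaboAnalyst performs these steps internally.

```python
# Rough sketch of the preprocessing and univariate testing described in the usage notes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.lognormal(mean=10, sigma=1, size=(9, 200))   # 9 samples x 200 lipid features (toy)
data[rng.random(data.shape) < 0.05] = np.nan            # simulate missing values
groups = np.array(["control"] * 3 + ["crush"] * 3 + ["regeneration"] * 3)

# 1) replace missing values with half of the minimum positive value
half_min = np.nanmin(data) / 2.0                        # all intensities here are positive
data = np.where(np.isnan(data), half_min, data)

# 2) normalization to sum (per sample) and log2 transformation
data = data / data.sum(axis=1, keepdims=True)
data = np.log2(data)

# 3) one-way ANOVA per feature, then Benjamini-Hochberg FDR adjustment
pvals = np.array([stats.f_oneway(*[data[groups == g, j] for g in np.unique(groups)]).pvalue
                  for j in range(data.shape[1])])
m = len(pvals)
order = np.argsort(pvals)
raw_adj = pvals[order] * m / np.arange(1, m + 1)
fdr_sorted = np.minimum.accumulate(raw_adj[::-1])[::-1]  # enforce monotonicity
print("features with FDR < 0.05:", int(np.sum(fdr_sorted < 0.05)))
```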
2,756.6
2019-04-25T00:00:00.000
[ "Biology" ]
Ball Milling to Build the Hybrid Mesocrystals of Ibuprofen and Aragonite Mesocrystal formation is one of the new paradigms of the nonclassical crystallization, where the assembly of crystal domains is observed. Also, it has been recently employed in studies on drug formulation to utilize controlled dissolution of the drug domains. In this report, ibuprofen was attempted to form hybrid mesocrystals with calcium carbonate crystals. Two polymorphs of calcium carbonate (aragonite and calcite) were used during the solid-state process of ball milling. Structural analyses confirmed the mesocrystal formation of ibuprofen with aragonite but not with calcite. The origin of the observed behavior was found from the higher affinity of ibuprofen to aragonite, especially its (0 1 0) surface, compared to calcite.The hybrid mesocrystals of ibuprofen and aragonite showed the environment-responsive release behavior, where the stability of aragonite was the controlling factor for the release kinetics of ibuprofen. Introduction Mesocrystals are increasingly found as the products of nonclassical crystallization in the diverse fields of materials [1].The examples are especially abundant in biological and bioinspired crystallization.For example, the nacre of red abalone is constructed as the layers of microcrystals that are also the assembled structures of nanocrystals [2,3].Similar findings have been seen in the nacre of giant oyster, the spicules of calcareous sponges, and the skeletal structure of sea urchins [4][5][6] In addition, synthetic mesocrystals inspired by biomineralization have been reported in the solution crystallization, where the crystal assembly was through interparticle interactions and/or heterogeneous nucleation [7][8][9][10]. Controlled release has attracted great interest in the pharmaceutical research.The basic notion is to maintain the drug concentration in blood above the effective level and below the safe concentration for a sustained period of time, and many efforts have been made at the same time to develop environment-responsive systems that satisfy the specific needs of the drugs [11].Mesocrystals have been also explored in this regard to control the dissolution rate of the active pharmaceutical ingredients (APIs).For example, the sustained release of carbamazepine and adefovir dipivoxil was associated with their mesocrystal formation induced by polymeric additives [12,13]; the enhanced release of ibuprofen was found for its mesocrystals formed with sodium dodecyl sulfate [14]. Mechanical milling was utilized to prepare hybrid structures of APIs with inorganic materials, such as silica, magnesium aluminosilicate, aluminum silicate, and aluminum hydroxide, to alter the physicochemical properties of the APIs [15][16][17][18].It has been also extensively explored in recent years to prepare cocrystals and solvates/hydrates of APIs, and the solid-state process without or minimal use of solvents makes it attractive as a greener process than conventional procedures [19][20][21][22]. 
In the present study, we have attempted a simple method of ball milling to prepare hybrid mesocrystals of ibuprofen (IBU) and calcium carbonate.We explored two anhydrous polymorphs of calcium carbonate (calcite and aragonite) since the distinctive molecular arrangements of the active surfaces could contribute to the different interactions with IBU.Also, the pH-responsive dissolution behaviors of the mesocrystals were expected because of the high and sparing solubility of calcium carbonate at low and neutral pH, respectively [23]. Preparation of Calcite and Aragonite. Two polymorphs of anhydrous calcium carbonate were prepared following a method using water-alcohol mixtures [24].To obtain calcite, sodium carbonate (26.0 mmol, 2.75 g: Na 2 CO 3 : ≥99.0%,Sigma-Aldrich, Milwaukee, WI, USA) was completely dissolved in the solution of 100 mL ethanol (HPLC grade, 99.0%, Samchun, Pyeongtaek, South Korea) and 900 mL deionized water (DI water: resistivity of 18.2 MOhm⋅cm, Direct-Q from Millipore, Billerica, MA, USA) contained in a 1000 mL volumetric flask at room temperature (ca. 25 ∘ C), and then calcium chloride (26.0 mmol, 2.89 g: CaCl 2 : 99+%, Sigma-Aldrich, Milwaukee, WI, USA) was added to the solution.The solution was vigorously mixed at all time with a stir bar (length, 30 mm) equipped with a magnetic stirrer (HS180, Misung Scientific Co., Seoul, South Korea).The procedure to obtain aragonite was the same as that for calcite except that 50 vol% ethanol (500 mL ethanol and 500 mL DI water) was used.After 24 h, the precipitated products were collected by vacuum filtration (number 20 filter paper, pore diameter 5 m, Hyundai Micro, Seoul, South Korea), washed with DI water, and dried in a convection oven at 40 ∘ C for 12 h before further use. Ball Milling IBU with Calcite and Aragonite.Ibuprofen (IBU: >98%) was used as obtained from Sigma-Aldrich (Milwaukee, WI, USA).IBU (100 mg) was ball-milled with calcite or aragonite (100 mg) at the frequency of 10 Hz, and the milling time was 120 or 240 min.For the procedure, a Retsch ball mill (MM 200, Haan, Germany) was used with a cylindrical stainless steel jar (about 25 mL; inner diameter ca.26 mm and inner length ca.52 mm) and two stainless steel balls (diameter, 9 mm), which was often utilized in pharmaceutical grinding [20].The ball-milled products were immediately used for further characterization. Thermal analyses were performed via differential scanning calorimetry (DSC: DSC 821e, Mettler-Toledo, Columbus, OH, USA) and thermogravimetric analysis (TGA: TGA/SDTA 851e, Mettler-Toledo).DSC was to check the crystalline state of IBU, and it was precalibrated for enthalpy and temperature using indium.The scanning for the IBU containing samples (2-3 mg each in a hermetically sealed aluminum crucible) was from 25 to 100 ∘ C with a heating rate of 10 ∘ C/min, and DSC experiments were repeated in triplicate for each sample.TGA was performed from 25 to 900 ∘ C with a scanning rate of 10 ∘ C/min (10-15 mg each in an open alumina crucible).Both DSC and TGA were under nitrogen environment. 
Release behaviors of IBU were studied at pH 1.2 (corresponding to the gastric fluid) and 6.8 (corresponding to the intestinal fluid) using buffer solutions of HCl-KCl and phosphate, respectively [25,26]. Powders containing IBU (5 mg for IBU only; 10 mg for ball-milled samples of IBU and calcium carbonate) were placed in a 300 mL solution (500 mL round bottom flask) at 37 °C, and the mixture was stirred at 200 rpm using an overhead stirrer (HS-30D, Wisd Laboratory Instruments, Wertheim, Germany). To analyze the concentration of IBU in the solution, 3 mL samples were removed after 5, 10, 20, 30, 60, 120, and 180 min, and the solution was refilled immediately with the original buffer solutions to keep the volume of the solution constant. The solution sample removed at a given interval was filtered through a cellulose acetate filter (pore size 0.45 µm, MFS-13, Advantec, Tokyo, Japan), and its UV absorbance was measured at 222 nm (Optizen POP, Mecasys, Daejeon, South Korea) where the absorbance was at its maximum. The UV absorbance was then converted to the concentration using a preconstructed calibration curve. The release experiments were independently repeated in triplicate for each sample. Computational Calculations of Binding Energy. The binding energy of IBU on the surfaces of aragonite and calcite was calculated using the Materials Studio simulation software (version 7.0) from Accelrys (San Diego, CA, USA) equipped with the Forcite module and COMPASS force field [27], which was known for being effective for the calculations of the binding on calcium carbonate [28]. The (0 1 0)/(1 1 0) faces of aragonite and the (1 0 4) of calcite were selected as the adsorption surfaces based on the previous morphological observations [10,24,29]. Electrostatic energy terms were treated using the Ewald summation method, and van der Waals energy was calculated with a cutoff distance of 12.5 Å using an atom-based summation method. The crystal structures of aragonite and calcite were obtained from the previous publications [30,31], and the partial charge was force-field assigned. The initial geometry optimization and the charge assignment of IBU were also performed before adsorption simulation. The adsorption surfaces of calcium carbonate crystals were cleaved with a thickness of 17 Å, which were then expanded over 40 Å × 40 Å. The centers of geometry of calcium and carbonate were considered for them to be included in the cleaved slab. After placing the IBU molecule at the center of the surfaces, Forcite quench (5 ps with 1 fs step) was performed to find the structure of the minimum energy. The NVT ensemble (Nosé thermostat) was used at 303 K reflecting the measured temperature during the ball milling process [32]. Among the 5,000 different structures, the structures at every 500 frames were quenched and optimized. The binding energy (E_b) was calculated for the minimum-energy structure of each surface as E_b = E_total − (E_IBU + E_surface), where E_total, E_IBU and E_surface are the total energy, the energy of the adsorbed IBU, and the energy of the surface, respectively. Structures of Ball-Milled IBU with Calcite and Aragonite.
The IBU crystals before milling as well as synthesized polymorphs of calcium carbonate (calcite and aragonite) were shown in Figure 1.The IBU crystals were plate-shaped with their length of about 100-200 m (Figure 1(a)).They were of typical morphology as previously observed: the large {1 0 0} faces and the {0 0 1}, {1 1 0} side faces [33,34].The crystal shapes of the synthesized polymorphs of calcium carbonate were also of routinely observed morphology.The aragonite crystals were needle-shaped with their length of about 2-5 m (Figure 1(b)).As shown in the inset, the aragonite needle was composed of submicron domains, and this observation was in agreement with the previous study using the same synthesis method [24].The utilized method is known to generate {1 1 0} and {0 1 0} faces of aragonite as the enclosing surfaces, and the {0 1 0} is also the distinct cleavage plane [24,29].The calcite crystals were rhombohedral with most of their size at 1-5 m (Figure 1(c)).The enclosing surfaces of the rhombs are known as the {1 0 4} faces, which are also the perfect cleavage planes [10,29,35]. When IBU was ball-milled with the crystals of calcium carbonate, significant differences were observed between the cases with aragonite and calcite.When IBU was ballmilled with aragonite, the aragonite needles were mostly .More importantly, IBU and aragonite formed tightly integrated mesocrystal structures of the overall size about 4-10 m.(In the case of IBU/aragonite milling, the microscopic examination did not reveal clear differences between the 120-and 240-minute samples.)In contrast, IBU ball milled with calcite did not show mesocrystal formation.IBU and calcite mostly remained as separate entities, although a small part of them appeared associated as indicated by the arrowheads of Figures 2(c) and 2(d).Note that calcite was easily distinguishable because of its {1 0 4} cleavage surfaces.(In the case of IBU/calcite, the calcite appeared more fractured and smaller after the 240-minute milling than after 120 min.)XRD analysis confirmed the nearly exclusive formation of calcite and aragonite during the preparation step (Figure 3).The prominent diffraction peaks of aragonite were shown at ca. 26.0, 27.0, 35.9, and 38.2 ∘ for the {1 1 1}, {0 2 1}, {2 0 0}, and {1 3 0} planes, respectively (Figure 3 ) The polymorphs of aragonite and calcite were also distinguishable using FT-IR analysis (Figure 4).The former had characteristic doublet peaks at 700 and 713 cm −1 as well as peaks at 1082 and 1786 cm −1 (Figures 4(a Ball milling did not alter the structures of IBU crystals significantly enough to generate new XRD diffraction peaks from different crystal structures (Figure 3).Major XRD diffraction peaks of IBU were at ca. 16.4, 18.7, 19.8, and 21.9 ∘ for {2 1 0}, {2 0 −2}, {0 1 2}, and {2 0 2}, respectively [40].All major diffraction peaks were the same after ball milling, although some alterations in the intensity, most noticeably the increased relative intensity of {0 1 2} peak at ca. 
indicated the changes in the crystal morphology.In addition, the analysis of the full width at half maximum (FWHM) was in agreement with the microscopic observation.The FWHM was analyzed based on the relatively intensified {0 1 2} peak.The FWHM was about 0.11 ∘ before milling, and it increased after milling with aragonite or calcite.The FWHM values were 0.16 and 0.17 ∘ after 120 and 240 min of milling with aragonite, respectively.They were 0.17 and 0.19 ∘ after 120 and 240 min with calcite, respectively.Since FWHM is inversely proportional to the crystallite size, the analysis indicated the size decrease of the IBU crystallite with ball milling [41].Note that the {0 1 2} plane is parallel to the a-axis, which is also perpendicular to the large {1 0 0} face of the plateshaped IBU crystals before milling (Figure 1(a)) [33,34].This suggested that breakage occurred in the perpendicular direction of the large face as would be expected from the original shape of the crystals. The interactions between IBU and calcium carbonate were further examined with FT-IR and molecular dynamics.FT-IR showed that one of the carbonate-related peaks of aragonite noticeably changed after milling with IBU (Figure 4(a)).The frequency corresponding to asymmetric stretching was at 1495 cm −1 [37], and it became 1485 and 1481 cm −1 after 120 and 240 min of milling with IBU, respectively.In contrast, the corresponding change appeared absent for calcite (Figure 4(b)).Also, note that the change on the IBU side, including that for the carbonyl peak at 1720 cm −1 , was difficult to find. The fundamental aspects of IBU interaction with aragonite and calcite were investigated by studying the binding energy and structures.The binding energy of IBU on the mineral surfaces was in the following order: aragonite (0 1 0), |-212 kcal/mol| ≫ aragonite (1 1 0), |−63 kcal/mol| > calcite (1 0 4), |−44 kcal/mol|, and the binding was predominantly of electrostatic nature in all cases.The corresponding binding conformations of IBU to the surfaces were shown in Figure 5.The binding differences seemed to originate from the dissimilar atomic arrangements of the surfaces.Each calcium ion of aragonite and calcite in bulk was coordinated with nine and six oxygens, respectively; the surface calcium of (0 1 0) and (1 1 0) lacked three coordinating oxygens out of nine, whereas that of (1 0 4) was short of only one oxygen out of six.This appeared to allow stronger electrostatic interactions of surface calcium of aragonite with the oxygens of IBU. Overall, the IR analysis and the computational study of the binding indicated the higher affinity of IBU to aragonite compared to calcite.This was consistent with the microscopic observation, where the intimate association of IBU was found with aragonite but not with calcite. Properties of Ball-Milled IBU with Calcite and Aragonite. 
Thermal properties of IBU ball milled with calcium carbonate were examined with DSC.The melting point of IBU was about 77 ∘ C before milling.It became about 74 and 73 ∘ C after 120 and 240 min of milling with aragonite; it was about 76 and 75 ∘ C after 120 and 240 min of milling with calcite (Figure 6(a)).The melting enthalpy showed greater changes (Figure 6(b)).The reported enthalpy was the normalized value based on the TGA analysis, which revealed the exact amount of IBU in the hybrid sample.(Calcium carbonate, both aragonite and calcite, started to experience weight loss at around 600 ∘ C to eventually leave ca.56% of the original weight.This corresponded to the formation of calcium oxide by losing carbon dioxide; IBU decomposed nearly completely below 300 ∘ C to leave less than 1% of the original weight.)The melting enthalpy of IBU was about 172 J/g before milling.It became about 162 and 126 J/g after 120 and 240 min of milling with aragonite; it was about 173 and 151 J/g after 120 and 240 min of milling with calcite.Overall, the decrease of melting point and enthalpy of IBU was observed after milling, which was probably due to the decrease of the crystal size and crystallinity [42,43].Also, the effect of aragonite was more significant than that of calcite, which was in accordance with the structural analysis in the previous section. Release behaviors of IBU from the ball-milled IBU/ aragonite and IBU/calcite were studied at pH 1.2 and 6.8 (Figure 7), where the change of pH during the release was less than 0.1 in all cases.The release behavior from the IBU/aragonite hybrid was pH responsive.After milling for 240 min, the initial release (<30 min) accelerated at pH 1.2, while it decelerated at pH 6.8, compared with neat IBU.This was apparently due to the high solubility of aragonite at low pH [23], combined with its intimate association with IBU.Note that 120-minute milling was not as effective at pH 1.2, while its deceleration effect at pH 6.8 was valid.In contrast, ball milling of IBU with calcite was not effective in modulating the release rate.Overall, the release behaviors of IBU confirmed that they could be adjusted only when IBU was intimately associated with the ball-milled substrate. When the substrate was rapidly soluble, the IBU release was expedited; when the substrate was marginally soluble, the release slowed down.Further studies on the quantitative analysis of the release kinetics and on the extended release behavior with the fine-tuned structures would be necessary to establish the utility of the mesocrystals in the controlled drug release. Conclusions In summary, two anhydrous polymorphs of calcium carbonate (aragonite and calcite) were employed during ball milling of IBU to generate IBU/calcium carbonate hybrid materials.Aragonite polymorph was intimately integrated with IBU to form a mesocrystal-like structure, whereas calcite did not seem to be as effective under the experimental conditions employed in the present study.Aragonite and IBU kept their original crystal structures within the mesocrystals, although morphological variations occurred.Aragonite/IBU interaction was verified by the changes in the IR vibration of the carbonate of aragonite, and its strong nature was corroborated by the binding energy computationally obtained. 
The IBU fused with aragonite showed modulated thermal behavior, further confirming the observed structures. Finally, the IBU release behavior could be regulated through the conditions affecting the aragonite substrate. The IBU release sped up under the conditions disintegrating aragonite, and it slowed down when aragonite was stable. The present study indicates that the substrates that disintegrate at specific conditions can be utilized for the environment-responsive release of APIs. For this purpose, it also appears that the substrates need to form intimately associated structures with APIs to be delivered. Figure 1: SEM images of ibuprofen (a), aragonite (b), and calcite (c) crystals. The scale bar of the inset of (b) is 1 µm. Figure 2: SEM images of IBU/aragonite after ball milling for (a) 120 and (b) 240 min; IBU/calcite after ball milling for (c) 120 and (d) 240 min. Arrowheads of (a) and (b) indicated the aragonite needles, and those of (c) and (d) indicated IBU. Also shown in the insets of (a) and (b) were the submicron domains as well as the needles of aragonite integrated with IBU.
4,253.2
2015-01-01T00:00:00.000
[ "Materials Science" ]
Precision measurement of the $\Xi_{cc}^{++}$ mass A measurement of the $\Xi_{cc}^{++}$ mass is performed using data collected by the LHCb experiment between 2016 and 2018 in $pp$ collisions at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 5.6 $\mathrm{fb}^{-1}$. The $\Xi_{cc}^{++}$ candidates are reconstructed via the decay modes $\Xi_{cc}^{++}\to\Lambda_c^+K^-\pi^+\pi^+$ and $\Xi_{cc}^{++}\to\Xi_c^+\pi^+$. The result, $3621.55 \pm 0.23{\rm\,(stat)\,} \pm 0.30 {\rm\,(syst)\,}{\rm MeV}/c^2$, is the most precise measurement of the $\Xi_{cc}^{++}$ mass to date. At present, the experimental uncertainty on the Ξ ++ cc mass is still large compared to that of the singly charmed baryons. This paper presents an updated measurement of the Ξ ++ cc mass using the decay modes Ξ ++ cc → Λ + c (→ pK − π + )K − π + π + and Ξ ++ cc → Ξ + c (→ pK − π + )π + . The analysis uses a data sample corresponding to an integrated luminosity of 5.6 fb −1 , collected by the LHCb experiment during 2016-2018 in pp collisions at a centre-of-mass energy of 13 TeV. This measurement supersedes the results reported on the Ξ ++ cc mass in Refs. [3,4], which only use pp collision data at 13 TeV taken in 2016, corresponding to an integrated luminosity of 1.7 fb −1 . Detector and simulation The LHCb detector [22,23] is a single-arm forward spectrometer covering the pseudorapidity range 2 < η < 5, designed for the study of particles containing b or c quarks. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector surrounding the pp interaction region [24], a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about 4 Tm, and three stations of silicon-strip detectors and straw drift tubes [25,26] placed downstream of the magnet. The tracking system provides a measurement of the momentum of charged particles with a relative uncertainty that varies from 0.5% at low momentum to 1.0% at 200 GeV/c. The momentum scale is calibrated using samples of B + → J/ψ K + and J/ψ → µ + µ − decays collected concurrently with the data sample used for this analysis [27,28]. The relative accuracy of this procedure is estimated to be 3 × 10 −4 using samples of other fully reconstructed b-hadron, Υ and K 0 S decays. The minimum distance of a track to a primary pp collision vertex (PV), the impact parameter (IP), is measured with a resolution of (15 + 29/(p T / GeV/c)) µm, where p T is the momentum component transverse to the beam axis. Different types of charged hadrons are distinguished using information from two ring-imaging Cherenkov detectors [29]. Photons, electrons and hadrons are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers [30]. The online event selection is performed by a trigger [31], which consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction. In between the two software stages, an alignment and calibration of the detector is performed in near real-time [32]. This process allows the reconstruction of the Ξ ++ cc decays to be performed entirely in the software trigger, whose output is used as input to the present analysis. 
Simulated samples are used to model the effects of the detector acceptance, optimise selections and verify the validity of the methods used in the measurement. In the simulation, pp collisions are generated using Pythia 8 [33] with a LHCb specific configuration [34]. The production of doubly charmed Ξ ++ cc baryons is simulated using the dedicated generator GenXicc2.0 [35]. Decays of hadrons are described by EvtGen [36], in which final-state radiation is generated using Photos [37]. The interaction of the generated particles with the detector, and its response, are implemented using the Geant4 toolkit [38] as described in Ref. [39]. Sources of background, such as those from Ξ ++ cc → Ξ + c (→ Ξ + c γ)π + and Ξ ++ cc → Ξ + c ρ + (→ π + π 0 ), are studied using the fast simulation package RapidSim [40]. Event selection The reconstruction of Ξ ++ cc → Λ + c (→ pK − π + )K − π + π + and Ξ ++ cc → Ξ + c (→ pK − π + )π + decays is similar to that used in previous LHCb analyses [3,4]. Candidate Ξ + c (Λ + c ) → pK − π + decays are reconstructed from three charged particles identified as a p, K − and π + using information from the RICH detectors [29]. The charged particles are required to form a good-quality vertex and be inconsistent with originating from any PV. The Ξ + c (Λ + c ) candidate is then combined with one (three) additional charged particle(s) to form a Ξ ++ cc → Ξ + c π + (Ξ ++ cc → Λ + c K − π + π + ) decay candidate. These additional particles must form a good-quality vertex with the Ξ + c (Λ + c ) candidate, which is required to be upstream of the Ξ + c (Λ + c ) decay vertex. Each Ξ ++ cc candidate is required to have p T > 2 GeV/c and to be consistent with originating from its associated PV. The associated PV is that with respect to which the Ξ ++ cc candidate has the smallest χ 2 IP . The χ 2 IP is defined as the difference in χ 2 of the PV fit with and without the particle in question. To avoid candidates including duplicated tracks, each track pair is required to have an opening angle larger than 0.5 mrad or a momentum difference larger than 5% of the minimum momentum of the track pair. In order to improve the signal purity, multivariate classifiers are trained to separate signal from background. The choice of classifier algorithms is based on their performance for each decay mode. A classifier based on the Boosted Decision Tree (BDT) algorithm [41,42] implemented in the TMVA toolkit [43] is used for the the Ξ ++ cc → Λ + c K − π + π + mode, while a Multilayer Perceptron (MLP) algorithm [43] is used for the Ξ ++ cc → Ξ + c π + mode. The BDT for the Ξ ++ cc → Λ + c K − π + π + decay is trained with simulated signal events as a signal proxy and wrong-sign Λ + c K − π + π − combinations (3525-3725 MeV/c 2 ) in data, where the two final-state pions have opposite charges, as a background proxy. Both the signal and background proxies are required to pass the selection described above. Variables associated with the Ξ ++ cc candidates used in the training include the vertex-fit quality, the χ 2 IP , the angle between the momentum and the displacement vector, the flight-distance χ 2 between the PV and the decay vertex. The flight-distance χ 2 is defined as the χ 2 of the hypothesis that the decay vertex of the candidate coincides with its associated PV. 
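The duplicate-track (clone) veto described above can be sketched as follows. This is an illustrative helper working on plain 3-momentum vectors, not the actual LHCb reconstruction code:

```python
import numpy as np

def is_clone_pair(p1, p2, min_angle_rad=0.5e-3, min_dp_frac=0.05):
    """Flag a track pair as a potential clone (duplicated track).

    Following the criterion in the text: the pair is kept if the opening angle
    exceeds 0.5 mrad OR the momentum difference exceeds 5% of the smaller
    momentum; otherwise it is treated as a duplicate.  p1 and p2 are 3-momentum
    vectors in GeV/c.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    n1, n2 = np.linalg.norm(p1), np.linalg.norm(p2)
    cos_angle = np.dot(p1, p2) / (n1 * n2)
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    dp_frac = abs(n1 - n2) / min(n1, n2)
    return not (angle > min_angle_rad or dp_frac > min_dp_frac)

# Two nearly parallel tracks with almost equal momentum are flagged as clones.
print(is_clone_pair([10.0, 0.1, 100.0], [10.0, 0.1001, 100.0]))   # True (clone)
print(is_clone_pair([10.0, 0.1, 100.0], [12.0, 0.5, 100.0]))      # False (kept)
```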
Variables associated with the decay products of the Ξ ++ cc candidates (the Λ + c , K − and π + ) used in the training include their p T and χ 2 IP , the vertex-fit quality of the Λ + c candidates and the smallest p T among the Λ + c decay products (p, K − and π + ). Particle-identification information for the final-state particles is also used. The threshold applied to the classifier response is determined by maximising the signal significance S/ √ S + B, where S is the expected signal yield estimated using simulation, and B is the background yield evaluated in the upper sideband of data (3800-3900 MeV/c 2 ) extrapolated to the signal region (3607-3635 MeV/c 2 ). The multivariate classifier for the Ξ ++ cc → Ξ + c π + decay is developed following the same strategy as that for the Ξ ++ cc → Λ + c K − π + π + decay. After the full selection, an event may still contain more than one Ξ ++ cc candidate. According to studies on simulated decays and the wrong-sign control sample, multiple candidates in the same event may form a peaking structure in the mass distribution of the Ξ ++ cc candidates if they are obtained from the same final-state tracks, but via swapping two final state tracks (e.g. the K − from the Λ + c decay and the K − from the Ξ ++ cc decay). In this case, one candidate is chosen randomly. Other kinds of multiple candidates, which account for 8% (< 1%) of the Ξ ++ cc → Λ + c K − π + π + (Ξ ++ cc → Ξ + c π + ) signal events, are not removed since they do not form a peaking background. The m cand (Ξ ++ cc ) mass distributions of the selected Ξ ++ cc candidates are shown in Fig. 1 for the Ξ ++ cc → Λ + c K − π + π + and Ξ ++ cc → Ξ + c π + decay modes. The Ξ ++ cc mass is determined by performing unbinned extended maximum-likelihood fits to the two m cand (Ξ ++ cc ) mass distributions. The signal components are described by a modified Gaussian function with a power-law tail on the left-hand side of the distribution [47], parameterised as The peak position, x, and width, σ, of the function are allowed to vary in the fit. The power-law tail parameters, N and α, are fixed from simulation. The background from randomly associated tracks is modelled using an exponential function. Background contributions from the partially reconstructed decays Ξ ++ cc → Ξ + c (→ Ξ + c γ)π + and Ξ ++ cc → Ξ + c ρ + (→ π + π 0 ), where photons and neutral π 0 mesons are not reconstructed, can contribute to the Ξ ++ cc → Ξ + c π + decay mode. The mass line shapes of these partially reconstructed backgrounds are determined from simulation. The fits return signal yields of 1598 ± 64 and 616 ± 47 for the Ξ ++ cc → Λ + c K − π + π + and Ξ ++ cc → Ξ + c π + decay modes, respectively. The peak positions are determined with the Ξ ++ cc → Λ + c K − π + π + and Ξ ++ cc → Ξ + c π + decay modes to be 3622.08 ± 0.24 MeV/c 2 and 3622.37 ± 0.60 MeV/c 2 , respectively, where the uncertainty is statistical only. Due to multiple scattering, the opening angle between the Ξ ++ cc decay products can be increased or decreased. This can bias both the resulting Ξ ++ cc mass and the measured decay length. Since the selection is more efficient for candidates with larger reconstructed decay lengths, and the decay length is correlated with the mass by the effect of the multiple scattering, this can bias the Ξ ++ cc mass measurement. This effect was studied with charmed hadrons, D + , D 0 , D + s , Λ + c , and was found to be well reproduced by simulation [3]. 
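The choice of classifier working point described above, maximising S/√(S+B) with S estimated from simulation and B from the upper mass sideband, can be illustrated with a simple threshold scan. The distributions, sample sizes and expected yields below are toy assumptions, not those of the analysis:

```python
import numpy as np

def best_threshold(sig_response, bkg_response, n_sig_expected, n_bkg_expected):
    """Scan a classifier-response cut and return the value maximising S/sqrt(S+B).

    sig_response / bkg_response: classifier outputs for the signal and background
    proxies (e.g. simulation and wrong-sign data).  The expected yields normalise
    the proxy samples to the data.  Illustrative sketch only.
    """
    lo = min(sig_response.min(), bkg_response.min())
    hi = max(sig_response.max(), bkg_response.max())
    best_cut, best_sig = None, -np.inf
    for c in np.linspace(lo, hi, 200):
        S = n_sig_expected * np.mean(sig_response > c)
        B = n_bkg_expected * np.mean(bkg_response > c)
        if S + B == 0:
            continue
        significance = S / np.sqrt(S + B)
        if significance > best_sig:
            best_cut, best_sig = c, significance
    return best_cut, best_sig

# Toy example with Gaussian-distributed classifier responses.
rng = np.random.default_rng(0)
cut, sig = best_threshold(rng.normal(0.7, 0.15, 10_000),
                          rng.normal(0.3, 0.15, 10_000),
                          n_sig_expected=1600, n_bkg_expected=20_000)
print(f"optimal cut ~ {cut:.2f}, expected significance ~ {sig:.1f}")
```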
Corresponding corrections of −0.61 ± 0.09 MeV/c 2 for Ξ ++ cc → Λ + c K − π + π + and −0.45 ± 0.09 MeV/c 2 for Ξ ++ cc → Ξ + c π + are determined using simulated candidates by comparing the fitted mass with signal candidates before and after applying the event selection. These corrections are applied to the fitted mass values. The uncertainties are due to the limited size of simulated samples, and are taken as the systematic uncertainties from the selection-induced bias on the Ξ ++ cc mass. The difference of the kinematic distributions in simulation and data is considered as a systematic uncertainty and is discussed in Sec. 5. Low-momentum photons emitted by the final-state particles are not reconstructed. This distorts the reconstructed mass distribution and can bias the fitted mass value. This effect is studied with the simulation. To disentangle this effect from those due to reconstruction, the mass of the Ξ ++ cc candidates calculated with the true momenta of the final-state particles is smeared with different resolution values. The difference between the fitted and input mass values is studied as a function of the mass resolution, and the difference corresponding to the mass resolution in data is taken as a correction. Alternative signal models are also considered. The largest difference of the fitted mass with final-state radiation corrections between the nominal and the alternative is quoted as the uncertainty. Following the procedure described above, the corrections due to the final-state radiation are determined as 0.06 ± 0.05 MeV/c 2 and 0.03 ± 0.16 MeV/c 2 for the Ξ ++ cc → Λ + c K − π + π + and Ξ ++ cc → Ξ + c π + decay modes, respectively. The uncertainties on the corrections are due to the limited size of the simulated samples, and the difference between the corrections with different signal models. Systematic uncertainties The dominant source of systematic uncertainty on the mass measurement is due to the momentum-scale calibration [27,28]. It amounts to 0.21 MeV/c 2 for the Ξ ++ cc → Λ + c K − π + π + decay, and 0.34 MeV/c 2 for the Ξ ++ cc → Ξ + c π + decay due to larger Q-value. A further uncertainty arises from the correction for energy loss in the spectrometer, which is known with 10% accuracy [23]. This uncertainty was studied in Ref. [28], and amounts to 0.03 MeV/c 2 for D 0 →K + K − π + π − decays. The uncertainties on the Ξ ++ cc mass are scaled from that of the D 0 decay by the number of final-state particles, and are determined to be 0.05 MeV/c 2 and 0.03 MeV/c 2 for the Ξ ++ cc → Λ + c K − π + π + and Ξ ++ cc → Ξ + c π + decays, respectively. Differences between kinematic distributions in simulation and data are treated as sources of systematic uncertainties on the corrections due to the selection procedure. The kinematic variables used in the event selection that are found to affect the corrections are listed below: p T of Ξ ++ cc candidates; the angle between the momentum and the displacement vector from the PV to the decay vertex of the Ξ ++ cc and Λ + c (Ξ + c ) candidates; the χ 2 IP of Ξ ++ cc and Λ + c (Ξ + c ) candidates and their decay products; the BDT (MLP) response; and the particle identification information. The distributions of these variables in simulation are weighted to match those in data where the background is subtracted by means of the sPlot technique [48]. 
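For bookkeeping, the corrections quoted above can be applied to the fitted peak position of the Ξcc++ → Λc+ K− π+ π+ mode. The sketch below uses only the central values and uncertainties given in the text; note that the paper tabulates the correction uncertainties as separate systematic sources rather than combining them in quadrature as done here:

```python
import math

# Values taken from the text for the Xi_cc++ -> Lambda_c+ K- pi+ pi+ mode.
m_fit, dm_fit = 3622.08, 0.24    # fitted peak position (stat only), MeV/c^2
bias, dbias = -0.61, 0.09        # selection-induced bias correction
fsr, dfsr = +0.06, 0.05          # final-state-radiation correction

m_corr = m_fit + bias + fsr
# The correction uncertainties enter the systematic budget, not the statistical one.
syst_from_corrections = math.sqrt(dbias**2 + dfsr**2)

print(f"corrected mass = {m_corr:.2f} MeV/c^2 (stat {dm_fit:.2f})")
print(f"systematic from corrections ~ {syst_from_corrections:.2f} MeV/c^2")
# -> 3621.53 MeV/c^2, consistent with the per-mode result quoted below.
```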
Then, the corrections obtained with the weights are compared to those without weights, and largest variations of the corrections are taken as systematic uncertainties, which are 0.09 MeV/c 2 and 0.05 MeV/c 2 for the Ξ ++ cc → Λ + c K − π + π + and Ξ ++ cc → Ξ + c π + decays, respectively. The uncertainty related to the background description is estimated by repeating the fits with alternative models which include first and second-order polynomial functions. For the Ξ ++ cc → Λ + c K − π + π + decay, the fit with a second-order polynomial function has better fit quality, but returns identical fitted mass. The largest changes on the fitted mass value are found to be 0.01 MeV/c 2 and 0.04 MeV/c 2 for the Ξ ++ cc → Λ + c K − π + π + and Ξ ++ cc → Ξ + c π + decays, respectively, which are assigned as systematic uncertainties. The mass of Ξ ++ cc candidates also depends on the value of the known Λ + c and Ξ + c masses. The uncertainties on the Λ + c mass and on the mass difference between the Ξ + c and Λ + c are propagated to the Ξ ++ cc mass measurement. The corresponding uncertainties on the Ξ ++ cc mass are 0.14 MeV/c 2 and 0.22 MeV/c 2 for the Ξ ++ cc → Λ + c K − π + π + decay and the Ξ ++ cc → Ξ + c π + decay, respectively. The sources of systematic uncertainty considered in this analysis are summarised in Table 1. When computing the total uncertainty, the uncertainty on the momentum-scale calibration of the Ξ + c mass from Ref. [46] is assumed to be fully correlated to that of the Ξ ++ cc mass. The total systematic uncertainty is calculated by summing the individual sources of uncertainty in quadrature. Results and summary The resulting values of the Ξ ++ cc mass using the Ξ ++ cc → Λ + c K − π + π + and Ξ ++ cc → Ξ + c π + decay modes are 3621.53 ± 0.24 ± 0.29 MeV/c 2 , and 3621.95 ± 0.60 ± 0.49 MeV/c 2 , respectively, including corrections and systematic uncertainties. The combination of the two measurements is performed using the Best Linear Unbiased Estimator (BLUE) [49,50]. Table 1: Systematic uncertainties on the Ξ ++ cc mass measurements using Ξ ++ cc → Λ + c K − π + π + and Ξ ++ cc → Ξ + c π + decays. The total systematic uncertainty on each mode is calculated by summing the individual sources of uncertainty in quadrature, except for the uncertainty on the momentum-scale calibration of the Ξ + c mass [46], that is added linearly to that of the Ξ ++ cc mass. In the combination, the correlation between the Λ + c and Ξ + c masses [45,46] is taken into account. Uncertainties arising from the momentum-scale calibration, energy-loss corrections, and final-state radiation are assumed to be 100% correlated while other sources of systematic uncertainty are assumed to be uncorrelated. The individual mass measurements and the resulting combination are illustrated in Fig. 2. The averaged mass is dominated by the result for the Ξ ++ cc → Λ + c K − π + π + mode, due to its larger yield and smaller momentum-scale uncertainty relative to that of the Ξ ++ cc → Ξ + c π + decay. This is the most precise measurement of the Ξ ++ cc mass to date, improving upon the previous weighted average mass value of 3621.24 ± 0.65 (stat) ± 0.31 (syst) MeV/c 2 from Ref. [4]. cc → Λ + c K − π + π + and Ξ ++ cc → Ξ + c π + . The combination is performed using the best linear unbiased estimator [49,50]. The inner error bars represent statistical uncertainties and the outer error bars represent the quadratic sum of statistical and systematic uncertainties. 
The inner and outer green bands correspond to the uncertainties on the averaged value.
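A minimal sketch of a BLUE combination of the two per-mode results is shown below. The correlation coefficient between the systematic uncertainties is an assumed illustrative value; the paper's combination uses the detailed source-by-source correlations of Table 1:

```python
import numpy as np

def blue_combination(values, covariance):
    """Best Linear Unbiased Estimator for correlated measurements.

    values: N measurements; covariance: full N x N covariance matrix.
    Returns the combined value, its uncertainty, and the BLUE weights.
    """
    values = np.asarray(values, float)
    cov_inv = np.linalg.inv(np.asarray(covariance, float))
    ones = np.ones_like(values)
    norm = ones @ cov_inv @ ones
    weights = cov_inv @ ones / norm
    return weights @ values, 1.0 / np.sqrt(norm), weights

# Toy input: the two per-mode results quoted above, with an ASSUMED correlation
# rho between their systematic uncertainties (not the value used in the paper).
m = [3621.53, 3621.95]
stat = [0.24, 0.60]
syst = [0.29, 0.49]
rho = 0.5
cov = np.array([[stat[0]**2 + syst[0]**2, rho * syst[0] * syst[1]],
                [rho * syst[0] * syst[1], stat[1]**2 + syst[1]**2]])
combined, sigma, w = blue_combination(m, cov)
print(f"combined mass ~ {combined:.2f} +- {sigma:.2f} MeV/c^2, weights = {w}")
```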
4,571
2019-11-19T00:00:00.000
[ "Physics" ]
Teachers’ use of ICT in implementing the competency-based curriculum in Kenyan public primary schools The use of Information and Communication Technology (ICT) in education has been widely advocated as much needed 21st-century skills by governments and policymakers. Nevertheless, several challenges in integrating ICT into the curriculum have been reported in previous research, especially in studies on Sub-Saharan African countries. Focusing on the case of Kenyan public primary schools, this study investigated the availability of ICT facilities; teacher capacity to integrate technology into their lessons; and teacher perceptions towards technology in schools. In particular, the study is premised on the constructivist learning theory and the Technology Acceptance Model. A total of 351 teachers completed an online questionnaire. Teachers perceived that ICT facilities were inadequate in schools, which presented a challenge in the integration of technology during the implementation of the new curriculum. Most of the teachers answered that they received only basic computer literacy training. Although teachers perceived the use of computers as necessary, they faced difficulties integrating technology in their lessons. The effect of age and gender on teacher capacity was also investigated in inferential statistics, specifically with Welch tests and Games-Howell post hoc comparisons. Teachers in their 40s had a higher perception of usefulness than teachers in the 30s. Implications of the study are discussed as well as future research topics. Introduction Today more than ever before, the world faces competition in all sectors as a result of the advent of a knowledge-based economy. Governments in all parts of the world are striving to achieve access and good quality education for their citizens (UNESCO, 2013). For this reason, ICT in education is seen as a means of increasing access to education especially to the rural population and making teaching and learning enjoyable. Different studies have supported the use of ICT in education as an enabler in the process of teaching and learning by assisting the learners to grasp concepts that would otherwise have remained abstract (Kozma, 1991). Other scholars contend that the use of ICT in education has little benefit because they are merely delivery mechanisms relying on the teacher's pedagogical abilities (Clarke, 1983). Amid these debates, policymakers have continued to lay foundations for the use of ICT to profit from the perceived benefits. Even in developing countries, there have been increased investments in ICTs for schools despite the lack of adequate empirical evidence on the outcomes of such efforts (Piper et al., 2015). However, the Global Innovation Index (GII) 2019 report by the World Intellectual Property Organization ranks South Africa, Kenya, and Mauritius as the leading innovation hubs in Sub-Saharan Africa. This means that there is a need to explore the Open Access Innovation and Education opportunities and the challenges that exist in these countries about technology and its use in education. In Kenya, the policymakers view ICT in education as an enabler for knowledge acquisition leading to innovation and skill development to address the challenges faced by the country's education system (Republic of Kenya, 2019). In line with Kenya's development blueprint, Vision 2030, the education curriculum has been reviewed from the 8-4-4 system to a competency-based curriculum (CBC). 
The vision of the basic education curriculum reforms is to equip learners with world-class standards and skills needed to thrive in the 21st Century such as digital literacy (KICD, 2017). To achieve this, the integration of ICT in the curriculum is emphasized in the teaching of every subject a shift from the previous system which did not include the integration of ICT in primary schools but only in secondary schools as an elective subject. Distinctly in the year 2020, education systems in all parts of the world were faced with the challenge of the COVID-19 pandemic. Governments in most countries were forced to close schools and minimize any form of gatherings to contain the spread of the deadly respiratory disease. In Kenya, UNICEF estimated that close to 20 million learners spread across the country were out of school because of COVID-19 (Brown & Otieno, 2020). Therefore, to get a better understanding of whether alternative methods of learning such as e-learning would succeed, this study focused on how teachers and schools were prepared for technology integration before the crisis. The study focused on the assessment of the availability of ICT facilities in public primary schools, teachers' ability to use technology in teaching and learning, and the perception of teachers on the usefulness and the ease of use of ICT. Since digital literacy is considered an important skill to cope with the 21st C developments, the teacher is a crucial player in the successful implementation of ICT and should be well prepared through adequate training (Hwang et al., 2010). Furthermore, a look at previous studies shows that some challenges have been hindering technology integration in the country. For instance, in a study conducted by Karsenti et al. (2012) in over ten schools around Kenya, various factors were identified as hindrances to the pedagogical integration of ICT. Some of these factors included: lack of ICT devices, the perception of ICT by teachers as time-consuming and as an additional workload, technophobia by older teachers, teachers' inadequate ICT expertise among others. To address some of the issues, the Jubilee government had a plan in 2013 to integrate ICT in education by providing laptops to all class one pupils (Muinde & Mbataru, 2019). According to Wanzala and Nyamai (2018), by July 2018 19,000 out of 23,951 public primary schools had been provided with technology devices but only 70,000 out of over 300,000 teachers had been trained just months to the rollout of CBC. A survey by the Teachers Service Commission that purposefully targeted some schools and 1200 respondents also revealed that teachers in public institutions had serious challenges in using ICT in their teaching. 84.2% of the teachers who responded to the survey agreed that they had problems with the use of technology in classrooms. The survey ranked technology integration as the top professional skills gap affecting the delivery of services by teachers (Oduor, 2018;Wanzala & Nyamai, 2018). Therefore, although similar studies have been carried out in the country focusing on the integration of ICT in education, they mostly targeted secondary schools and were done under the 8-4-4 curriculum. In the 8-4-4 curriculum ICT integration was not compulsory in the primary level of education and computer studies were taught as an elective subject in secondary schools. 
The study was guided by the following three research questions (RQ1 to RQ3): Constructivist theory The constructivist approach is based on the belief that learners can construct and create knowledge from prior experiences in their environment (Kalpana, 2014;KICD, 2017;Waweru, 2018). The proponents of this theory shift the focus from the teacher who was traditionally believed to be the source of knowledge to the learner (Wang, 2008;Waweru, 2018). Two approaches of the constructivist theory were used one targeting teachers' understanding of individual learners and the other that focuses on group learning. Constructivism can be approached in a way that targets individual learners as well as groups of learners as advanced by Jean Piaget (Kalpana, 2014;Wang, 2008). The theory explains that a learner assimilates new knowledge that adds to an existing body of knowledge. It is therefore important for teachers in the process of integrating ICT to understand that learning can be based on individual discovery and interpretation of information. This realization would help the teacher to emphasize the active participation and involvement of learners to harness their creativity and produce individuals fit for the 21st Century (Kalpana, 2014). The second approach to the constructivist theory is Vygotsky's social constructivism that emphasizes collaboration as opposed to individual learning (Waweru, 2018). The proponents of this theory argue that learners grasp concepts better when they work in mixed-ability groups where they share experiences and come up with a common understanding. In such a scenario, the teacher must create a classroom environment that is based on cooperation, democratic principles, and shared creation of content that makes the learners have a sense of ownership of knowledge (Sang et al., 2009). This theoretical understanding was crucial for this study because, in lowresource settings where ICT facilities may not be enough for every learner, the teachers can encourage collaborative learning through device sharing. Technology acceptance model The Technology Acceptance Model (TAM) is based on the user's perception of usefulness and the perceived ease of use as cited by Sharples and Modules (2014). The theory has been used widely by researchers in the field of technology in education with various modifications as well as criticism (Bagozzi, 2007). The perceived usefulness of technology relates to the conviction among users such as teachers that it will make their work or that of their learners easier thus enhance job performance (Muinde & Mbataru, 2019). This means that if teachers think that the use of computers would make their day-today activities such as preparation of lesson plans, lesson materials, or analyses of student's results more organized and accurate, then they would probably use them. The perceived ease of use of new or existing technology would mean that the users view technology as one that does not require a lot of effort to learn how to use (Venkatesh et al., 2003). This suggests that teachers would possibly adopt technology that they consider easy to learn and use with minimal need for expert consultation. Venkatesh et al. (2003) have modified the TAM to include other models in a study that created the Unified Theory of Acceptance and Use of Technology (UTAUT). 
The study came up with three variables that were thought to directly influence behavioral intention in the use of technology: performance expectancy (perceived usefulness), effort expectancy (perceived ease of use), and social influence. Venkatesh et al. (2003) posit that gender, age, experience, and voluntariness could be classified as moderator variables in the studies on the intention to use technology. They argue that based on socialization, men will prefer to use a certain technology if they perceive that it would help them to accomplish a task. The theory also suggests that the moderating effect of age could be based on the tendency for younger people to be motivated by extrinsic factors such as rewards. We used the moderator variables of age and gender of teachers to compare the differences in perception of the use of technology in education. This was based on the presumed effect of the compulsory use of ICTs in education at the primary level (KICD, 2017) in implementing the new curriculum on the constructs of voluntariness and experience. Therefore, the inclusion of voluntariness in studying a mandatory use system as well as experience in a new system would lead to inconsistencies. Global perception of ICT in education Globalization and rapid changes in technology have created a knowledge-based economy in the 21st Century. Consequently, governments have invested in the integration of ICT in education at all levels to equip the learners with the skills needed for modern life and beyond (Wambiri & Ndani, 2016). This inclusion and massive investment in educational technology is believed to have had a positive effect in some countries like South Korea where extraordinary economic growth has been experienced since the 1970s (Sanchez et al., 2011). In addition, Kozma (2003) in a cross-national comparative study of technology and classroom practices involving 28 states posits those different countries such as Taiwan, Finland, the Netherlands, Norway, and Singapore, have had educational reforms to align with global changes. The study adds that the educational reforms in these countries focused on what students learned in school and placed more emphasis on ICT training and interpersonal skills. Various studies have also reported the benefits that technology in education brings to the teachers and learners in different contexts including in developing countries. For instance, Kozma (1991) summarizes his support for the use of technology in education by arguing that different voices and sounds attract the attention of children leading to mental processes that create meaning. Aktaruzzaman et al. (2011) further assert that, when used in the right manner, ICTs in education can bring several benefits such as increased access to education making it more relevant, as well as improving the quality since they make teaching and learning an active process. The World Wide Web has revolutionized access to information and brought opportunities for e-learning and lifelong learning. Omwenga et al. (2004) argue that this kind of access will not replace the teacher but will provide an opportunity for the learners to meet experts in various fields, researchers, and fellow students. This way they can get firsthand information as well as exchange ideas with their peers from all parts of the world (Redempta, 2012). Hennessy et al. (2010) add that ICTs help in shaping a continued desire for learning that can develop throughout a person's lifetime, a skill that is needed to survive in a rapidly changing society. 
Technology in education also brings a change to the teaching methods used by teachers from the traditional teacher-centered approaches to heuristic styles (Mingaine, 2013a). This change makes classrooms interactive as learners get the opportunity to manipulate technology adding to their creativity and thinking skills needed in the 21st Century (Mwangi & Mutua, 2014). Even in large class size situations where heuristic methods could be difficult to apply, the use of technology can be of great benefit to a teacher in capturing and retaining the attention of learners (Majumdar, 2005). ICT integration in education in Kenya Kenya like other Sub-Saharan African countries has over the years embedded ICT in its education policies (Mariga et al., 2017;Muinde & Mbataru, 2019). Despite the scarcity of empirical research to show the impact of ICT in learning improvement in the country, the Kenya National Education Sector Plan 2013-2018 focused heavily on ICT integration (Piper et al., 2015). This plan had followed the National ICT policy that was enacted in 2006 to enhance the availability of efficient, affordable, and reliable technology services across all sectors of the economy (Republic of Kenya, 2006). The value for and recognition of the importance of ICT in education in achieving Kenya's development blueprint 'Vision 2030' led to the provision of tablets to all grade one learners in public primary schools in the country (Langat, 2015;Mariga et al., 2017;Muinde & Mbataru, 2019). This was followed by curriculum reforms aimed at providing every learner in the country with core competencies and world-class digital literacy skills needed to be competitive in the 21st Century (Maluei, 2019). Status of ICT infrastructure in schools For effective implementation of the policies on ICT in education, there should be adequate infrastructure and facilities. Liang et al. (2005) in a study that draws from 6 years of experience in analyzing the digital classroom environment suggest that some basic facilities are fundamental for ICT integration. They posit that for effective use of technology in education classrooms should be equipped with learner's devices, teacher's devices, shared display projectors, network connectivity as well as other enabling installations. This argument is corroborated by Mingaine (2013b) who notes that facilities such as power, computer devices, software, and connectivity are essential for effective ICT integration. Further, a study by Langat (2015) found out that, infrastructure and ICT equipment shortages were among the challenges facing the implementation of ICT in primary schools in Kenya. The study that targeted 40 primary schools and 450 teachers noted that 94% of the schools did not have ICT equipment, all schools had a shortage of classrooms and only two private schools had functional computer laboratories. Similar challenges were noted in other studies that identified inadequate or limited academic use of computers in primary schools in Kenya as well as a lack of digital customization of classrooms (Tonui et al., 2016;Muinde & Mbataru, 2019). Teacher capacity for ICT integration Research has demonstrated that ICT in education helps in creating opportunities for the learners to develop 21st Century skills but this depends on the digital literacy of teachers (UNESCO, 2012). 
Studies on the capacity of teachers in primary schools in Kenya show that, despite the policy formulation for ICT in education and financial investment, the integration of technology in Kenyan classrooms remains low (Piper et al., 2015). For instance, Langat (2015) found that most of the teachers in the study on barriers hindering the implementation of ICT in primary schools in Kenya lacked computer literacy skills. Despite being aware of the importance of technology in education, the teachers blamed the government for the lack of effective planning to offer them in-service training on the use of technology in teaching and learning. Similar sentiments were made by teachers in a study by Abobo (2018) who asserts two-thirds of the respondents could not integrate technology in the teaching of Kiswahili language. Further, Omolo et al. (2017) also found that student-teachers were able to practice the use of technology in the teaching of Kiswahili in classrooms after learning from their tutors. Both studies suggest that the teachers were willing to apply technology in their teaching after going through training sessions. However, in some cases where teachers received training, it was basic computer literacy on computer programs such as Microsoft Office and Excel that did not equip them for technology integration in classrooms (Mwangi & Khatete, 2017). Comparably, Wambiri and Ndani (2016) opine that their analysis of documents on primary teacher training in Kenya proved that there was a gap in the pedagogical use of ICT. A study by Muinde and Mbataru (2019) in Machakos County, found that 85% of teachers had received ICT training from the ministry of education. However, 62.3% of the trained teachers felt that the training was not appropriate for teaching and learning. The findings in this study corroborate Majumdar (2005) who observed that most teachers who receive ICT training as part of the professional development (PD) programs still lacked the self-reliance needed to integrate ICT in teaching and learning because in most cases due to time limitations the training only focused on computer applications. Further, a study to establish teachers' computer skills in public primary schools was carried out in Homa Bay County by Omito et al. (2019). They used a crosssectional survey design to collect data from 362 teachers and 85 headteachers. The findings indicated that the number of teachers trained by the government was low, and as argued by Omito et al. (2019) the situation was so since the trained teachers were supposed to train their colleagues. Ngeno et al. (2020) had a similar finding in Ainamoi sub-county that the PD training for teachers did not include all teachers. This study by Ngeno et al. (2020) relates to research by Sharples and Moldeus (2014) that sought to establish the perception of teachers on the readiness for the adoption of technology in public primary schools. The mixed-method case study focused on multi-sites covering different parts of Kenya such as Nairobi, Nakuru, Mandera, and Turkana to compare the integration in both urban and rural areas. Their findings show that only 8% of the teachers felt adequately prepared to use technology in their day-to-day teaching despite 78% of the respondents saying that they perceived computers as easy to use. The study concluded that this difference between the perception of the ease of use and actual use in classrooms was occasioned by poor training on ICT integration. 
Teacher perceptions on ICT integration Studies on how perception affects the integration of ICT in education show that what teachers think about the use of technology affects their acceptance and subsequent application in their activities (Wambiri & Ndani, 2016). They argue that the government's investment through the provision of devices without addressing teachers' attitudes and beliefs may not yield the desired results. In a study to assess teachers' beliefs, attitudes, self-efficacy, computer competency, and age, Wambiri and Ndani (2016) found out that younger teachers had a high positive attitude towards technology. This finding they add could be attributed to the younger teachers having received technology training in the teacher training colleges. However, Bebell et al. (2004) observe that teachers' age and the years of service should be used and interpreted sparingly concerning technology use in schools. They argue that in some specific uses of technology the difference by age would be insignificant if a multi-faceted approach were to be applied in measuring technology usage. A study on the perception of teachers towards the usefulness of ICT in schools was also conducted by Buliva (2018) in Vihiga County in Western Kenya. The study that used a convenient sample of teachers from the county used the variable of gender to determine whether there were statistically significant differences between male and female teachers. The results obtained from an independent samples t-test suggested that there was no statistically significant difference between the mean scores of male teachers. The study concluded that there was no statistically significant difference in perception of the usefulness of computers between the teachers by gender in the County. While studying the implementation of the laptops project in public primary schools, Muinde and Mbataru (2019) found that 68.5% of the sampled teachers had a high perception of the use of laptops in their teaching and learning. However, they established that 39% of the teachers felt that the time allocated for the integration of technology was not adequate and that most of their lessons were spent assembling the gadgets. In such circumstances, teachers are more likely to resist the use of ICTs in their teaching if they feel that they will spend more time and effort to make them work (Omwenga et al., 2004). The perception of time and ICT integration was also noted by Heinrich et al. (2020) in a study on the potential and prerequisites of effective tablet integration in rural Kenya. The mixed-method study that involved classroom observation, teacher interviews, student surveys, and focus groups, found that teachers often excluded students perceived to be slow learners during technology integration. Some of the teachers interviewed said that they could not cater to the learners experiencing academic challenges due to the limited time in a lesson. The study recommends more professional development of teachers to equip them with the pedagogical ability to accommodate all learners including those with disabilities in a technology-integrated classroom. Participants Among the population of 1,436 teachers, this study targeted 30% of them (Mugenda & Mugenda, 2003), which was 430. Specifically, convenience and snowball sampling were executed, which was inevitable in the prevailing circumstances occasioned by the global COVID-19 pandemic. 
By employing snowball sampling, a small number of teachers in the target population responded to the questionnaire and then were asked to assist in reaching out to other prospective participants (Cohen et al., 2018). As teacher gender and age were frequently utilized in previous research, they were put into consideration in sampling. Given that previous research on ICT integration in Kenya has focused on urban areas, more representative sampling incorporating non-urban teachers is warranted (Newby, 2014). Among the 430 sampled teachers, 351 teachers completed the questionnaire with a response rate of 81.6%. The participants were teachers in urban (54.7%) and non-urban (45.3%) areas. They consisted of 4 age groups: 20s (15.1%), 30s (55.3%), 40s (23.6%), and 50s (6.0%). Male teachers comprised 61% of the sample. Research instrument and data analysis A pilot study was conducted to obtain the content validity of the instrument. The process of pre-testing the instrument was done in a neighboring Sub-County outside the area of study but with similar conditions. The respondents were purposively selected from experienced teachers who were asked to comment on the relevance of the content, clarity of the questions, and the time taken to complete the questionnaire. Some items were modified or deleted to accommodate the feedback, which led to the revised questionnaire of 17 items. Frequencies and percentages of the 17 survey items were presented to answer the descriptive part of the three research questions: Items F1 to F6 for RQ1; C1 to C5 for RQ2; and P1 to P6 for RQ3. With regards to the inferential part of the research questions of RQ2 and RQ3, Cronbach's alphas of the subscales were calculated before proceeding further. The Cronbach's alpha of all the 17 items was 0.754, but some of the items were removed to increase the internal consistency of the subscales to answer inferential research questions. Specifically, items C1, C2, C4, and C5 had the Cronbach's alpha of 0.70, and the average of the four items served as the dependent variable of RQ2, teacher capacity for ICT integration. Likewise, the average of P1 and P3 to measure teacher perception on ICT usefulness served as the dependent variable of RQ3, the Cronbach's alpha of which was 0.66. According to Nunnully (1978), Cronbach's alpha at or above 0.70 is acceptable as a test for the internal consistency of an instrument. The subscale internal consistency of teacher perception on ICT usefulness was slightly lower but close to the nominal value of 0.70. For inferential statistics, two-way ANOVAs were initially conducted with gender and age as independent variables for each of the dependent variables. However, Levene's tests indicated violations of the equal variance assumption. We instead employed the Welch test, a robust statistic used in violations of the equal variance assumption (Welch, 1947). When the Welch test was statistically significant, Games-Howell post hoc tests were conducted for pairwise comparison groups. For the 4 age groups, there were a total of 6 (= 4 combination 2) comparisons per dependent variable. Availability of ICT facilities The first research question (RQ1) was to investigate the ICT infrastructure availability in public primary schools for the effective implementation of digital learning. The results on the availability of ICT devices are summarized in Table 1. Most of the schools (87.7%) lacked internet connectivity (F1). 
Approximately 70% of the teachers also answered that their schools did not have projectors as part of the shared devices essential for the integration of technology in schools (F2). Further, teachers indicated that their schools lacked the customization required for the introduction of digital devices. Specifically, 80% of them answered that their classrooms and computer laboratories did not have sockets and power extension cables (F6), and 73.5% of them also said that they did not have access to the laptops provided by the government (F4). Despite the challenges faced by teachers in accessing devices, 55.8% of the teachers reported that learners had relatively high access to tablet PCs (F5) and 82.9% of them reported a reliable power supply (F3).

Teacher capacity for ICT integration

The second research question (RQ2) investigated teachers' ability to use technology in the performance of their duties (Table 2). Most of the teachers in public primary schools had basic computer skills. The high percentage of teachers with basic computer skills was corroborated by the finding that 77.7% of the respondents had received basic computer training as part of their teacher training course. Although many teachers received technology training as part of their pre-service course, we found that there was a challenge in the follow-up in-service training: when asked whether they attended in-service training after the start of the new curriculum, most teachers indicated that they had not.

Teacher perceptions on usefulness

Despite the challenges faced by teachers in terms of the availability of facilities and inadequate training, our study demonstrated that teachers had a high perception of technology use (Table 3). The results show that almost all the teachers (98.9%) believed that technology would make them more organized and enable student-centered learning to take place in their schools. Further, there was a strong belief that the integration of technology would enhance collaboration among learners, as shown by the 67.5% of teachers who responded in the affirmative (RQ3). Teachers also had a positive attitude towards the usefulness of technology to themselves, as 97.7% of the respondents felt that the integration of technology would make teachers more organized in their duties. However, the study found that 52.7% of the teachers perceived ICT to be time-consuming and to require more time allocation in the school timetable for successful integration. The findings also suggest that teachers were worried about learners' access to the internet, as 60.1% of the teachers considered it unsafe.

Inferential statistics on teacher capacity and perceived usefulness

The effect of age. Age had a statistically significant effect on the perception of usefulness (RQ3, p < 0.001), but no statistically significant effect on teacher capacity (RQ2, p = 0.059) (Table 4). The Games-Howell post hoc tests indicated that teachers in their 40s (M = 3.40, SD = 0.34, n = 83) had a higher perception of usefulness than those in their 30s (M = 3.15, SD = 0.36, n = 194). Other groups were not statistically different in terms of the perception of usefulness or teacher capacity.

The effect of gender. Neither teacher capacity (RQ2) nor perceived usefulness (RQ3) differed significantly by gender (Table 5): male and female teachers did not show a difference in terms of teacher capacity or perceived usefulness.
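As an illustration of the reliability and group-comparison statistics described above, the following sketch computes Cronbach's alpha and a one-way Welch ANOVA with plain NumPy/SciPy. The simulated responses and group sizes are assumptions for demonstration only; a Games-Howell post hoc comparison could then be run, for instance with the pingouin package:

```python
import numpy as np
from scipy.stats import f as f_dist

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

def welch_anova(groups):
    """One-way Welch ANOVA (robust to unequal variances).

    Returns the Welch F statistic, numerator and denominator degrees of freedom,
    and the p-value from the F distribution.
    """
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g, ddof=1) for g in groups])
    w = n / variances
    grand_mean = np.sum(w * means) / np.sum(w)
    num = np.sum(w * (means - grand_mean) ** 2) / (k - 1)
    tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    den = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
    F = num / den
    df1 = k - 1
    df2 = (k ** 2 - 1) / (3 * tmp)
    return F, df1, df2, f_dist.sf(F, df1, df2)

rng = np.random.default_rng(0)

# Toy 4-item subscale (in the spirit of items C1, C2, C4, C5) on a 1-5 Likert scale.
latent = rng.normal(size=(351, 1))
items = np.clip(np.round(3 + latent + rng.normal(0, 0.8, size=(351, 4))), 1, 5)
print(f"Cronbach's alpha ~ {cronbach_alpha(items):.2f}")

# Toy usefulness scores for the four age groups (20s, 30s, 40s, 50s), with group
# sizes implied by the reported sample composition.
groups = [rng.normal(3.25, 0.40, 53), rng.normal(3.15, 0.36, 194),
          rng.normal(3.40, 0.34, 83), rng.normal(3.25, 0.40, 21)]
F, df1, df2, p = welch_anova(groups)
print(f"Welch F({df1}, {df2:.1f}) = {F:.2f}, p = {p:.4f}")
```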
Discussion Following the importance attached to technology in most parts of the world in almost all sectors, developing countries also have had to make the necessary investments and changes to cope with the 21st Century developments. As a result, education systems have been changed and curricula adjusted to have technology integration in schools. Our study sought to establish the preparedness of Kenyan primary schools for the rollout of mandatory technology use in all subjects of the new curriculum. On infrastructure development, our findings show that shared devices (i.e., projectors, sockets, and extension) cables were not available in most public primary schools. Although access to a computer or laptop by teachers is key in the integration of technology in education (Liang et al., 2005), teachers in most primary schools did not have access to these devices. The findings were consistent with other studies that pointed at the lack of devices for teachers as a threat to technology integration in Kenyan schools (Langat, 2015;Tonui et al., 2016;Mingaine, 2013aMingaine, , 2013b. This reveals a challenge that has existed over the years despite the significance attached to ICT availability (Langat, 2015;Liang et al., 2005) a situation that calls on stakeholders to prioritize infrastructure installation (Mingaine, 2013a). On the other hand, learners had relatively high access to technology devices such as tablet PCs. The power supply in schools also appears reliable, which could be attributed to the government's commitment and investment towards digital learning in public primary schools in the country (Muinde & Mbataru, 2019;Piper et al., 2015). Since not all schools had a one-to-one ratio in terms of technology devices like tablet PCs, Heinrich et al. (2020) suggest that the teachers in such settings could change their approach by encouraging peer collaborative learning as learners share the available devices. This argument supports the social constructivist approach by Vygotsky that emphasizes collaboration as opposed to individual learning (Waweru, 2018). As Sang et al. (2009) explain, teachers in areas without adequate ICT devices need to apply teaching methods that create an environment of cooperation and democracy to enable content sharing among learners. Nonetheless, for this to happen a teacher needs to be equipped with the requisite technology integration skills to be able to assess the learners' use of technology and their use in instruction. For this reason, we sought to investigate the teachers' capacity for technology integration in primary schools. The findings pointed to an increase in computer literacy among primary school teachers which has been highlighted as a key determinant in the successful integration of technology in various studies (Hwang et al., 2010;UNESCO, 2012). The results were consistent with previous research which attributed the increase in the number of computer-literate teachers with the introduction of computer courses in the Kenyan teacher training colleges (Omito et al., 2019;Muinde & Mbataru, 2019). However, although computer literacy among teachers is important, it does not guarantee that teachers would use technology in their lessons (Mwangi & Khatete, 2017;Wambiri & Ndani, 2016) because of gaps in the pedagogical application in actual teaching. Relatedly, we found that most teachers did not integrate ICT in their lessons and had not attended in-service training after the start of the implementation of the new curriculum. 
This corroborates other studies which concluded that computer literacy training was not enough to guarantee the integration of technology and that teachers needed a deeper understanding of the pedagogical use of ICT (Omito et al., 2019;Ngeno et al., 2020;Sharples & Moldeus, 2014). Further, we found that younger teachers had better technology integration skills compared to older teachers consistent with previous studies which showed that age correlates negatively with skill level in the use of technology (Harrison & Rainer, 1992 cited by Wambiri & Ndani, 2016). However, as noted by Bebell et al. (2004) teachers' age and years of work may not be conclusive in the measurement of teachers' technology use. Therefore, a study designed to include a variety of technology uses in schools would give a more detailed account of how teachers interact with technology daily. Despite the skill gap that exists among teachers in technology integration, our study shows that generally, teachers had a high perception. Similarly, Wambiri and Ndani (2016) concluded that teachers in Kenyan primary schools had high attitudes towards the use of various technologies indicating that with the requisite support the use of ICT in schools would be achieved. This is also supported by the finding that teachers had the high belief that ICT use would not only benefit them in the organization of instruction but also their learners. The perception of the usefulness of technology to learners by teachers is important because it helps the teacher to invoke the innovativeness and creativity of the learner (Kalpana, 2014;KICD, 2017;Wang, 2008;Waweru, 2018). The perception of technology as time-consuming, however, can be attributed to inadequate training on the pedagogical use of ICT as found in previous studies (Sharples & Moldeus, 2014). This means that due to inadequate preparation, such teachers would need the help of computer technicians for successful integration. According to Heinrich et al. (2020), the teachers' beliefs about time and the effort needed for technology integration generally affect their perception of the ease of use and perceived usefulness to their learners. The perception of learner safety while using the internet could be attributed to inadequate teacher preparation for the safe use to both learners and teachers. We also analyzed the effect of age and gender on the perception of usefulness and age. Teachers in their 40 s found ICT more useful than their counterparts in the 30 s. This finding was different from previous research that found the perception to be higher among younger teachers (Wambiri & Ndani, 2016). This difference could have been occasioned by sample composition in our study since the number of teachers in the 30 s was two times more than those in the 40 s. However, Bebell et al. (2004) warn that it is not obvious that younger teachers would have a higher perception of technology. A test of how teachers of different ages perceive the usefulness of specific technologies in the performance of their duties would lead to a more detailed analysis. Additionally, our analysis on the effect of gender on the perceived usefulness of technology among teachers did not show any statistical difference. This was consistent with Buliva (2018) who found no significant difference in the perception of technology use among teachers by gender. It, therefore, suggests that exemplary performance in the integration of technology should be expected from all teachers. 
The results also indicate that policymakers should formulate ways to equip male and female teachers with technology integration skills since they all have high perceptions and significant skill gaps. However, Venkatesh et al. (2003) noted that based on socialization, men would perceive certain technology as more useful if it allowed them to accomplish a task faster. Limitations and areas of future research The sampling schemes can be improved in subsequent research. The online survey combined with convenience sampling was an unavoidable choice at the time of data collection; the Global COVID-19 pandemic led to the closure of schools in Kenya, which may have caused sampling bias and limit the generalizability of the findings. Particularly, only 6% of the respondents were in the age bracket of 50 s, while there were 29% of them in the population. Male teachers were also oversampled in our study. While we had 61% male and 39% female teachers, the proportion in the population was 3:7. We should be cautious in interpreting the findings relating to this class of respondents. Follow-up studies are also recommended to take additional steps to increase validity of the instrument such as obtaining content validity ratio (CVR). Further, our use of the Technology Acceptance Model (TAM) as the theoretical base of the study could have left out other constructs that would give further understanding of acceptance of ICT. We, therefore, recommend the use of other models such as the United Theory of Acceptance and Use of Technology (UTAUT) in further studies to include other constructs such as social influence and facilitating conditions which would improve the prediction of the intention to use technology. A replication of this study using a mixed-methods approach would give an in-depth understanding of the issues affecting the implementation of ICT integration in Kenya and other developing countries. More research is needed on the perceptions of technology use among teachers in their 30 s and 40 s as well as the effect of gender on the capacity and perception of teachers. A study on how teachers are using technology for the formative assessment of learners in various subjects would also contribute to accumulating knowledge on the progress of ICT integration in all areas of the curriculum. It would be important to study head teachers' use of technology in the supervision of curriculum implementation. Future research may also focus on the perception of male and female teachers on the usefulness and ease of use of a specific technology in accomplishing various tasks. Finally, it would be important to do a comparative study between the East African countries since they are in the process of implementing the harmonized curriculum structures and framework for primary education. Conclusions The findings from this study suggest that the ICT facilities were inadequate including laptops for teachers, projectors, tablets PC devices for pupils, as well as other enabling installations. There is a need to provide computers to teachers so that they can easily access materials and prepare for technology integration. This will help to familiarize the teachers with computer hardware and software hence reducing the need for computer technicians in schools. Secondly, we noted that although most of the teachers had basic computer literacy there was a challenge in technology integration due to inadequate pedagogical knowledge on integration. 
Teachers implementing the new curriculum should be involved in frequent PD programs and training that goes beyond basic computer literacy to technology integration in various subjects. In circumstances where the shortage of devices is inevitable, teachers should be trained on how to encourage collaboration among learners through the sharing of the technology devices and working on tasks as a team.
9,077.6
2021-08-23T00:00:00.000
[ "Education", "Computer Science" ]
An Economic Analysis of Pigeonpea Seed Production Technology and Its Adoption Behavior: Indian Context The present study was based on primary data collected from 100 farmers in Gulbarga district of Karnataka, India, during the agricultural year 2013-2014. Study shows that average land holding size of pigeonpea seed farmers was higher in comparison to grain farmers and district average. The study illustrates a ratio of 32 : 68 towards fixed and variable costs in pigeonpea certified seed production with a total cost of ₹ 39436 and the gross and net returns were ₹ 73300 and ₹ 33864 per hectare, respectively. The total cost of cultivation, gross return, and net return in pigeonpea seed production were higher by around 23, 32, and 44 percent than grain production, respectively. Hence, production of certified seed has resulted in a win-win situation for the farmers with higher yield and increased returns. The decision of the farmer on adoption of seed production technology was positively influenced by his education, age, land holding, irrigated land, number of crops grown, and extension contacts while family size was influencing negatively. Higher yield and profitability associated with seed production can be effectively popularized among farmers, resulting in increased certified seed production. Introduction Pigeonpea (Cajanus cajan (L.) Millsp.) is one of the proteinrich legumes of the semi-arid tropics grown throughout the tropical and subtropical regions of the world. In India its major area is lying between 14 ∘ and 28 ∘ N latitude, where the majority of the world's pigeonpea is produced [1]. According to FAO statistics [2], worldwide pigeonpea was grown in about 4.23 million hectares with a production and productivity of 4.68 million tons and 751 kg/ha, respectively. India is the largest producer of pigeonpea accounting for 66 percent of total production and the other major pigeonpea producing countries are Myanmar (17.09 percent), Malawi (6.15 percent), Kenya (4.36 percent), and United Republic of Tanzania (5.29 percent). Pigeonpea ranks second after chickpea among all the pulses in the country and normally cultivated during kharif season. In India, it occupies an area of 3.81 million hectares with a production and a productivity of 3.07 million tons and 806 kg/ha, respectively [3]. Pigeonpea is an important crop of Karnataka state in India and contributing around 18 percent and 12 percent to total area and production, respectively [3]. As far as importance of seed is concerned, it is the vital input for attaining sustained growth. Quality seed production is a specialized activity that paves way for initial assurance towards realization of higher output. The general farm saved seed cannot be substituted for quality seed, as it generally lacks genetic vigour and has poor germination [4]. A sustained increase in agricultural production and productivity depends on development of new improved varieties and adequate supply of quality seed to the farmers at the right time. It is estimated that the direct contribution of quality seed alone to the total production is about 15-20 percent depending upon the crop and it can be further raised up to 40 percent with effective management of other inputs [5]. 
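As a quick consistency check of the per-hectare figures quoted in the abstract, the grain-production values implied by the stated percentage differences can be backed out. These implied values are approximations for illustration only, not figures reported by the study:

```python
# Back-of-the-envelope check of the per-hectare figures quoted in the abstract
# (values in Indian rupees).  The grain-production figures are INFERRED from the
# stated percentage differences and are therefore only approximate.
seed_total_cost = 39_436
seed_gross = 73_300
seed_net = seed_gross - seed_total_cost
print(seed_net)                                  # 33,864, as reported

grain_total_cost = seed_total_cost / 1.23        # seed cost ~23% higher than grain
grain_gross = seed_gross / 1.32                  # gross return ~32% higher
grain_net_from_pct = seed_net / 1.44             # net return ~44% higher
grain_net_direct = grain_gross - grain_total_cost
print(round(grain_total_cost), round(grain_gross),
      round(grain_net_from_pct), round(grain_net_direct))
# The two implied grain net returns (~23,500 and ~23,470) agree to within rounding,
# so the quoted percentages are mutually consistent.
```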
Various factors influence costs and returns in pigeonpea seed production, affect its profitability, and have different impacts on adopters of seed production as well as on grain producers; this necessitates studies of the production economics of quality seed production and of its adoption among farmers. Following the agricultural technology revolution of the 1970s in India, there has been a large improvement in the adoption of quality seed production technology among farmers. Realizing the potential of the quality seed sector, the Government of India (GOI) initiated various policies and projects towards public and private seed sector development in the country. The Indian Council of Agricultural Research (ICAR), the apex body in India for undertaking and coordinating agricultural research, shoulders the major responsibility for varietal development, especially for pulses, which fall in the high-volume, low-value category. In the case of pulses, promising varieties and hybrids reach farmers through various extension activities and government initiatives aimed at the promotion of new varieties and quality seed usage in farming. As adoption of quality seed production aims to meet the existing regional demand for new and promising varieties of various crops, there is a need to understand the factors affecting farmers' decision making on adoption. Studies by Mariano et al. [6] and Feder and Umali [7] suggest that a farmer's decision to adopt an agricultural technology depends on farm household characteristics such as socio-economic, institutional, and environmental factors. According to Alene et al. [8], individual household level analysis is the main approach to adoption studies, in which the factors influencing farmers' behavior are analysed to understand the reasons behind adoption of the improved agricultural technology in question. The underlying assumption of an adoption study is that an innovation exists and that the study of adoption decisions evaluates the determinants of its adoption. Guei et al. [9] reported that improving farmers' skills and knowledge in aspects such as seed storage, seed quality management, marketing, accounting, and assessing new varieties could enhance the uptake and spread of new varieties and improved practices and will help to keep small-scale seed production enterprises commercially viable. Diverse studies present a range of factors, such as gender, age, education, land holding, livestock holding, and extension visits, to explain the adoption of technology in farming. From the above discussion it is obvious that different factors need to be studied for their positive or negative influences, which either contribute to or undermine the development process in the technology adoption regime of the Indian seed sector. Considering these facts, the present study was undertaken with the objectives of examining the socio-economic condition of pigeonpea seed growers, the economics of certified seed production of pigeonpea in comparison to grain production, and the constraints in certified seed production of pigeonpea, and of analysing the factors that governed the farmers' decision to adopt seed production technologies in the selected study area of Karnataka state in India. Methodology Numerous reviews are available on the use of models to analyse the determinants of new technology adoption in farming. The influence of various socio-economic factors on the willingness of decision makers to adopt new technologies has been investigated by a number of studies [10][11][12][13].
In most studies of adoption behavior, the dependent variable is constrained to lie between 0 and 1, and the models used are exponential functions [14]. Adeogun et al. [15] summarized in one of their studies the use and choice of models, from among the various available models, for the analysis of the determinants of technology adoption decisions. That study stated that, in the case of adoption behavior, the dependent variable is constrained to lie between 0 and 1 and the models used are exponential functions, while univariate and multivariate logit and probit models, including their modified forms, have been used extensively to study the adoption behavior of farmers and consumers. According to Shakya and Flinn [16], the probit model is recommended for functional forms with limited dependent variables that are continuous between 0 and 1, whereas logit models are for discrete dependent variables. The logit and probit models behave similarly; the major difference is that the logistic distribution has slightly fatter tails. The dependent variable used in this study is discrete and dichotomous in nature (mutually exclusive and exhaustive), so a binary logit model was followed, containing one dependent variable with two categorical outcomes (whether or not the farmer belongs to the category of adopters). This study examined more than one independent variable to predict the outcome probability, and thus a multivariate binary logit model is used for the analysis. The logit model, which is based on the cumulative logistic probability function, is computationally easier to use than other types of models and also has the advantage of predicting the probability of farmers adopting any technology [15]. Logit Model. The multivariate logit model and its variants have been used extensively to study the adoption behavior of farmers, based on the general recommendation of its use in predicting dichotomous outcomes associated with farmer decisions on adoption. The multivariate logit model specified below was estimated using the maximum likelihood method. The two basic equations of the multivariate logit regression model are as follows. Equation (1). The logit model assumes that the underlying explanatory variables are random variables which predict the probability of seed production technology adoption: $P_i = \frac{1}{1+e^{-Z_i}}$, with $Z_i = \beta_0 + \sum_{j=1}^{7} \beta_j X_{ij}$, which gives the probability of the outcome event given the explanatory variables $X_1, X_2, \ldots, X_7$. Equation (2). According to Menard [17], the logit can be specified as $\mathrm{logit}(P_i) = \ln\!\left(\frac{P_i}{1-P_i}\right)$, which shows that logistic regression is really just a standard linear regression model once we transform the dichotomous outcome by the logit transform; the transform maps the range of $P_i$ from (0, 1) to $(-\infty, +\infty)$, as usual for linear regression. The probability of quality seed production adoption is specified as a function of economic and social factors. It is represented as follows: $\ln\!\left(\frac{P_i}{1-P_i}\right) = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \cdots + \beta_7 X_{i7} + U_i$, where $P_i$ is the probability that the $i$th farmer is an adopter of seed production technology, $1-P_i$ is the probability that the $i$th farmer is a nonadopter of seed production technology, $\beta_j$ are the logit coefficients ($j = 0, 1, 2, \ldots, 7$), and $U_i$ are random disturbances ($i = 1, 2, 3, 4, \ldots, 100$). The definitions and measurement of the explanatory variables are presented in Table 1. Several independent variables are used in the analysis: education status of the farmer, size of land holding, size of irrigated land, size of the family, age of the farm household head, number of extension contacts of the farmer, and number of crops grown on his farm.
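The logit specification above (maximum likelihood estimation followed by marginal effects) can be reproduced with standard statistical software; the paper itself uses SAS 9.3. The sketch below is only an illustration of that workflow: the column names mirror the explanatory variables in Table 1, but the data frame and coefficient values are hypothetical stand-ins, not the authors' survey data.

```python
# Illustrative sketch of the logit estimation described above (not the authors' SAS code).
# Column names follow Table 1; the data are hypothetical stand-ins.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100  # 50 seed growers + 50 grain growers in the study
df = pd.DataFrame({
    "education": rng.integers(0, 15, n),   # years of schooling
    "age":       rng.integers(25, 70, n),
    "land":      rng.uniform(1, 15, n),    # land holding (ha)
    "irrigated": rng.uniform(0, 5, n),     # irrigated land (ha)
    "family":    rng.integers(2, 10, n),
    "crops":     rng.integers(1, 6, n),
    "extension": rng.integers(0, 12, n),   # extension contacts
})
# Hypothetical adoption indicator (1 = adopter of seed production technology)
z = -4 + 0.15 * df["land"] + 0.4 * df["extension"] - 0.1 * df["family"]
df["adopter"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-z))).astype(int)

X = sm.add_constant(df.drop(columns="adopter"))
res = sm.Logit(df["adopter"], X).fit(disp=False)   # maximum likelihood estimation
print(res.summary())                               # coefficients, LR chi-square, p-values
print(res.get_margeff(at="overall").summary())     # marginal effects on adoption probability
```

Averaging the per-observation effects (at="overall") is one common way to obtain mean marginal effects of the kind reported later in Table 6.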
These factors were chosen on the assumption that they can act as determinants of the farmer's technology adoption decision. Most of the factors used in the analysis are expected to work in such a way that the more favorable or intensive the factor, the more likely it is to contribute towards adoption of the new technology. The above logit model and the marginal effects of selected variables on the probability of adoption of seed production technology were analysed using the statistical software SAS 9.3. Data. The data on which the empirical model is based were drawn from a sample of one hundred farmers (fifty seed growers and fifty grain growers of pigeonpea) in Karnataka state using a random sampling procedure. A structured questionnaire was used for collecting information from the farmers, and the primary sample survey was conducted in Gulbarga district of Karnataka. A purposive sampling procedure was used for selecting the study area, as the district has the highest area under pigeonpea in Karnataka state, around 56 percent of the total area under pigeonpea during 2009-2010 [18]. The dependent variable is dichotomous: farmers who were using the technology were categorized as adopters, while those not using it were nonadopters. Data on socioeconomic parameters, the various inputs used in the grain and seed production of pigeonpea, and their costs and returns were collected for the agricultural year 2013-2014. Tabular analysis was used to compare the different values of the farm economy and other aspects of farm business, and weighted averages were used for the average analysis. Land Holding. The data pertaining to the average land holding of the sample pigeonpea farmers are given in Table 2. The analysis shows that the majority of seed farmers belong to the medium category (4-10 ha), followed by the large (10 ha and above) and semimedium (2-4 ha) categories. The overall average land holding size of pigeonpea seed farmers was 9.46 ha, followed by grain farmers (3.71 ha) and the district average (2.37 ha). The area under certified seed production among pigeonpea seed growers is given in Figure 1 (seed production 38.01 percent, grain production 61.99 percent). Around 38 percent of the area of pigeonpea seed growers was under pigeonpea seed production and 62 percent under grain production. Cropping Pattern. The major crops grown in the study area were pigeonpea, jowar, and Bengal gram. The cropping pattern of the study area is depicted in Figure 2. It shows that pigeonpea ranked first (38.10 percent of gross cropped area), followed by jowar (21.38 percent), Bengal gram (17.17 percent), sunflower (3.67 percent), black gram (3.11 percent), green gram (2.25 percent), bajra (2.15 percent), wheat (1.70 percent), and others (10.47 percent). The cropping intensity of the study area was 109. Irrigation. The irrigation status of the study area is shown in Figure 3. In Gulbarga district, only 10.32 percent of the area was irrigated, while the net irrigated area of pigeonpea grain producers and seed producers was 10.49 and 20.44 percent, respectively. The major source of irrigation was wells and tube wells (around 66 percent of the irrigated area). Pigeonpea Varieties. The varieties used by seed growers in the study area are presented in Figure 4. Around 78 percent of the area was under TS 3R and 22 percent under ICPL 8863. The major characteristics of these two varieties are presented in Table 3. Costs and Returns in Seed Production. The costs and returns in pigeonpea certified seed production are presented in Table 4. The ratio of fixed and variable cost in pigeonpea seed production was 32 : 68.
Human labour was the major cost component among the inputs applied for seed production of pigeonpea; its share in total costs was about 32.46 percent, followed by bullock and machine labour, accounting for about 12.29 percent of the total cost of pigeonpea seed production. The share of seed cost in the total input cost was about 2.49 percent. The cost of manures and fertilizers accounted for about 8 percent, and the cost of plant protection measures for about 6.85 percent. The total cost of certified seed production of pigeonpea was ₹ 39436 per hectare. The average yields of pigeonpea quality seed and reject seed were 12.5 quintals and 1.7 quintals, respectively, with a byproduct turnout of 31 quintals. The gross and net returns were ₹ 73300 and ₹ 33864 per hectare, respectively. Comparison of Pigeonpea Grain and Seed Production. The total cost of cultivation in pigeonpea seed production was around 23 percent higher than in grain production, while the gross return was about 32 percent higher in seed production (₹ 73300/ha) than in grain production (₹ 55700/ha). Consequently, the net return from seed production of pigeonpea (₹ 33864/ha) was 44 percent higher than from grain production (₹ 23502/ha). Hence, production of certified seed has resulted in a win-win situation for the farmers, with higher yield and increased returns. A graphical presentation of costs and returns in pigeonpea grain and seed production is given in Figure 5. Constraint Analysis. The factors constraining adoption of pigeonpea seed production technology, as perceived by grain producers, are presented in Table 5. Small holding size was the most important constraint hindering adoption of pigeonpea seed production technology, as opined by 76 percent of the farmer respondents. The other factors constraining seed production technology were nonavailability of labour, nonavailability of quality seed, high cost of cultivation, and lack of knowledge and marketing of the product, in that order. Adoption Behavior. In the logit framework discussed above, the probability of a respondent adopting seed production technology depends on the socio-economic characteristics of the respondents, that is, education, age of household head, land holding, family size, irrigated land, number of crops grown, and extension contacts. For the specified logit model, the model fit statistics show a likelihood ratio chi-square value of 59.0828 with a p value of <0.0001, indicating that the model as a whole fits significantly. The Score and Wald tests also indicate that the model is statistically significant. The estimates of the logit model are presented in Table 6 (number of observations 100; ** and * indicate significance at the 5-percent and 10-percent probability level, respectively). The logit model revealed that the farmer's decision on adoption of seed production technology was positively influenced by his education, age, land holding, irrigated land, number of crops grown, and extension contacts, while family size influenced adoption of seed production technology negatively. Only two of the seven variables included in the model were significant: extension contacts were significant at the 5-percent probability level and land holding at the 10-percent probability level. Both significant variables positively influenced the farmers' decision on adoption of seed production technology. All the other variables included in the model were nonsignificant.
To gauge the magnitude of each variable's influence, the marginal effects of the variables on the probability of adoption have been calculated. The marginal effects of the selected variables on the probability of adoption of seed production technology are also presented in Table 6. All variables except family size have a positive marginal effect on the probability of adopting seed production technology. The variable extension contact has the highest marginal effect on the probability of adoption (mean value 0.1038), indicating that each additional extension contact is associated with a 10.38 percentage-point higher probability of adoption of seed production technology by the farmers. Similarly, each additional unit of irrigated land is associated with a 5.86 percentage-point higher probability of adoption of seed production technology by the farmers. Conclusion and Implications Crops such as pulses, and particularly pigeonpea, the major pulse crop, have the potential for profitable adoption of quality seed production technology, which will further improve the current seed replacement rate and varietal replacement rate scenario in pulses. Thus, identification of the various determinants of farmers' choice on technology adoption may help in the future strengthening of public sector efforts for popularization of the technology. The results of the tabular analysis indicate that the majority of seed farmers belong to the medium, large, and semimedium categories. The major crops grown by the farmers are pigeonpea (38.10 percent of gross cropped area) followed by jowar (21.38 percent) and Bengal gram (17.17 percent). The net irrigated areas under pigeonpea grain and seed production are 10.49 and 20.44 percent, respectively. The results on varieties used by farmers show that varieties such as TS 3R and ICPL 8863 dominate the seed production scenario in the study area. The net return analysis shows a 44 percent higher income from certified seed production of pigeonpea (₹ 33864/ha) than from its grain production (₹ 23502/ha). The higher return in seed production is mainly due to increased productivity and better price realization of the output. The cost of production results indicate that around 23 percent higher cost is incurred in seed production of pigeonpea because of the high labour requirement, seed certification charges, and the higher level of other inputs used in its production. The constraints analysis shows that the smaller holding size of the farmer is the most important factor hindering technology adoption in pigeonpea. This study was conducted to understand the farmers' adoption decisions on quality seed production with the assumption that farmers maximize their expected utility; a binary choice model with a dichotomous decision to adopt the quality seed production technology was applied, and a logit procedure was used for fitting the model. The probability of adoption of quality seed production technology was assumed to be determined by factors such as farm size, education status, age, land holding size, irrigated area, number of crops grown, and contact with extension agents. Individual farmer responses were evaluated after estimation of the logit model, and marginal effects were computed to understand the effects of changes in the independent variables in the model on the adoption probability. The results indicate that education status, age, land holding size, irrigated area, number of crops grown, and extension contact positively influence a farmer's decision to adopt quality seed production technology, while size of the family shows a negative influence on adoption decisions.
Only two variables, that is, extension contact and size of land holding, are positively significant in the results. Pigeonpea farmers with larger farms are more likely to adopt this technology. Likewise, farmers who have more extension contacts are more likely to adopt, which shows the importance of extension and training programmes in determining farmers' adoption decisions. Therefore, focused efforts towards training and education by extension personnel for promoting quality seed production technology will increase the probability of adoption of this technology among farmers. This study suggests that the higher yield and profitability of seed production may be popularized among the farming community through more extension efforts, to increase the gain from certified seed production among farmers in India. The major implication of this study, therefore, concerns the implementation of awareness programmes on adoption of quality seed production. There is also a need to strengthen programmes that focus on creating timely seed availability for quality seed production. This study reveals that adoption of certified seed production of pigeonpea in farmers' fields is helpful in providing a profitable enterprise for increasing net farm income.
4,754.4
2016-07-13T00:00:00.000
[ "Agricultural and Food Sciences", "Economics" ]
Hem12, an Enzyme of Heme Biosynthesis Pathway, Is Monoubiquitinated by Rsp5 Ubiquitin Ligase in Yeast Cells* Heme biosynthesis pathway is conserved in yeast and humans and hem12 yeast mutants mimic porphyria cu-tanea tarda (PCT), a hereditary human disease caused by mutations in the UROD gene. Even though mutations in other genes also affect UROD activity and predispose to sporadic PCT, the regulation of UROD is unknown. Here, we used yeast as a model to study regulation of Hem12 by ubiquitination and involvement of Rsp5 ubiquitin ligase in this process. We found that Hem12 is monoubiquitinated in vivo by Rsp5. Hem12 contains three conserved lysine residues located on the protein surface that can potentially be ubiquitinated and lysine K8 is close to the 36-LPEY-39 (PY) motif which binds WW domains of the Rsp5 ligase. The hem12-K8A mutation results in a defect in cell growth on a glycerol medium at 38oC but it does not affect the level of Hem12. The hem12-L36A,P37A mutations which destroy the PY motif result in a more profound growth defect on both, glycer-ol and glucose-containing media. However, after several passages on the glucose medium, the hem12-L36A,P37A cells adapt to the growth medium owing to higher expression of hem12-L36A,P37A gene and higher stability of the mutant Hem12-L36A,P37A protein. The Hem12 protein is downregulated upon heat stress in a Rsp5-independent way. Thus, Rsp5-dependent Hem12 mon-oubiquitination is important for its functioning, but not required for its degradation. Since Rsp5 has homologs among the Nedd4 family of ubiquitin ligases in humans, a similar regulation by ubiquitination might be also important for functioning of the human UROD. INTRODUCTION Rsp5 is a unique yeast member of the Nedd4 family of ubiquitin ligases which have a common C2-WW-HECT modular structure with the lipid-binding domain C2, protein-binding domains WW and catalytic HECT domain (Kaliszewski & Zoladek, 2008).The Rsp5 ligase monoubiquitinates and polyubiquitinates substrates with K63-linked ubiquitin chains in vivo and in vitro (Kee et al., 2006).Some Rsp5 substrates are then transported via endocytosis to the vacuole for degradation (Lauwers et al., 2010), others are proteolytically processed (Hoppe et al., 2000) or have reduced activity (Novoselova et al., 2013).Rsp5 ubiquitinates a transcriptional activator, Spt23, thereby promoting its release from the endoplasmic re-ticulum, also affecting its nuclear transport and transcriptional activation of the OLE1 gene encoding a desaturase of fatty acids.This is an essential function since rsp5∆ strain cannot grow unless cells bear a plasmid encoding a constitutively active variant Spt23 1-689 (Hoppe et al., 2000).Rsp5 is also involved in proteasomal degradation of Rpb1, the largest subunit of the RNA polymerase II (Harreman et al., 2009).This protein is ubiquitinated in a two-step mechanism in which Rsp5 first adds monoubiquitin and then a second ubiquitin ligase produces a K48-linked polyubiquitin chain, which triggers proteasomal proteolysis (Harreman et al., 2009).The rsp5 mutants are hypersensitive to various toxic compounds, including hydrogen peroxide which causes oxidative stress (Hoshikawa et al., 2003). 
One of the proteins which binds to and is ubiquitinated by Rsp5 in vitro is Hem12 (Hesselberth et al., 2006;Gupta et al., 2007), a cytoplasmic uroporphyrinogen III decarboxylase (UROD), the fifth enzyme in the heme biosynthesis pathway.The HEM12 gene is essential in yeast and hem12∆ cells do not grow unless the medium is supplemented with ergosterol and unsaturated fatty acids since Erg5 sterol desaturase and Ole1 fatty acid desaturase are hemoproteins.The hem12∆ mutant cells supplemented with those lipids are still respiratory-deficient and do not utilize glycerol, since they lack mitochondrial cytochromes.The heme biosynthesis pathway is conserved in humans and mutations in the human UROD gene resulting in UROD dysfunction cause a hereditary disease, porphyria cutanea tarda (PCT) (Frank & Poblete-Gutierrez, 2010), in which accumulation of porphyrins, phototoxic heme precursors, results in liver and skin damage (Mendez et al., 2012).Some mutations in the UROD gene result in lower activity and instability of UROD.Mutations in other genes may affect UROD activity and predispose to sporadic PCT by unknown mechanisms (Garey et al., 1993).In yeast cells, point mutations in the HEM12 gene also lead to the accumulation of porphyrins (Zoladek et al., 1996) and the hem12 mutants mimic PCT (Kurlandzka et al., 1988).Yeast strains with mutations in unknown genes that affect expression of HEM12 have also been isolated, resembling the situation in humans (Zoladek et al., 1995).Moreover, UROD was recently identified as a potential anticancer target and UROD inhibitor combined effectively with radiation and other anticancer drugs (Yip et al., 2014). Heme, as the prosthetic group of numerous hemoproteins, is crucial in electron transport in the mitochondrial oxidative chain, in several metabolic pathways, and in the defense against reactive oxygen species (Heinemann et al., 2008).Heme is essential for life but in excess it can be toxic to cells, therefore a crucial aspect of heme biosynthesis is its regulation.In mammals, the activity of 5-aminolevulinic acid (ALA) synthase, catalyzing the first step of heme biosynthesis pathway, is rate-limiting (Panda et al., 2002).Recently, a negative feedback regulation of heme homeostasis has been identified with Rev-erba heme receptor interacting with the NCoR protein to repress its target genes in human cells (Wu et al., 2009).Yeast growth on glycerol medium results in a ~3-fold induction of heme biosynthesis compared with glucosegrown cells (Diflumeri et al., 1993).In yeast cells, ALA synthesis is not rate-limiting since ALA is present in excess; instead the rate-limiting are other enzymes, including Hem12 UROD (Hoffman et al., 2003).Hem12 protein could potentially be regulated post-translationally since it has been found to interact with WW domains of the Rsp5 ubiquitn ligase (Hesselberth et al., 2006) and be ubiquitinated in vitro (Gupta et al., 2007).Here, we analyzed a possible regulatory mechanism, ubiquitination of Hem12 protein in vivo, its dependence on the Rsp5 ubiquitin ligase, and significance for yeast physiology. 
Yeast were grown in YPD with 2% glucose, YPGly with 2% glycerol, SD or SC with 2% glucose (Sherman, 2002).YPD+G418 was used to select strains resistant to kanamycin, and sporulation medium (Sherman, 2002) for spore production.To monitor the effect of elevated temperature on the level of wild type and mutant HA-Hem12, the respective yeast cells were grown at 28 o C in SC-leu medium, transferred to YPD to OD 600 ~0.3, grown for two generations and shifted to 38 o C for 2 or 4 hours.For cycloheximide-chase analysis of Hem12 degradation, yeast were grown in YPD medium to the logarithmic phase, and 0.5 ml aliquots were removed for immunoblotting at indicated times following the addition of 500 mg/ml cycloheximide (Sigma). Plasmids and plasmid construction.Plasmid YEp-HIS-UBI was used (Stawiecka-Mirota et al., 2007).Plasmid pBS-HA-HEM12 was constructed by transfer of NotI digested DNA fragment containing 3HA tag into the NotI site which was introduced after ATG codon of HEM12 orf by PCR in vitro mutagenesis of pBS-HEM12 (Zoladek et al., 1995).Then SacI-HindIII fragment containing HA-HEM12 was transferred to SacI-and HindIII-digested pRS415 and pRS425 (Invitrogen) to obtain pRS415-HA-HEM12 and pRS425-HA-HEM12.Mutations in HEM12 were introduced by PCR mutagenesis of pBS-HA-HEM12, respective fragments transferred to pRS415 and pRS425 and confirmed by sequencing.Also, the SacI-HindIII fragment of pBS-HEM12 was used to obtain pRS415-HEM12. Purification of ubiquitinated proteins.Purification of His-tagged ubiquitinated protein was performed as described previously (Stawiecka-Mirota et al., 2007).Total extracts, Ni-NTA-sepharose (Qiagen)-bound and unbound fractions were analyzed by Western blotting using anti-HA antibody. Real-time PCR.Real-time RT-PCR was performed using the LightCycler ® 480 System (Roche Laboratories) with SYBR Green detection.The primers' specificity was verified by melting curve analysis.Sequences of primers used are available upon request.RT-PCRs were performed in triplicate.cDNA was synthesized from 5 μg of total RNA using RevertAid TM H Minus M-MuLV Reverse Transcriptase kit (Fermentas).Average cycle thresholds were calculated and the Pfaffl method (Pfaffl, 2001) was used to calculate relative HEM12 expression levels with respect to 5S rRNA. Computational analysis.A protein multiple sequence alignment was obtained with the MAFT program (Katoh & Toh, 2008) using homologs identified with BLAST (Schaffer et al., 2001).Several homologs of S. cerevisiae Hem12 with solved crystal structures have been indicated by BLAST and HHpred (Soding et al., 2005) servers.The structure of human uroporphyrinogen decarboxylase (PDB entry 1URO; (Whitby et al., 1998)) was chosen as a template to construct a model of Hem12 protein. Structural models of wild type and mutant Hem12 were obtained using the Sybyl-x1.2package (TRIPOS, Inc., USA).The model structures were subjected to energy minimization using AMBERFF99 force field as implemented in Sybyl-x1.2. The Hem12 protein is monoubiquitinated by the Rsp5 ligase in vivo Since in vitro experiments had identified Hem12 as a substrate of Rsp5 (Gupta et al., 2007), we inquired if Rsp5 contributes to regulation of the heme biosynthesis pathway via in vivo ubiquitination and regulation of Hem12.To answer this question we used a HA-tagged version of HEM12 expressed from a plasmid restoring viability of a hem12∆ strain on rich glucose medium at 28 o C (Fig. 
1A).The wild type or the rsp5∆ strain, where a plasmid encoding Spt23 1-689 ensured viability (Hoppe et al., 2000), were transformed with single-or multicopy plasmids bearing HA-HEM12.Western blot analysis revealed that the level of HA-Hem12 overexpressed from a multicopy plasmid was increased 13-16 fold (Fig. 1B).To test if Hem12 is ubiquitinated in vivo we used strains expressing HIS-UBI encoding His-ubiquitin (Stawiecka-Mirota et al., 2007) and HA-HEM12 from the multicopy plasmid.His-ubiquitinated proteins were purified on Ni-sepharose beads and were analyzed by Western blotting with anti-HA antibody.Results shown in Fig. 1C document that HA-Hem12 is monoubiquitinated in vivo. When the Rsp5 ubiquitin ligase is absent, ubiquitination of Hem12 is abolished to nearly background levels, which may indicate that in the absence of Rsp5 another ligase partially takes over.An additional copy of RSP5 giving rise to a 3.5-fold elevated level of Rsp5 essentially did not increase the ubiquitination of Hem12.Thus, Rsp5 ubiquitinates Hem12 in vivo and some factors control the level of this ubiquitination tightly.The presence of an additional copy or deletion of RSP5 essentially did not affect the steady state level of the Hem12 protein (Fig. 1B, C).Unlike polyubiquitination, monoubiquitination does not direct proteins to proteasomal degradation since proteasomes only recognize proteins tagged with K48-linked polyubiquitin chains containing more than four ubiquitins.Instead, monoubiquitination recruits proteins containing ubiquitin-binding domains (UBDs) and provides a signaling mechanism that regulates several cellular pathways (Ikeda & Dikic, 2008).Thus, the Rsp5dependent monoubiquitination probably does not direct Hem12 for degradation but could affect its functioning through some other mechanism. K8A substitution in Hem12 affects growth of cells on glycerol medium at an elevated temperature Hem12 is highly conserved in evolution.UROD has a homodimeric structure with the active site clefts facing each other and all decarboxylation events occurring only in one active site (Martins et al., 2001;Bushnell et al., 2010).Assuming that the regulation of UROD by ubiquitination might be also conserved, we compared 34 UROD amino acid sequences to find conserved lysine residues which could be potential ubiquitination sites.Alignment of three sequences is presented in Fig. 2A.We found that five lysines: K8, K99, K174, K242 and K260 of Hem12 are conserved.Computer modeling of the Hem12 structure (Fig. 
2B) showed that two of them, K174 and K260, are located on the dimer interface but three others are potentially available for ubiquitination.Since K8 is very close to the PY motif 36-LPEY-39 which binds the WW1 and WW3 domains of Rsp5 (Hesselberth et al., 2006) we assumed that this lysine is likely the ubiquitination site.Computer modeling indicated that substitution of K8 with alanine should not affect the protein structure (not shown).Therefore, the HA-hem12-K8A mutant allele was constructed by in vitro mutagenesis and tested for complementation of hem12∆ inviability.A heterozygous HEM12/hem12∆ strain was transformed with a plasmid bearing the mutant allele, sporulated and tetrads were dissected.The hem12∆ [HA-hem12-K8A] spores were viable and their growth was comparable to that of wt [HA-hem12-K8A] and hem12∆ [HA-HEM12] strains on medium containing glucose or glycerol, a nonfermentable carbon source, at 28 o C (Figure 3A and not shown).However, the hem12∆ [HA-hem12-K8A] mutant strain grew significantly slower on the glycerol medium at 38 o C when compared with wt [HA-hem12-K8A] and hem12∆ [HA-HEM12] (Fig. 3A and not shown).We observed a similar steady state level of HA-Hem12 and HA-Hem12-K8A proteins in cells grown at 28 o C, and a similar 5-fold decrease of the steady state level after 4 hours of incubation at 38 o C (Fig. 3B).These results indicate that Hem12 is subject to downregulation upon heat stress and suggest that the hem12-K8A mutation does not affect the stability of Hem12 but negatively affects its activity or interaction with other proteins at the elevated temperature.Lysines other than K8 of Hem12 could be ubiquitinated and direct the wild type and mutant Hem12-K8A protein for degradation at an elevated temperature.In fact, Hem12-K8A mutant protein is still monoubiquitinated (not shown). To investigate if Rsp5 is important for the degradation of Hem12 upon heat stress, the level of wild type Hem12 was monitored in wild type yeast and rsp5-1 mutant with the L733S substitution in the catalytic HECT domain showing a temperature-sensitive growth defect (Wang et al., 2001).Yeast were grown at 28 o C, shifted for 2 or 4 hours to 38 o C, and Western blotting was performed.The steady state level of Hem12 was lower in rsp5-1 than in wild type cells but this protein was similarly degraded upon heat stress in both strains showing again that Rsp5 does not affect its degradation (Fig. 3C).This is in contrast to the recent finding that Rsp5 is the main ubiquitin ligase that targets cytosolic proteins following heat stress (Fang et al., 2014).Thus, another ligase must be involved in forming polyubiquitin chains that direct Hem12 for proteasomal degradation.Previous in vitro experiments have revealed that Hem12 may be also a substrate of the cytosolic ligase Ubr1 (Lu et al., 2008) which functions in quality control of cytoplasmic proteins (Eisele & Wolf, 2008;Xia et al., 2008).It is possible that Rsp5 and Ubr1 function together in Hem12 ubiquitination in vivo.Moreover, Ubr1 could be responsible for the residual monoubiquitination of Hem12 observed in the rsp5∆ strain. 
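For reference, transcript levels in this study are quantified by real-time RT-PCR and expressed relative to 5S rRNA with the Pfaffl method (see Real-time PCR above); the relative HEM12 transcript levels reported below were obtained this way. A minimal sketch of that calculation follows; the amplification efficiencies and Cq values are hypothetical placeholders, not the measured data.

```python
# Minimal sketch of the Pfaffl (2001) relative-quantification formula:
#   ratio = E_target ** dCq_target(control - sample) / E_ref ** dCq_ref(control - sample)
# Efficiencies and Cq values below are hypothetical placeholders.

def pfaffl_ratio(e_target, cq_target_control, cq_target_sample,
                 e_ref, cq_ref_control, cq_ref_sample):
    """Relative expression of a target gene (e.g., HEM12) versus a reference (e.g., 5S rRNA)."""
    d_cq_target = cq_target_control - cq_target_sample
    d_cq_ref = cq_ref_control - cq_ref_sample
    return (e_target ** d_cq_target) / (e_ref ** d_cq_ref)

# Example with assumed ~100% efficient reactions (E = 2.0) and illustrative Cq values
print(pfaffl_ratio(2.0, 21.3, 20.2, 2.0, 12.1, 12.1))  # ~2.1-fold higher in the sample
```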
Double mutation hem12-L36A,P37A destroying the PY motif in Hem12 prevents cell growth on glycerol medium To eliminate Rsp5 binding and ubiquitination of Hem12, L36A and P37A mutations which make the PY motif not functional (Merhi & Andre, 2012) were introduced into a plasmid carrying HA-HEM12.Computer modeling showed that these mutations should not affect the structure of the Hem12 catalytic center (not shown) and thus should not affect its catalytic activity.The heterozygous HEM12/hem12∆ strain was transformed with the above plasmid, sporulated and spore clones were isolated.The hem12∆ [HA-hem12-L36A,P37A] spores were viable but grew extremely slowly at 28 o C on the glucose medium and not at all on the glycerol medium (Fig. 4A) suggesting that the mutant protein is barely active.This phenotype is much stronger than phenotype of hem12-K8A mutant suggesting that another lysine could be ubiquitinated when K8 is absent.Interestingly, after 2-3 passages on the glucose medium hem12∆ [HA-hem12-L36A,P37A] spore clones achieved a normal rate of growth.Western blotting showed that the level of the mutant Hem12-L36A,P37A was greatly elevated in those strains, up to 6.5-fold over the wild type level (Fig. 4B, left panel).The glucose-adapted hem12∆ [HA-hem12-L36A,P37A] spore clones also grew on the glycerol medium (not shown).The level of HA-Hem12-L36A,P37A in wild type and rsp5∆ strain transformed with a plasmid bearing HA-hem12-L36A,P37A gene was similar to that of the wild type HA-Hem12 protein (Fig. 4B, right panel).This shows that HA-Hem12-L36A,P37A is present at increased levels only in the absence of wild type Hem12 protein and also that Rsp5 is not involved in its degradation. The high steady state level of HA-Hem12-L36A,P37A protein in hem12∆ [HA-hem12-L36A,P37A] cells could result from an increased stability of the protein, from the induction of HA-hem12-L36A,P37A expression, or both.However, in our previous experiments (Fig. 1B) deletion of the RSP5 gene did not significantly affect the level of HA-Hem12, suggesting that the mutation of Hem12 preventing its modification by Rsp5 should not have caused its stabilization.To clarify that, Hem12 stability was investigated.Cycloheximide was added to logarithmic phase cultures of wild type and hem12∆ [HA-hem12-L36A,P37A] spore clones to inhibit cytosolic protein synthesis and cell samples were taken at 30 or 60-min intervals.Western blot analysis of cell extracts showed that in fresh spore clones the level of mutant HA-Hem12-L36A,P37A was lower compared with that of the wild type (Fig. 4C, left panel time 0).The low level of the mutant protein cannot be solely responsible for growth defect of hem12∆ [HA-hem12-L36A,P37A] spore clones since the Sm39 mutant showing a similar low level of wild type Hem12 grows well on glycerol medium (Zoladek et al., 1995).The wild type Hem12 protein was rather stable and was degraded in two phases, a faster one with a half-life below 50 min and a slower phase with a half-life above 200 min (Fig. 
4C, right panel).This result indicates that two pools of Hem12 protein are present in wild type cells, one that is prone to degradation and one protected from degradation, possibly by binding to other proteins.Degradation of the mutant HA-Hem12-L36A,P37A was slower than that of wild type Hem12, especially in the first phase.Thus, preventing the binding of Hem12 by Rsp5 resulted in Hem12 stabilization.Considering that Rsp5 does not affect the degradation of Hem12 this effect is probably indirect and unrelated to Hem12 ubiquitination by Rsp5. To learn if changes in expression of the HEM12 gene contribute to the observed high level of HA-Hem12-L36A,P37A mutant protein, the HEM12 transcript was analyzed.RNA was isolated from wild type and hem12∆ [HA-hem12-L36A,P37A] spore clones propagated on glucose medium for three days, and quantified by RT-PCR.The level of HA-hem12-L36A,P37A mRNA was 2.15 fold higher than that of HEM12 mRNA.These results collectively show that the HA-hem12-L36A,P37A mutant does not initially grow on the glycerol medium possibly because of low activity of the mutant HA-Hem12-L36A,P37A protein but easily adapts to that medium by increasing the level of the mutant protein through increased expression of the encoding gene and increased stability of the mutant HA-Hem12-L36A,P37A protein. The obtained results led us to propose a model in which Hem12 is bound and mono-ubiquitinated by Rsp5, possibly at K8.The Rsp5-dependent ubiquitination does not direct Hem12 to degradation.The Rsp5 binding and the monoubiquitination of Hem12 indepen-dently help it to bind some unidentified protein which is required for efficient heme synthesis, cell survival and ability to grow on the glycerol medium. Yeast Hem12 protein shows 50% identity and 67% similarity to the human UROD enzyme, which contains the LPEF motif.Therefore, human UROD could be also ubiquitinated and regulated by a human ubiquitin ligase of the Nedd4 family.Mutations affecting monoubiquitination or promoting degradation of UROD could possibly be one of the reasons of PCT in humans for which no mutation in the UROD gene has been found.Further work will be required to confirm the postulated ubiquitination of UROD and to establish its physiological role in humans. Figure 1 . Figure 1.Hem12 protein is monoubiuqitinated in vivo and this ubiquitination depends on Rsp5.(A)HA-HEM12 complements lethality of hem12∆.Diploid HEM12/hem12∆::kanMX4 was transformed with pRS415-HA-HEM12 plasmid, tetrads were obtained and spore clones tested for growth on YPD and YPD+G418.(B) Additional copy of RSP5 or deletion of RSP5 does not affect the level of Hem12.Wild type strain (MHY501) was transformed with an empty vector, pRS415-HA-HEM12 or multicopy pRS425-HA-HEM12 ([HA-HEM12] N ) and with YCp33-RSP5 or empty vector for control.Strain rsp5∆ bearing plasmid expressing SPT23 1-689 was transformed with pRS415-HA-HEM12 or pRS425-HA-HEM12.All of these strains also contained the YEp-HIS-UBI plasmid.Protein extracts were analyzed by Western blotting with anti-HA antibody.(C) Hem12 is ubiquitinated.Transformants expressing HIS-UBI and HA-HEM12 from multicopy plasmids as in B, were grown, protein extracts were prepared, and His-ubiquitinated proteins were purified using Ni-sepharose resins.Total extracts and Ni-bound fractions were analysed by Western blotting with anti-HA antibody.Rsp5 was detected by anti-Nedd4 antibody and anti-Pgk1 was used to control for protein loading.HA-Hem12 and Rsp5 levels were quantified. Figure 2 . Figure 2. 
Five lysines are conserved in UROD.(A) Alignment of amino acid sequences of UROD from S. cerevisiae, Danio rerio and Homo sapiens.Evolutionarily conserved lysines are purple, LPEY/F motif is in green.(B) Model of S. cerevisiae Hem12 structure.The active enzyme is a homodimer, individual monomers are shown in light and dark blue.Amino acid residues forming the active site are marked in yellow.Conserved lysines K8, K99, K174, K242 and K260 are in pink.Conserved LPEY motif which binds WW domains of Rsp5 is in green.
4,676
2015-01-01T00:00:00.000
[ "Biology" ]
A Smart Power System Operation Using Sympathetic Impact of IGDT and Smart Demand Response With the High Penetration of RES Enhanced penetration of available renewable energy sources (RES) is preferred over utilizing the maximum cost budget for conventional power system operation. Severe uncertainty and the balance between power generation and load demand are the pre- and post-challenges of RES penetration, respectively. Penetration of RES can be made effective by modeling the RES uncertainty with a computationally efficient technique and by controlling the load demand smartly. In this paper, for the smooth and stable penetration of RES, the uncertainty of RES is modeled using the sympathetic impact of information gap decision theory (SI-IGDT) to deal with the minimum possible uncertainty. Smart demand response (SDR) is modeled using a virtual layer, a smart demand response operator (SDO), between the main grid and consumers for the post-challenge of RES penetration. The SDO categorizes consumers into virtual prosumer (VP), real prosumer seller (RPS), and real prosumer buyer (RPB) using a power flow conditional algorithm (PFCA). The uncertainty of RES is subsequently optimized and implemented using the firefly optimization algorithm (FOA) and the power flow algorithm (PFA). To achieve technical and economic benefits for the main grid and all consumers, a Stackelberg game is formulated using the PFCA and a multi-objective FOA (MFOA). MATLAB is used for the implementation of the algorithms and the test system. Simulation results show that the maximum available RES power is penetrated up to 300%, and a load demand reduction of up to 62% is observed, which ultimately reduces the power flow loss by 70%. Uncertainty of RES. Climate change has a perceptible effect on the forecasting of RES. Because of the COVID-19 pandemic, the global weather showed behavior that deviated from its normal data. In such situations, the forecasting of RES has become extremely challenging. The World Meteorological Organization has released a report stating that the COVID-19 pandemic has negatively affected the quantity and quality of weather forecasts and climate monitoring [1]. Consequently, the level of uncertainty has increased. In this paper, uncertainty is defined as the actual information gap between forecasted data and real-time data. Thus, the integration of RES with the running stable system demands dealing with this level of uncertainty. In the past, stochastic and probabilistic methods were used to deal with uncertainty. However, this could only be achieved with substantial historical data, probability density functions, and a high computational burden. IGDT is an alternative method that is considered better equipped to deal with uncertainty. After the integration of RES, two possible situations must be considered: 1) the power supplied from the RES is less than, or 2) greater than, the demand at the point of common coupling of the RES. In the first case, an incentive-based SDR is a remedial solution, while an ESS can be used to reserve the surplus power from the MG in the second case. The second case can also be handled through local energy trade between real prosumers. The motivation of this study is to model the uncertainty, which is essential for power system design since the system has uncertain parameters such as RES. In such situations, uncertainty can be excessively risky.
Therefore, decision-makers determine the parameters of the power system to ensure that the power system does not cross the allowed limits due to uncertainty. Various techniques have been used to model uncertainty, such as possibilistic methods, Z-numbers, interval analysis, robust optimization, and information gap decision theory [2]. The probabilistic approach has been used to model uncertainty effectively [3]. Stochastic programming is the main tool used in designing the parameters and finding their probability and optimal solutions. Monte Carlo simulations have been used to deal with uncertain load values, electricity market prices, and the daily distance traveled by electric vehicles. The demand response can be changed as per the requirement. Future smart grids and MGs will have communication and IoT sources for efficient coordination. The DR can also coordinate using the available sources in normal and emergency events [17], [18], [19], [20], [21]. In the case of power fluctuations, the requirements of power ramp-up and ramp-down are somewhat challenging. Nevertheless, DR can provide a better and faster ancillary service for stable economic operation [22]. If flexible demand response and the uncertainty of RES are considered together, a stable integration of community energy users into the main power system can be developed. A strong community-integrated energy system considering electric vehicle charging stations and using sequence operation theory is developed in [23]. To test the balanced coordination between the community-integrated energy system and electric vehicle charging stations, and the effective role of flexible demand response and RES, a real-time case study in North China has been considered. Stackelberg game-based DR models have been a great source of benefits for providing power system stability to both consumers and the main grid [24], [25], [26]. From [8], [9], [10], [11], [12], [13] to [27], [28], [29], it has been observed that the research interest was in the robust solution, with a lack of focus on the opportunistic solution. Here, the sympathetic impact of IGDT (SI-IGDT) has been exploited to boost the opportunistic solution, which would maintain system stability from the minimum to the maximum available RES. This means that the maximum energy is harvested from RES whether it is greater or lower than the forecasted energy. Although DR has been modeled in [24] and [25] to fill the gap between power generation and demand, there is still room to improve consumer behavior toward the main grid. In this work, consumer behavior analysis and control are designed through a novel factor using SDR. In this model, power is exchanged between the prosumers by considering not only their technical and economic benefits but also the selection priority, which leads to a reduction of the power system loss. The main contributions of this paper are summarized as follows: 1) To utilize the maximum RES power, uncertainty is modeled using the SI-IGDT, which has made possible the integration of RES whether it is greater or less than the forecasted value. The rest of the paper is organized as follows. The discrepancy between known and unknown data is interpreted as uncertainty, which is modeled by the IGDT. Here, the slope-bound model has been adopted.
Related 213 function M (α, m) is given in (1), Here m (t) and m(t) are the real and forecasted values of the 216 uncertain parameters respectively while α and ϕ(t) denote Table 1 Before detailing the IGDT model used in this study, it is 227 beneficial to briefly review the uncertainty model used in [27] 228 and [28]. Here, an assumption was made that actual uncertain and the minimum available RES in [27] and [28] respectively. 233 In addition to the same assumption made in [27] for a robust 234 solution, another assumption was made in [29], that unknown 235 values of RES would be greater than forecasted values to find 236 the opportunistic solution. In both cases, the operating cost 237 of the smart grid and MG was the objective function. So, the 238 maximum possible cost with the minimum RES power in the 239 robust case, and the minimum cost with the maximum possi-240 ble RES power in the opportunistic case were determined. All 241 the above cases were more focused on the robust solution with 242 the pre-assumptions, which availed the least power of PV and 243 WT. However, the goal of this study is to utilize the maximum 244 possible available RES power with minimum possible cost 245 without any pre-assumptions which is more close to the real 246 uncertain behavior of RES. 247 In general, the two main functions of IGDT are; 248 1) The RF is related to the risk-averse method, and 249 2) The OF is related to the risk-seeker method. 250 A deep study of [7] shows that RF shows immunity to 251 failure while OF shows immunity to possible success. With 252 uncertainty limits α 1 and α 2 , general form of RF and OF are 253 given in (3) and (4) respectively, where x is the decision variable, W r and W o are the reward 257 values for RF and OF respectively with W t_r and W t_o as the 258 threshold values. RF tends to gain the maximum permissible 259 cost (minimum reward) with a wide range of uncertainty 260 resulting in improved immunity to failure. While OF has the 261 immunity to possible success but within a minimum possible 262 range of uncertainty. The immunity expected response is 263 shown in FIGURE 1. 264 The left side of FIGURE 1 shows a tendency of the robust 265 function, which seeks maximum uncertainty with low reward 266 but it is immune to failure. Thus, it is considered a risky (5) and (6). where (5) deals with the situation when the actual unknown 284 RES profile value is greater than the forecasted value and (6) 285 is applied when the forecasted RES profile value is greater 286 than the actual value. It is found that the data is found 287 more uncertain in the robust case, that's why it is better to 288 deal with the uncertainty margin for RF and OF separately. 289 Overall, the real RES profile data is estimated without any 290 assumption. It is important to note that both α 2 and α 1 are 291 minimized which ultimately tends to achieve the maximum 292 target reward. If the condition in (7) is fulfilled, then both 293 immunities will sympathetically support each other, From (7) it is clear that if the rate of change of opportunistic 296 function value concerning robust target reward is positive 297 then it means both functions are sympathetic otherwise antag-298 onistic. Now we can conclude that a new function named here 299 as the sympathetic impact function (SIF) can be developed as 300 in (9): The SIF combines the features of RF and OF from (3) and 304 (4) in (9) by using (5), (6), and (8). 
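To make the robustness/opportuneness functions described above concrete, the sketch below computes the two uncertainty radii by bisection for a toy, monotone operating-cost model. The cost function, forecast profile, and thresholds are illustrative assumptions only; they are not the paper's equations (1)-(9), whose exact forms are not reproduced in the text.

```python
# Illustrative IGDT sketch (assumed toy model, not the paper's exact formulation).
# Uncertainty model: the realized RES profile m lies in [(1 - a)*m_hat, (1 + a)*m_hat].
# Robustness radius   : largest a for which even the worst case keeps cost <= W_t_r.
# Opportuneness radius: smallest a at which the best case can already reach cost <= W_t_o.
import numpy as np

m_hat = np.array([3.0, 5.0, 4.0, 2.0])   # forecasted RES power per period (MW), assumed
load  = np.array([6.0, 6.5, 6.0, 5.5])   # load per period (MW), assumed
price = 50.0                             # assumed grid energy price

def cost(m):
    """Operating cost: energy bought from the main grid to cover the residual load."""
    return price * np.maximum(load - m, 0.0).sum()

def switch_point(pred, lo=0.0, hi=1.0, iters=60):
    """Bisection for the point where a predicate that holds near lo stops holding near hi."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if pred(mid) else (lo, mid)
    return lo

W_t_r = 1.10 * cost(m_hat)   # tolerable worst-case cost (risk-averse threshold), assumed
W_t_o = 0.90 * cost(m_hat)   # hoped-for best-case cost (risk-seeker threshold), assumed

alpha_robust = switch_point(lambda a: cost((1 - a) * m_hat) <= W_t_r)
alpha_opport = switch_point(lambda a: cost((1 + a) * m_hat) > W_t_o)

print(f"robustness radius ~ {alpha_robust:.3f}, opportuneness radius ~ {alpha_opport:.3f}")
```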
The unique feature of 305 SIF is that it tends to minimize uncertainty and maximize the 306 target reward. 308 Demand response has a strong relationship with RES and 309 smart grids [30], [31]. Balancing the load and power genera-310 tion, specifically at peak hours of the main grid, is especially 311 challenging. This issue can be resolved either by managing 312 the power generation sources or the loads. The first solution is 313 relatively difficult as it is complex and costly due to the power 314 ramp-up and ramp-down operations of the power generation 315 sources. The second solution balances the power by control-316 ling loads of consumers as per the required power manage-317 ment. Despite the difficulty of this task, emerging algorithm 318 development, advancement in communication technology, 319 and advanced metering infrastructure have made it possible 320 to develop an SDR [32]. 321 There are two types of DR programs: incentive-based DR 322 and price-based DR. For the price-based DR, the consumer 323 manages the load to get the minimum price available at a 324 specific time slot. Despite this, the price-based DR consumer 325 does not get any benefit from the utility except the reduced 326 cost. In contrast, for the incentive-based DR, the load of 327 consumers is controlled either by the utility or the consumer 328 itself under any DR type. The consumer gets the incentives 329 either in the form of a reduction of energy units or cash 330 payment [33]. Consequently, more consumers are encouraged 331 to participate in incentive-based DR. Thus, consumers can 332 play the role of a VP as well. As shown in FIGURE 2, 333 SDR coordinates between the SDO and prosumers. The SDO 334 interprets the real-time prices and incentives of the main grid 335 and subsequently communicates them to the prosumers. It is assumed here that the consumer either behaves as a VP 338 or RP. Let k denote the number of VPs and n denote RPs. 339 Generation of the RP either consists of PV or WT or both. 340 The RP is further divided into two types RP seller (RPS) and 341 RP buyer (RPB). 342 VOLUME 10, 2022 Consumers are differentiated by using the following 343 generation-to-demand ratio GD n (10), where subscript s and b shows seller and buyer index number 346 respectively. While G n and D n are power generation and 347 demand at the consumer node respectively. 348 The behavior of the RPS can be modeled using the UF the UF is given by (11), 354 where σ n > 0 is a prosumer preference parameter, θ n is a 355 predetermined constant, and t is the time interval in hours. 356 The UF of RPB and RPS is given by (12) and (14) respec- otherwise the RPB will be shifted to the grid, as shown in (16). 373 The UF of the VP is defined using R t k , as given in (17) where in (16) ρ t d,b is the power bought from the main grid 378 (subscript 'd' is used because of SDO), in (17) D t k is the 379 power reduced by the VP ω k is the dissatisfaction cost, which 380 shows how much the VP has violated the agreement with the 381 main grid through the SDO, in (18) D base k is the base-load of 382 VP, L R,k is load reduction limit. In (17) k is defined as the 383 dissatisfaction factor of the agreed load reduction. 384 The dissatisfaction cost is given in (19), where σ k denotes the type of prosumer and θ k is the load 387 reduction behavior. 389 The UF of SDO includes the UF of RPS, RPB, VP, generation 390 cost, and the cost of the reserve power. 
It is given by (20), In (20) 0.03 is 3% benefits of seller paid to the main grid 399 through SDO, R g is the cost, P g is the power generation of the 400 main grid, rg is the reserve power generation factor, P rg is 401 the reserve power and R rg is the generation cost for the reserve 402 power. Here rg has the same value as that of k . This means 403 that the dissatisfaction with agreed demand reduction leads 404 to the generation of the reserve power. The total incentives 405 offered and paid to the VPs must be less than the maximum 406 allowed budget of the main grid, as shown in (22 The existence is ensured through an algorithm. 421 Definition and Rules: The ultimate goal of this game is to 422 find an optimal beneficial solution for the leader (SDO) and 423 the followers (VP, RPS, and RPB). Each game is played under 424 some specific rules which are listed below [24]. 425 Each player in the game adheres to the following rules, 426 1) The strategy set of each player is nonempty, convex, 427 and compact. 428 2) Each player has a unique optimal best-response strat- 3) The SDO adopts a unique optimal strategy by identify- 441 By calculating the first derivative of (24) w.r.t. D t k , we get 442 as, 444 By equating the derivative to zero in (25), the maximum 445 value of D t k is D t k given in (26), 447 The second derivative of (24) has a negative value 448 (− k θ k ); this means that the objective function is concave. the UF of the RPB can be written as, 456 where ρ e,b is the power purchased either from the RPS or the 457 main grid. By using the values from the UF of the RPB, (27) 458 becomes, Now taking the first derivative, (28) becomes, To find the maximum power purchased ρ e,b , we equate the 464 derivative to zero and solve by (30), The second derivative of (28) is negative (−θ b ) and there-467 fore concave, which shows maximization function. 468 Proof 3: The objective function of the RPS maximizes 469 the revenue obtained from selling the power to the RPB. 470 This means that power will be sold in equal proportion to its 471 demand. The UF of the RPS is given in (31), Here, S t s is the total power demand from RPS, and it is 475 assumed to be the nominal case, S t s = ρ t s . By taking the 476 derivative of (31) w.r.t ρ t s , we get The second derivative of (31) is also negative (−θ s ) and 483 concave. 484 Proof 4: After finding the maximum values of D t k , ρ e,b , 485 and ρ t s as D t k , ρ e,b , and ρ t s , respectively, the maximum possi-486 ble value of the main grid energy price R d,b can be found from 487 (21) by using the aforementioned values. Before we derive the 488 expression for R d,b , it is important to note the following, (20% is assumed 494 as the incentive from the main grid at peak hours). 495 Hence, the UF of the SDO from (20) using the values of D t k , 496 ρ e,b , and ρ t s , and considering all the points mentioned above 497 is given as, ing the derivative to zero, as shown in (35) and (36), In (36) STF is termed as the SDR tuning factor for the cost of 516 power from the main grid through SDO which is given as, The unique novel factor STF derived in (37) is the best source 521 to observe the behavior of VP, RPS, and RPB to the main grid. The brief execution procedure has been given below; 12), (14), (16) and (21)) and maximum possible 577 output values ((26), (30), (33) and (1)) of Stackelberg 578 players are found. 
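Each of the proofs above follows the same pattern: the player's utility is concave in its own decision variable (negative second derivative), so setting the first derivative to zero yields a unique best response. The toy check below illustrates that pattern with an assumed linear-quadratic utility; it is not the paper's equations (17)-(36), which are not reproduced here.

```python
# Illustrative best-response check (assumed quadratic utility, not the paper's exact UF).
# u(D) = R*D - (sigma*D + 0.5*theta*D**2) is concave in D, so the FOC gives D* = (R - sigma)/theta.
import numpy as np

R, sigma, theta = 8.0, 2.0, 1.5     # assumed incentive rate and dissatisfaction parameters

def utility(D):
    return R * D - (sigma * D + 0.5 * theta * D ** 2)

D_star_analytic = (R - sigma) / theta                    # first-order condition
D_grid = np.linspace(0.0, 10.0, 100001)
D_star_numeric = D_grid[np.argmax(utility(D_grid))]      # numerical maximizer

print(D_star_analytic, D_star_numeric)   # both ~ 4.0; second derivative -theta < 0 confirms concavity
```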
580 For better planning and design of any power system, the 581 objectives must be clearly defined and all sensitive constraints 582 of the system must be considered. The objective of this study 583 is divided into three parts: So here (39) can be redefined by putting the x and the 627 multi-objective target is given in (40) and (41), Here one thing is noted the minimized values of uncertain 632 profiles of RES will come from algorithm 1. As a result of the 633 (40), maximum power can be penetrated by factor A which 634 VOLUME 10, 2022 would ultimately reduce the operational cost of the main grid. Here A is named as the amplifying factor of the RES power 636 to penetrate. 637 The total net power P t net should satisfy the power reserve 638 required for a reliable main grid. where P t g , G t b , S t s , P t g0 , and P t rg are the power generated by (15) and (47) respectively, 664 which depends on their comfort and allowed budget. Before analyzing the impact of the proposed methodology, 702 it is essential to show some aspects in the base which will be 703 considered as reference. The buses where RES are connected 704 are also considered the local MGs. Voltage situation at these 705 buses, energy generation and demand, and energy loss at 706 each bus are shown in FIGURE 4, 5, and 6 respectively. 707 In FIGURE 4, it is observed that at buses 7-8 and 33-35 there 708 is a voltage drop below the -5% threshold. The reason for 709 voltage drop is due to energy supply from main grid sources 710 only at bus 26 and 33 and more energy demand than the 711 generation. It is also worth noting that the generation cost 712 of the main grid is 3264887 KRW (Korean Currency). Cost 713 function coefficients are taken from [40]. From FIGURE 6 it 714 is clear that more energy loss is observed on buses 1-9 due 715 to power flow from buses 26 and 33. The gross energy loss 716 during 24 hours is 78 kWh which can accumulate to a huge 717 amount if it is not reduced. 719 The impact of the proposed methodology is analyzed in 720 three parts: analysis of SI-IGDT, the role of the SDO to 721 The opportunistic region limit is 10% more than the fore- for D t k , ρ s,b and ρ d,b at each bus concerning the STF is shown 815 in FIGURE 14. 816 It is clear from FIGURE 14 that most of the energy at 817 buyer buses is purchased from the main grid but there is 818 also maximum penetrated power by RES which is sold from 819 RPS to RPB. The optimal value of energy-reduced at relevant 820 buses of VP is shown which is based on the incentives from 821 the main grid. Energy reduction at VP buses 1, 3, 7, and 33 is 822 5. Effects of θ k , θ s and θ b at optimal power and utility functions of vp, rps, rpb and sdo (main grid). Similarly, a decrease in values for θ k , θ s , and θ b lowers 845 the R t d,b and u(SDO), whereas the opposite effect is observed 846 when RPS and its UF are considered. 847 The variation of the STF values that control R t d,b for 848 various θ values is shown in FIGURE 16. The variation of 849 STF has an inverse effect on R t d,b and u(SDO) as given in 850 (36), which can be easily deduced by analyzing FIGURE 16 851 and Table 5 simultaneously. Conclusively, it is observed that 852 at SE energy generation or consumption, energy price and 853 UF values for SDO, VP, RPS, and RPB in Table 5 are the 854 best possible values. 
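Because Eqs. (38)-(47) are not reproduced above, the following is only a loosely grounded illustration of the net-power and reserve requirement: for each hour, main-grid generation plus the RES power scaled by the amplifying factor A must cover demand plus the required reserve. All names and numbers are hypothetical.

```python
# Purely illustrative stand-in for the net-power / reserve constraint discussed above.

def hourly_feasible(P_grid, P_res, demand, reserve, A):
    """True if amplified RES plus grid generation covers demand and the required reserve."""
    return P_grid + A * P_res >= demand + reserve

hours = [
    # (grid MW, RES MW, demand MW, reserve MW)
    (3.0, 1.2, 3.6, 0.3),
    (2.8, 1.5, 4.1, 0.3),
]
for A in (1.0, 1.2):   # A > 1 models the amplifying factor for RES penetration
    ok = all(hourly_feasible(g, r, d, rv, A) for g, r, d, rv in hours)
    print(f"A = {A}: schedule feasible -> {ok}")
```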
Either the energy reduced by the VP or the energy traded between local consumers, i.e., between the RPS and the RPB, is effective for the economic operation of the main grid, which can be seen by comparing the base-case grid operational cost with any one of the above cases. In summary, the behavior of the prosumers acts on the STF factor, which affects the main grid energy price and ultimately the participation of the VP, RPS, and RPB in the SDR. This is the solid basis on which the Stackelberg equilibrium for the SDR is achieved. The participation of the VP, RPS, and RPB for a ±10% change of θ k , θ s , and θ b is drawn in FIGURES 17 and 18. presented in [24], [25], [26], [27], [28], and [29], which provides smart control of consumer behaviors. The proposed SI-IGDT based model, effectively integrated with the Stackelberg game, is an additional advantage over [24], [25], [26], [27], [28], [29]. A brief comparison is given in TABLE 6. It can be observed from TABLE 6 that the proposed research has the advantage of a novel application of SI-IGDT over the articles that have applied IGDT. Most of the game-based approaches lack a focus on RES penetration, reserve power, reduction of power loss, and a novel STF that can be used to design autonomous control of consumer loads. In conclusion, our study shows that the SI-IGDT model can
5,570.6
2022-01-01T00:00:00.000
[ "Engineering" ]
Internal force distributions in 't Hooft-Polyakov monopole and Julia-Zee dyon The energy-momentum tensor of the 't Hooft-Polyakov monopole and the Julia-Zee dyon are studied. This tensor contains important information about the pressure and the shear force distributions which define the mechanical properties of systems. Obtaining the violation of the local stability criterion for the magnetic monopole and dyon we decompose the EMTs into long- and short-range parts. This decomposition depends on the abelian field strength tensor which can not be uniquely defined. We suggest to use the modified 't Hooft definition for the tensor. Finally, the long- and short-range parts of the EMTs are computed and new equilibrium equations are obtained. Numerical values for masses, $D$-terms and various mean square radii for the monopole and the dyon are also computed. The most interesting part of the EMT for our purpose is the ij-components, which define the stress tensor. The T ij components can be associated, according to Ref. [5], with the distribution of the shear forces s(r) and the elastic pressure p(r) inside the system. For a spherically symmetric system in static approximation in three dimensions the stress tensor is decomposed in s(r) and p(r) as follows T ij ( r) = r i r j r 2 − 1 3 δ ij s(r) + δ ij p(r). The pressure and shear force distributions can be connected with the gravitational D(t)-form factor, which is involved in the parametrisation of the matrix element of the EMT operator, see e.g. Ref. [5]. As it follows from Ref. [5], the D-term (or the Druck-term) D ≡ D(0) can be obtained as where M is the mass of the system. The value of the D-term for a particle can be measured experimentally, like its mass, however, the value is difficult to extract from experimental data. For example, the experimental value of the D-term for the proton is D = −1.47 ± 0.06 ± 0.14, where the first error is the statistical uncertainty, and the second error is due to the systematic uncertainties, see Ref. [6]. The value of the D-term for the nucleon was first obtained theoretically in the Skyrme model in Ref. [7], the authors got the value D = −3.6 and in the bag model the value of the D-term for the nucleon is D = −1.1 [8], other values of the D-term together with the corresponding references can be found in the table 2 of Ref. [9]. The values of the D-term for the monopole and the dyon are given in Secs. III and IV, respectively. B. Static EMT The EMT is the conserved Noether current associated with spacetime translations, i.e. it satisfies the following condition For the static EMT the Eq. (5) turns to ∂ i T iν = 0. For the parametrisation of Eq. (3) it implies that the pressure and shear force distributions satisfy the following equation which is also called equilibrium equation ∂p(r) ∂r + 2 3 ∂s(r) ∂r + 2 r s(r) = 0. (6) Assuming that the pressure and shear force densities decay at large distances faster than ∼ 1 r 3 , multiplying on r 3 and integrating over r one obtains the von Laue stability condition given in Ref. [10] d 3 rp(r) = 0, which is necessary for stability, but not sufficient, since it is also satisfied for unstable systems, see discussion in Ref. [4]. This condition is satisfied for any system, whose EMT is conserved and implies that the pressure distribution has at least one mode. 
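The von Laue condition and the D-term can be evaluated numerically once p(r) is known. The explicit formula behind Eq. (4) is not shown above; the relation D = M ∫ d³r r² p(r) used in this sketch is the standard convention in the EMT literature and should be read as an assumption. The toy profile p(r) = (3 − r)e^(−r) is chosen only so that the von Laue integral vanishes.

```python
# Sketch of the two integral quantities discussed above for a toy pressure profile.
# D = M * int d^3r r^2 p(r) is assumed (standard convention), since Eq. (4) is
# not reproduced in the text.

import numpy as np
from scipy.integrate import quad

M = 1.0                                   # mass of the system (arbitrary units)
p = lambda r: (3.0 - r) * np.exp(-r)      # toy pressure: positive core, negative tail

von_laue, _ = quad(lambda r: 4 * np.pi * r**2 * p(r), 0.0, np.inf)
D_term, _   = quad(lambda r: 4 * np.pi * r**4 * p(r), 0.0, np.inf)
D_term *= M

print(f"von Laue integral  int d^3r p(r) = {von_laue:.3e}  (vanishes for a conserved EMT)")
print(f"D-term             D = {D_term:.3f}  (negative, as found for stable systems)")
```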
Integrating the differential equation (6), from positive value r to infinity and requiring that s(r) and p(r) vanish at the infinity, one obtains This equation describes the equilibrium of the internal forces inside a system. The combination 2 3 s(r) + p(r) describes the normal component of the total force exhibited by the system on an infinitesimal piece of area dS i , which is denoted as p r (r): F i (r) = T ij (r)dS j = 2 3 s(r) + p(r) dS i = p r (r)dS i (9) where dS i = dS r i /r. From the equilibrium equation (8) follows that for the positive shear force distribution s(r) the normal force is always positive, for the negative s(r) the normal force is always negative. In Ref. [4] it was argued that the normal force has to be directed outwards otherwise the system would collapse, this means that it has to be non-negative, i.e. 2 3 s(r) + p(r) ≥ 0. (10) Thus, according to the Eq. (8) for negative shear force distribution the system is definitely unstable. The local stability condition is also necessary, but not sufficient for stability analogously to the von Laue condition. However, due to its local character, it is stronger than the von Laue condition. C. EMT and external forces The EMT conservation taking the form ∂ i T ij (r) = f j (r) can be interpreted according to Ref. [11] as the equilibrium equation for internal stress and external force f j (per unit of the volume). According to the parametrisation in Eq. (3) the equilibrium equation gets the following form d dr 2 3 s(r) + p(r) + 2 s(r) r = f (r), (11) where f (r) is the normal component of the external force per unit of the volume. This equation describes the balance between internal forces pushing out from center and external force pulling inwards to the center. When the corresponding forces are equal, the system is at equilibrium. The von Laue stability condition for such equilibrium equation gets the following form One can also rewrite the equilibrium equation (11) as with σ(r) = ∞ r dxf (x). The left-hand side describes the normal component of a total force of the system acting on the infinitesimal unit area dS i as it follows from below Then for systems affected by an external force, i.e. for systems described by the equilibrium equation (11), the local stability criterium of Eq. (10) can be modified as follows For a time-dependent system the conservation of EMT gives the following equation ∂ i T ij = −∂ 0 T 0j , where the left-hand side again describes internal force of the system and the right-hand side can be interpreted as an external force. D. Mean square energy radius and mechanical radius For a positive energy density T 00 (r) the mean square radius of the energy density can be introduced as which characterises the size of the system in which the energy is distributed. According to the local stability condition, for a stable system the radial force must be positive, so the mechanical mean square radius where the normal force is distributed can be defined as in Ref. [12], for a system with the equilibrium equation (6) it takes the following form and for the system with the equilibrium equation (11) it takes has the form: where p r (r) and σ(r) are defined above. The local stability condition has not been mathematically proven and thereby is still questioned, see e.g. criticism in Ref. [13]. Moreover, as it was mentioned in recent studies of Refs. [14][15][16], the local stability condition is not satisfied in the presence of the long-range forces. Additional to that, as pointed out in Refs. 
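Integrating Eq. (6) as described above gives the normal force p_r(r) = (2/3)s(r) + p(r) = ∫_r^∞ 2 s(x)/x dx, so a positive shear-force distribution indeed yields a positive normal force. The sketch below builds p(r) from a toy s(r) in exactly this way and checks both the von Laue condition and the local criterion (10); s(r) = r² e^(−r) is an arbitrary short-range profile, not the monopole result.

```python
# Given a positive shear-force profile s(r), reconstruct the pressure p(r) from the
# integrated equilibrium equation p_r(r) = (2/3)s + p = int_r^inf 2 s(x)/x dx,
# then check the von Laue condition and the local criterion (2/3)s + p >= 0.

import numpy as np

r = np.linspace(1e-4, 40.0, 40000)
s = r**2 * np.exp(-r)                      # toy short-range shear-force distribution

integrand = 2.0 * s / r
seg = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)     # trapezoid segments
p_r = np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))     # p_r[i] = int_{r[i]}^{r[-1]}
p = p_r - (2.0 / 3.0) * s                                     # pressure distribution

von_laue = np.trapz(r**2 * p, r)           # should vanish up to discretisation error
print("von Laue  int r^2 p dr =", von_laue)
print("normal force p_r(r) >= 0 everywhere:", bool(np.all(p_r >= -1e-12)))
```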
[14,15,17,18], there is another problem for systems with the long-range contribution, namely, the divergence of essential quantities, that describe mechanical properties of a system, such as the D-term and the mean square radii corresponding to the EMT densities. At this point, it is important to mention, that the 't Hooft-Polyakov monopole is accepted to be a stable system, see arguments of Refs. [19,20]. As we will see later, the local stability condition is violated for the monopole and the dyon. On the first glance, thereby one could think that the criterium is not correct. However, the monopole and the dyon involve electromagnetic interaction, which supports the idea that the local stability condition does not apply correctly in the presence of long-range forces. A. Equations of motion The Grand Unified Theories combine the electromagnetic, weak, and strong forces into a single force. One of the first such theories was suggested by Howard Georgi and Sheldon Glashow in 1974 in Ref. [21]. Later Alexander Polyakov and Gerard 't Hooft have independently found that magnetic monopoles automatically appear in all Grand Unified Theories [1,2]. The most simple model, where magnetic monopole exists is the gauge SU(2) Georgi-Glashow model with Higgs triplet field ϕ a , a = 1, 2, 3, which belongs to adjoint representation. The corresponding gauge invariant action of the model is where λ is a dimensionless coupling constant, v 2 is the squared vacuum expectation value of the Higgs field; µ and ν are Lorenz indices. The covariant derivative and the non-abelian field strength tensor are defined as follows where g is the gauge coupling constant and A a µ is the gauge vector field. Choosing the vacuum expectation value as ϕ a 0 = (0, 0, v) and considering small fluctuations of the ground state in unitary gauge, one can show that the Georgi-Glashow model has one massive scalar field η 3 (x) with mass m H = √ 2λv, one massless vector field A 3 µ corresponding to the U (1) subgroup of SU (2) gauge group and two massive vector fields A 1 µ and A 2 µ both with masses m V = gv. In this model we are interested in a soliton solution, in other words, in a static solution of classical field equations with finite energy. As we want to study static soliton configuration, we consider the fields A a i ( x) and ϕ a ( x) independent of time. We also fix zero component of a vector field through the gauge A a 0 = 0 to have zero electric field. Requiring the energy to be finite the following configuration of fields can be found ϕ a = n a vh(r), with the unit vector n a = r a r , the unknown profile functions h(r) and F (r) and the following boundary conditions F (r) = 0 at r → ∞, F (r) = 1 at r → 0, h(r) = 1 at r → ∞, h(r) = 0 at r → 0. The explicit form of the profile functions can be obtained from equations of motion which can be computed by varying the action with respect to scalar and vector fields, which for the static case reduce to the following form In terms of the profile functions the equations transform to Rescaling the argument r with dimensionless argument ρ as ρ = r R0 , where R 0 = 1 m V = 1 gv is a typical size of the solution, and introducing the new parameter we obtain the following system of equations of motion with the same boundary conditions as before. This system can be solved analytically only for the limit β = 0, otherwise one solves it numerically. However, one can find approximate solutions at small and large distances: where C F , C h , a and b are free constants. 
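A numerical solution of the profile equations can be sketched with a boundary-value solver. Since Eqs. (24)-(25) are not reproduced above, the equations below are the standard textbook form written for K(ρ) = F(ρ) and H(ρ) = ρ h(ρ), under the assumption β = m_H/m_V (so that λ/g² = β²/2); the paper's normalisation may differ, and the boundary conditions are imposed only approximately at the truncated endpoints.

```python
# Assumption-laden sketch: standard 't Hooft-Polyakov profile equations solved with
# scipy's collocation BVP solver, using the BPS profiles as the initial guess.

import numpy as np
from scipy.integrate import solve_bvp

beta = 1.0
a, b = 0.1, 12.0            # truncated domain; endpoint conditions are asymptotic

def rhs(x, y):
    K, dK, H, dH = y
    d2K = (K * H**2 + K * (K**2 - 1.0)) / x**2
    d2H = (2.0 * K**2 * H + 0.5 * beta**2 * H * (H**2 - x**2)) / x**2
    return np.vstack([dK, d2K, dH, d2H])

def bc(ya, yb):
    # F(0)=1, h(0)=0  and  F(inf)=0, h(inf)=1, imposed approximately at a and b
    return np.array([ya[0] - 1.0, ya[2], yb[0], yb[2] - b])

x = np.linspace(a, b, 400)
K0 = x / np.sinh(x)                        # Prasad-Sommerfield profiles as initial guess
H0 = x / np.tanh(x) - 1.0
y0 = np.vstack([K0, np.gradient(K0, x), H0, np.gradient(H0, x)])

sol = solve_bvp(rhs, bc, x, y0, max_nodes=20000)
print("converged:", sol.status == 0)
F = sol.sol(x)[0]                          # F(rho) in the paper's notation
h = sol.sol(x)[2] / x                      # h(rho)
print("F(a), F(b) =", F[0], F[-1], "   h(a), h(b) =", h[0], h[-1])
```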
Note, that for β = 2 the constant C F = 0. The coefficients c n and d n can be expressed in terms of a and b as follows In Ref. [22] another system of differential equations is obtained and analytically solved for complex non-abelian monopole and dyon fields. B. EMT densities The EMT for the 't Hooft-Polyakov monopole can be obtained variating generally covariant form of Georgi-Glashow action (19) with respect to the metric For the static case one gets Using the 't Hooft-Polyakov ansatz given in Eq. (21) and the decomposition in Eq. (3) the EMT densities can be expressed in terms of the profile functions F (ρ) and h(ρ): The T 00 (ρ) component provides the information about spatial distribution of the monopole mass, p(ρ) and s(ρ) describe the pressure and shear force distributions inside the monopole. The first two terms in these densities are originating from the term ∼ F a µσ F aσ ν in action, the next two terms are coming from ∼ D µ ϕ a D ν ϕ a , and the term proportional to β 2 comes from the Higgs potential. Before we discuss the numerical results for the EMT density distributions and their properties, we compute their asymptotic behaviour at large and small distances using the behaviour of the profile functions from Eq. (26): and where C h , C F and a, b are free constants from the asymptotic behaviour of profile functions in Eq. (26). From the large distances behaviour it is clear that the power-law decay (∼ ρ −4 ) is dominating, this behaviour corresponds to the contribution of the electromagnetic interaction. 1 This is not surprising if one remembers that the mass spectrum of the Lagrangian (19) has one massless vector particle corresponding to the U (1) group. The asymptotic behaviour of the energy density is definitely positive at the infinity as well as at the origin. This hints that we are dealing with an usual system. The outer region of the pressure distribution has definitely positive sign and that of the shear force distribution -negative one. Such behaviour leads to violation of the local stability criterion (10). However, this behaviour is related only to the electromagnetic contribution. Note, that because of this long-range contribution both mean square radii in Eqs. (16) and (17) as well as the Druck-term in Eq. (4) diverge. In Fig. 1 the energy density for various values of β is shown. Although the energy is growing with β, according to Ref. [23] it does not diverge for β → ∞. The distributions of the shear force, pressure and normal force are shown in Fig. 2. Although the pressure is mostly negative, it changes the sign which allows the von Laue condition (7) to be satisfied, see e.g. Fig. 6. It is also interesting that the maximum of pressure is three times larger than the maximum of shear forces like in many other systems studied before, see e.g Refs. [4,5,7,12,14]. From the figure one also sees that the stability condition (10) is violated everywhere and for every choice of β, although it is proved in Refs. [19,20] that the monopole is stable. Since the local stability condition is not mathematically proved, we cannot conclude from its violation that the monopole is unstable. This violation can be also explained by the fact that the stability criterion is inapplicable for systems with the long-range force that is present in the monopole. The authors of Refs. [14,15] also obtained the violation of the stability criterion for stable systems like proton in the presence of long-range forces. All these aspects motivate us to think that the stability condition in Eq. 
(10) can be applied only for the systems where only the short-range interactions are present. C. BPS limit The equations of motion can be solved analytically only for the limit β = 0. The solution of these equations was first obtained in Ref. [24]. In Ref. [25] Bogomolny solved these equations by reducing the system of the second order equations to the first order by considering the energy functional for the Georgi-Glashow model. By integrating the energy density over the volume one gets the energy functional Rewriting it in terms of the chromomagnetic field H a i = − 1 2 ijk F a jk , Bogomolny introduced the following inequality for the monopole energy, which is also called BPS bound where the Bianchi identity (D i H i ) a = 0 and the Gauss's theorem were used. The BPS limit gives the possible minimum of the static energy of the monopole and the equality holds for the Bogomolny equation From the definition of chromomagnetic field the following expression can be obtained Using this expression and Bogomolny equation (35) it can be shown that the spatial components of the EMT disappear: independently of the choice of the ansatz in Eq. (21). The vanishing of the pressure and shear force distributions at the BPS limit allows to assume that the main contribution for the stress tensor T ij is given by the Higgs potential. As it can be seen from Fig. 1 the largest contribution to the pressure distribution is indeed due to the term originating from the Higgs potential in Eq. (19). Although the shear force distribution does not depend explicitly on β, it is related to the pressure distribution due to the differential equation (6). We think that in all models where it is possible to construct the BPS limit, the corresponding ij components of EMT would vanish. We have also obtained such vanishing, for example, for baby Skyrme model. The vanishing of the pressure and shear force distributions for BSP limit indicates that the 't Hooft-Polyakov monopole in this limit has the isotropic matter distribution [12]. D. Abelian field strength tensor and electromagnetic EMT In the previous subsection we obtained the EMT in non-abelian SU (2) gauge theory, however, the theory has one massless vector field corresponding to the abelian U (1) subgroup. This massless vector field is responsible for the electromagnetic contribution to the EMT and for the divergence of such mechanical properties of the monopole as the D-term, mean square energy and mechanical radii. We will subtract this long-range contribution from the EMT and study the remaining short-range structure. For this we have to determine the U (1) abelian field strength tensor. Let us denote the abelian field strength tensor as F µν . Since in the unitary gauge the massless vector field of the theory is the third component of the vector field A µ 3 (see Sec.III A), the expression for the abelian field strength tensor must be SU (2) gauge invariant and coincide with . We introduce a general definition of the abelian field strength tensor without any requirements on φ as The constant c 1 is not fixed by the above requirements for the abelian field strength tensor F µν , however, it is needed to define the magnetic charge density in such a way that it coincides with the topological charge density. G. 't Hooft suggested in Ref. [2] the tensor with the unit vector φ a =φ a = ϕ a |ϕ| and c 1 = 1, Faddeev offered in Ref. [26] the tensor with φ a = ϕ a v and c 1 = 0, where ϕ a is the ansatz from Eq. (21) and Boulware in Ref. 
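The β = 0 profiles are known in closed form (the Prasad-Sommerfield solution), F(ρ) = ρ/sinh ρ and h(ρ) = coth ρ − 1/ρ, and they satisfy first-order Bogomolny equations. The check below uses one common parametrisation of those equations, ρK′ = −KH and ρH′ = H + 1 − K² with K = F and H = ρh; Eq. (35) in the text may be written in different variables.

```python
# Numerical check of the exact BPS (Prasad-Sommerfield) profiles against the
# first-order Bogomolny equations in the parametrisation K = F, H = rho*h.

import numpy as np

rho = np.linspace(0.1, 20.0, 2000)
K = rho / np.sinh(rho)
H = rho / np.tanh(rho) - 1.0

dK = np.gradient(K, rho)
dH = np.gradient(H, rho)

res1 = np.max(np.abs(rho * dK + K * H))
res2 = np.max(np.abs(rho * dH - (H + 1.0 - K**2)))
print("max residual of rho K' = -K H        :", res1)   # small, up to finite-difference error
print("max residual of rho H' = H + 1 - K^2 :", res2)
print("boundary check (F, h near 0 and at large rho):",
      K[0], H[0] / rho[0], K[-1], H[-1] / rho[-1])
```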
[27] suggested the tensor with φ a =φ a = ϕ a |ϕ| and c 1 = 0. Since the under-lying theory combines the long-and short-range forces it is not possible uniquely defined the U (1) field strength tensor to separate the short-range interaction from the long one. For the general choice of the abelian field strength tensor the magnetic charge density gets the following form One can show that for the 't Hooft's definition of the abelian field strength tensor the magnetic and topological charge densities coincide and describe single point-like particle with the magnetic charge 4π g at the origin, see Ref. [2]. Such monopole is called Dirac's monopole. In Ref. [28] Dirac derived the quantisation of the magnetic charge as In contrast to the 't Hooft's definition of the abelian field strength tensor the Faddeev's and Boulware's definitions describe the magnetic charge density smoothly without singularities at the origin. The topological and magnetic charge densities are not equal in these cases, but the quantisation of the magnetic charge is satisfied. It is also interesting to notice that the magnetic charge density in Eq. (39) for the Faddeev's and and Boulware's definitions of the abelian strength tensor coincides with the energy density in BSP limit in Eq. (34) up to some dimensional normalisation factor. For the positive magnetic charge distribution the mean square radius can be determined We define the electromagnetic EMT analogously to the EMT in electrodynamic as where F µν is the U (1) field strength tensor. As we have discussed F µν is not uniquely defined. Since we are interesting in spatial structure of the monopole it makes no sense to define T C µν according to the 't Hooft's definition of the F µν because of singularities at the origin. Faddeev's and Boulware's smooth definitions of the abelian strength tensor F µν could be a better candidate to define the T C µν , however, we suggest another definition of the U (1) field strength tensor which is also smooth as the Faddeev's and Boulware's definitions and it coincides with the 't Hooft's definition at long distances. For this definition of the abelian strength tensor the electromagnetic EMT distributions have the following form where The function Q(r) is directly related to the magnetic charge density through The electromagnetic EMT densities are shown in Fig. 3. The corresponding asymptotic behaviour can be found in App. A3. Note, that since the full spatial part of the EMT in BSP limit where β = 0 vanishes, see Sec. III C, and the electromagnetic EMT does not depend on β (and thereby does not vanish in this limit) the short-range part of the EMT in Eq. (45) has to be equal to the long-range part. It means that the scalar interaction in BSP limit becomes long-range interacting. E. Short-range part of EMT and its consequences We denote the short-range part of the EMT as T SR µν and define it as follows where the full EMT T µν is obtained in Eq. (30) and electromagnetic EMT -in Eq. (43). After some simple algebraic calculation the following expressions for the short-range part of the EMT densities can be found The corresponding asymptotic behaviour can be found in App. A4. The short-range part of the pressure and shear force distributions are presented in Fig. 4. The short-range part of the pressure distribution is negative and the shortrange part of the shear force distribution is positive for every choice of β in considered range of r. 
The total normal force distribution is positive for any value of β, which satisfies the stability criterium (15). It is also interesting to notice that the pressure distribution of the monopole does not change drastically after the subtraction of the long-range part, however, the shear force distribution does. As we have already discussed in the first section, the static EMT according to the Noether theorem must be conserved. The conservation of EMT couples pressure and shear force dinsities by the differential equation (6). However, after the decomposition of the EMT into the short-and long-range parts, the separate EMTs are not conserved and the corresponding short-and long-range parts of the pressure and of the shear force densities couple due to the new equilibrium equations: d dr where Q M (r) = | x|<r d 3 xρ M ( x) = 4π g Q(r) is magnetic charge contained in a sphere of radius r. So the right-hand side describes "Coulomb force" of the magnetically charged sphere acting on the magnetic charge density. 2 Thereby the first equation is the equation of magnetostatic equilibrium between the "Coulomb stress" pushing the monopole outwards and the magnetic "Coulomb force" pulling the monopole inward to the center. In contrast, the second equation describes the balance between the "short-range stress" pulling the monopole inward to the center and the repulsive magnetic "Coulomb force" pushing the monopole outward. We notice, that since the right-hand sides of the equilibrium equations are associated with the magnetic charge density, it is not possible to define the long-and short-range parts of the EMT using the ambiguity of the F µν in such a way, that the decomposed EMTs would be conserved separately, i.e. the right-hand sides would vanish, except the case with zero magnetic charge density. As we discussed, the D-term of the monopole diverges because of the long-range contribution. After subtracting the long-range part from the EMT, the corresponding D-term still diverges for small values of β. We illustrate this divergence in the following. The D-term is defined through the shear force distribution as in Eq. (4). Since for small values of β 1 the main contribution to the short-range part of the shear force density is provided by the asymptotic behaviour at large-distances, see App. A4, the D-term diverges for the large enough R as The mean square magnetic charge radius in Eq. (40), the mean square energy radius of the short-range part in Eq. (16) and the mean square mechanical radius in Eq. (4), the mean square radius of the magnetic charge density is computed due to the definition in Eq. (40), the mean square radius of the short-range part of the energy distribution is computed according to the definition in Eq. (16) and the short-range part of the mechanical square radius -according to Eq. (18). It is remarkable that for the small values of β the main contribution to the monopole mass gives the short range part and for the large values of β 5 -the long range part. It would be interesting to find the analytic form of the pressure and shear force densities depending on the parameter β like it was done in Refs. [23,29,30] for the static energy. A. Equation of motion In the previous section we considered monopole solution carrying magnetic charge. In this part we will consider a dyon, that is a hypothetical particle carrying both electric and magnetic charges, such particle was first suggested by Julian Schwinger in 1969 in Ref. [31]. 
The dyon solution in Georgi-Glashow model was first obtained in 1975 by Anthony Zee and Bernard Julia in Ref. [3]. We will use the same Georgi-Glashow model as we did in the first part with the same action (19). However, we will not fix the zero component of the vector field as we did it for the monopole case, A a 0 ( x) = 0. We are again interested in the static soliton solution with finite energy. Thereby, we choose the same spherically-symmetric ansatz as we chose for the 't Hooft-Polyakov monopole in Eq. (21) with the same boundary condition as in Eq. (22). In contrast to monopole case, for the dyon the A a 0 is non-zero and one chooses the ansatz which follows from requiring the energy to be finite and definition of the abelian field strength tensor as in Eq. (42). The boundary conditions for the function J(r) are where Q D is an electric charge of a dyon, Q M = 4π g is a monopole charge and m is a free constant, that has dimension of mass. From the variation of the action follow the equations of motion The equations for the profile functions in dimensionless variable ρ = gvr are Note, that the boundary condition for J(ρ) also changes: The approximate solution for small and large distances can be found: The a, b, c are free constants, all other constants for the small r behaviour can be expressed in terms of these constants, for example, Since we searched the asymptotic behaviour at large distances for real and falling functions we define a new function J(ρ) =J(ρ)ρ and find that the constant C has restricted region: 0 ≤ C < 1. From numerical analysis of the solutions we obtain that the charge ratio is restricted in the following range 0 ≤ Q D Q M ≤ 1. More detailed analysis can be found in Ref. [32], where also the dependence of the parameter C on the charge ratio Q D Q M is found. B. EMT densities The EMT for the Julia-Zee dyon can be obtained analogically to the 't Hooft-Polyakov monopole by variating generally covariant form of the Georgi-Glashow action (19) with respect to the metric and it can be decomposed into magnetic and electric parts: where the magnetic part equals to the EMT of monopole which we have already computed in the previous section, see Eqs. (29), (30) and (31). The electric part of EMT is Note, that same as for the monopole case the spatial components of the EMT for the dyon vanish in BSP limit, where β = 0. For electric part of the energy density, pressure and shear force distributions of the dyon in dimensionless variable ρ the following expressions are obtained With the help of asymptotic behaviour of profile functions in Eq. (53) the asymptotic behaviour of the EMT for the dyon can be obtained From this behaviour one sees that the long-range contribution presented in the dyon is even stronger than in the monopole case, see Eq. (31). In App. A 2 one can find the asymptotic behaviour of the EMT near the origin as well as behaviour of the electric part of the EMT only. Again already from the asymptotic behaviour it is clear that the stability condition of Eq. (10) is violated. In Fig. 5 the full pressure and shear force distributions of the dyon are presented. The shear force and pressure distributions have very similar form as it was in the monopole case, see Fig. 2. These distributions are mostly negative for every choice of parameters. However, pressure distribution changes its sing, what allows the von Laue condition (7) to be satisfied, see e.g. Fig. 6. 
It is also interesting to notice that the main contribution to the dyon energy comes from the monopole part as it can be seen in Fig. 7. Moreover, for the growing β this contribution is increasing. The pressure distribution has one mode for any choice of the parameters. C. Electromagnetic and short-range parts of the EMT According to the definition of the electromagnetic EMT in Eq. (41) the long-range part of the EMT for dyon is Using the modified 't Hooft abelian field strength tensor in Eq. (42) one gets the following expressions for the electromagnetic EMT densities of a dyon where the function Q is defined in Eq. (43) and is related to the magnetic charge density in Eq. (44) and the functioñ Q is related to the electric charge density of a dyon Comparing these expressions with the expressions for the monopole case in Eq. (43), it is clear that both have similar behaviour. Hence, it is again not possible to define the various mean square radii or D-terms, because they diverge. We will exclude the long-range contribution given in Eq. (60) from the EMT of dyon in Eq. (55) in the same way as we did it for the monopole case using Eq. (45). After simple algebraic calculations we obtain the following expression for the short-range part of the EMT of the dyon Here The equilibrium equation of the long-and short-range parts of the EMT analogously to Eq. (47) is given as d dr where Q M (r) andQ D (r) are magnetic and electric charges contained in the sphere of radius r: The first equilibrium equation describes the balance between two types of "Coulomb forces", namely magnetic and electric ones, pulling the dyon inwards to the center and the "Coulomb stress" pushing the dyon outwards. The second equation describes the balance between the two types of the repulsive "Coulomb forces" pushing the dyon outward and the "short-range stress" pulling the dyon inwards to the center. Same as for the monopole case it is not possible to define the short-range part of the EMT using the ambiguity of the F µν in such a way that the long-and short-range parts of the EMT are conserved separately unless the electric and magnetic charge densities both vanish. In table II the full mass of dyon is denoted as M , the monopole contribution to the full dyon mass as M M , the M C and M SR are the long-and short-range contributions to the full mass, correspondingly, the D-terms are computed with the help of the short-range part of the shear force distribution in Eq. (62). The following quantities can be also found in the table: the energy mean square radius r 2 E , the magnetic and electric charge mean square radii r 2 V. SUMMARY AND CONCLUSIONS In this work the EMTs of the 't Hooft-Polyakov monopole and the Julia-Zee dyon are studied. These EMTs contain long-range contributions which present analogies to the "Coulomb interaction". The local stability condition containing the pressure and shear force distributions is violated for both cases, the monopole and the dyon. Thereby, the applicability of the condition in the presence of long-range contributions is questioned. Moreover, such important quantities which give information about mechanical properties of a system as the D-term, energy and charge mean square radii of the monopole and the dyon can not be computed due to the presence of the long-range interaction. To shed more light on the mechanical properties of the monopole and the dyon and on the local stability condition we exclude the long-range contribution from the EMT of the monopole and the dyon. 
The difficulty of such calculation is that this contribution can not be uniquely defined. We suggest the modified 't Hooft definition of the abelian field strength tensor for this purpose. Dealing with the separate long-and short-range parts of the EMTs we obtained the equilibrium equations which couple the pressure, shear force distributions and the external force acting on the system. In this line we have also modified the local mechanical stability condition for systems in the presence of external forces. According to this condition the short-range part of the 't Hooft-Polyakov monopole as well as of the Julia-Zee dyon is stable for every choice of the parameter β. Further, in this paper numerous figures describing mechanical properties of the 't Hooft-Polyakov monopole and the Julia-Zee dyon can be found as well as tables with varied contributions to the masses, D-terms and mean square energy, magnetic and electric charge radii. The asymptotic behaviour of the magnetic charge density is The "Coulomb EMT" behaves asymptotically as (A3) The asymptotic behaviour of the short-range part of the EMT is For dyon part The functionQ(ρ) has the following asymptotic behaviour The asymptotic behaviour of the electric charge density is The asymptotic behaviour of the full EMT of the dyon near the origin is The asymptotic behaviour of the electric part of the full EMT for the dyon: The asymptotic behaviour of the long-range part of the EMT is The asymptotic behaviour of the short-range part of the EMT of the dyon is as follows: (A10) Appendix B: Divergence of radii As it follows from the Eqs. (A2) and (A4) the main contribution to the magnetic charge density and to the shortrange part of the energy density, respectively, is provided by the asymptotic behaviour at large-distances. Then the corresponding mean square radii for the small values of β and for the large enough R diverge as According to the modified definition of the mechanical radius in Eq. (18), to the external force in equilibrium equation (47) for the short-range and to the asymptotic behaviour in Eqs. (A1) and (A4) the mean square mechanical radius of the monopole for small values of β diverges as follows Proceeding analogously to the monopole case, we obtain for the dyon the same divergences as in Eqs. (48), (B1), (B2) and (B3). Namely, the D-term diverges as in Eq. (48), the mean square electric and magnetic charge radii as in Eq. (B1), the mean square energy radius of the short-range part as in Eq. (B2) and the mean square mechanical radius as in Eq. (B3).
8,281
2023-02-23T00:00:00.000
[ "Physics" ]
Research A Transformation Method for Delta Partial Difference Equations on Discrete Time Scale The aim of this study is to develop a transform method for discrete calculus. We define the double Laplace transforms in a discrete setting and discuss its existence and uniqueness with some of its important properties. The delta double Laplace transforms have been presented for integer and noninteger order partial differences. For illustration, the delta double Laplace transforms are applied to solve partial difference equation. Introduction e origin of calculus of finite differences is found from Brook Taylor (1717), rather it was Jacob Stirling, who found the theory (1730) and introduce the delta Δ symbol for the difference, which is in common use nowadays. e development on calculus of finite differences in the beginning of the nineteenth century by Lacroix and remarkable work of George Boole, Narlund, and Steffensen appeared later in the nineteenth century. Jordan discussed calculus of finite differences with the classical approach in [1]. In modern era, the focus of mathematician is to correlate the continuous and the discrete, to shape in comprehensive unified mathematics, and to eliminate ambiguity. e calculus of finite differences is applicable to both continuous and discrete functions. For difference equations, Bohner and Peterson treat the dynamic equations on time scales in [2] and get surprisingly different results from continuous counterpart. Some results can be found in [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19] which has helped to construct the theory of discrete fractional calculus. Coon and Bernstein [20][21][22] defined the double Laplace transforms (continuous) and investigated many properties. Debnath [23] modified the properties and use the double Laplace transforms (continuous) to solve functional, integral, and partial differential equations. Dhunde and Waghmare [24] discussed convergence and absolute convergence of the double Laplace transforms (continuous) and, by application of double Laplace transforms, presented the solution of Volterra Integropartial differential equation. For applications of triple, quadruple, and n-dimensional Laplace transforms (continuous), we refer the readers to [25][26][27]. Goodrich and Peterson [10] developed discrete delta Laplace transforms analogous to Laplace transforms discussed by Bohner and Peterson [2] in the continuous case, to solve difference and summation equations with initial data by applying the delta Laplace transforms. e delta Laplace transforms is given for newly defined Hilfer difference operator [28] and substantial difference operator in [29]. Bohner et al. [30] generalized properties of the Laplace transforms to the delta Laplace transforms on arbitrary time scales and discussed translation theorems and transforms of periodic functions. Compatible discrete time Laplace transforms with Laplace transforms was introduced in [31]. Savoye [32] highlighted the importance of discrete time problems and relationship of Z transforms to Laplace transforms on time scale. Fractional double Laplace transform was introduced in [33]; during derivation of Corollary 1, authors neglected the violation of semigroup property of Mittage-Leffler functions, and a counter example for semigroup property of Mittage-Leffler functions is given in [34]. e qualitative analysis of delay partial difference equations is considered as discrete analog of delay partial differential equations by Zhang and Zhou [35]. 
For solving partial difference equations Ozpinar and Belgacem introduced discrete homotopy perturbation Sumudu transform method in [36]. For solving partial differential equations, double Laplace transform was applied in [37,38]. Here, we introduce the delta double Laplace transforms similar to the one presented by Bernstein [20] in such a way that properties and expressions bear a resemblance to that appearing in Debnath [23] for the continuous calculus. e double convolution product, we consider in this article, resemble with the convolution product defined for delta calculus in [2,10], but it differs from the one defined by Atici in [8]. We consider the problem with constant coefficients in two independent variables and solve by applying the delta double Laplace transforms to partial difference equations with initial data. is paper is divided into five sections. In Section 2, we shall present basic definitions and results from discrete calculus. Definition, existence, uniqueness, and series representation of the delta double Laplace transforms are given in Section 3. Some properties of the delta double Laplace transforms are proved in Section 4. In Section 5, we present the delta double Laplace transforms of partial differences. Preliminaries For convenience, this section comprises of some basic definitions and results from discrete calculus for later use in the following sections. e functions we consider usually are defined on the set N a ≔ a, a + 1, a + 2, . . . a ≔ a, a + 1, a + 2, . . . , b { }, for fixed a, b ∈ R. e following concepts are discussed in [10,16]. Falling function is defined for positive integer n by e circle plus sum of p 1 , p 2 ∈ R is given by e additive inverse of p 1 ∈ R is given by Definition 1 (see [10]). Assume p 1 ∈ R and s ∈ N a . en, the delta exponential function is given by By the empty product convention, s− 1 t�s [h(t)] ≔ 1 for any function h. , then the delta exponential function for a constant is given by (3) For a particular choice of s � a, that is, the initial point of the domain of definition, Definition 2 (see [10]). Assume f: N a ⟶ R and b ≤ c are in N a ; then, the delta definite integral is defined by Note that the value of integral Definition 3 (see [10]). Assume f: N a ⟶ R. en, the delta Laplace transform of f based at a is defined by for all complex numbers p ≠ − 1 such that this improper integral converges. Note that throughout this article, we take the delta Laplace transform at the initial point a of the set N a , unless stated otherwise. e following concepts are also discussed in [10,16]. Definition 4 (see [10]). A function f is of exponential order r 1 > 0 if there exist a constant A 1 > 0 and the following inequality: If f is of exponential order, then L x f (p) converges absolutely for |p + 1| > r 1 , which ensures the existence of the Laplace transform. Even though the converse in not true, we restrict ourselves to only exponential order functions. For f: N a ⟶ R, the following are useful expressions for the delta Laplace transform of f based at a: for all complex numbers p ≠ − 1 such that this infinite series converges. Definition 5 (see [10]). Assume f, g: N a ⟶ R. e convolution product is defined by (12) Note that by the empty sum convention (f * g)(a) � 0. Lemma 1 (convolution theorem, see [10]). Assume f, g: N a ⟶ R. If both L x f(x) and L x g(x) exist, then the delta Laplace transform of the convolution product is given by Lemma 2 (see [10]). Assume two functions v, w: N a ⟶ R. 
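The series form of the delta Laplace transform based at a, L_a{f}(p) = Σ_{k≥0} f(a+k)/(p+1)^(k+1), lends itself to a direct numerical check. The sketch below truncates the series, reproduces the elementary pair L{1}(p) = 1/p, and verifies the convolution theorem of Lemma 1; the truncation length N is a purely numerical choice.

```python
# Truncated series form of the delta Laplace transform based at a, plus two sanity
# checks: L{1}(p) = 1/p and the convolution theorem (Lemma 1).

def delta_laplace(f, p, a=0, N=200):
    """Truncated L_a{f}(p) = sum_{k>=0} f(a+k)/(p+1)^(k+1), valid here for |p+1| > 1."""
    return sum(f(a + k) / (p + 1) ** (k + 1) for k in range(N))

def conv(f, g, t, a=0):
    """Delta convolution (f*g)(t) = sum_{r=a}^{t-1} f(r) g(t - r - 1 + a)."""
    return sum(f(r) * g(t - r - 1 + a) for r in range(a, t))

p = 1.5
one  = lambda t: 1.0
expo = lambda t: 0.5 ** t          # a function of exponential order < |p + 1|

print("L{1}(p) =", delta_laplace(one, p), " vs  1/p =", 1 / p)

lhs = delta_laplace(lambda t: conv(one, expo, t), p)
rhs = delta_laplace(one, p) * delta_laplace(expo, p)
print("L{f*g}  =", lhs, " vs  L{f}L{g} =", rhs)
```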
and we have the summation by parts formula: Definition 6. e generalized falling function is defined in term of gamma function by given that the expression in the above equation is justifiable. It is convenient to take t μ � 0, whenever t + 1 is natural number and t − μ + 1 is a zero or negative integer. Definition 7. e discrete Taylor monomial based at s � a is defined by and the μ th order Taylor monomial is defined by Lemma 3 (see [10]). e following hold for delta Laplace of Taylor monomial: In the next definition, we consider only delta difference with increment 1, and do not bother the different operators that we will not be using here. One can find the details of Definition 8 in [1,39]. Definition 8. Assume u: N a × N a ⟶ R, a function of two independent variables. en, the partial difference of u(x, y) with respect to x, regarding y as a constant is given by e partial difference of u(x, y) with respect to y, regarding x as a constant, is given by Partial difference equation is an equation containing partial differences. Note that Δ xy � Δ y Δ x � Δ x Δ y � Δ yx . Followed by the rule for integer order difference operator Δ n � ΔΔ n− 1 , we adopt the symbol for partial differences as follows: The Delta Double Laplace Transforms In this section, we give abstract definition of the delta double Laplace transform. For convenience, we simplify definition to series representation followed by Goodrich and Peterson [10] simplification of the delta Laplace transform. Also, condition for existence, uniqueness, and linearity of the delta double Laplace transform has been revealed. Definition 9. Assume f: N a × N a ⟶ R. en, the delta double Laplace transform of f based at (a, a) is the successive application of the delta Laplace transform on x and y in any order where L x and L y are the delta Laplace transforms (single) based at a with respect to x and y, respectively, and L 2 is the delta double Laplace transform based at (a, a). e delta double Laplace transform of a function f(x, y) of two variables x and y is defined in p-q plane provided the following double sum converges: for all complex numbers p ≠ − 1 and q ≠ − 1. One can easily verify by using Lemma 4 that L x L y � L y L x . Later, in eorem 2, we will prove that the double infinite series is absolutely convergent. It is well known that absolutely convergent series behave nicely and change in the order of summation ∞ k�0 ∞ j�0 allowed. erefore, we can operate in either way L x L y � L y L x . for all complex numbers p ≠ − 1 and q ≠ − 1 such that the infinite series converges. Proof. By using the definition of the delta double Laplace transform, consider the following: Now, by the definition of delta integral from discrete calculus, we obtain In preceding steps, we use the definition of delta exponential function and the fact that 1⊖p � 1/(1 + p) and 1⊖q � 1/(1 + q), since p and q are regressive functions. In the following step, we use x − a � j and y − a � k to reindex the sums as follows: : N a ⟶ R, and h(y): N a ⟶ R such that the delta double Laplace transforms exist, then the following holds: Proof. Under the assumption stated above and by Lemma 4, (ii) e proof is similar to part (i). (iii) For p ≠ − 1 and q ≠ − 1, we have (iv) By using eorem 1 part (iii), we obtain By using Lemma 3, If we choose either m � 0 or n � 0, then as a special case of the above Coon and Bernstein [20,21] defined the double Laplace transforms and discussed convergence and existence for the continuous case. 
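Theorem 1's double series can be checked in the same way. The sketch below truncates the double sum L2{f}(p,q) = Σ_{j,k≥0} f(a+j, a+k)/((p+1)^(j+1)(q+1)^(k+1)) and verifies the separable case L2{u(x)v(y)}(p,q) = L{u}(p)·L{v}(q), which follows from the representation above; the truncation length N is a numerical convenience only.

```python
# Truncated double series of Theorem 1 with a check of the separable case.

def delta_laplace_1d(f, s, a=0, N=150):
    return sum(f(a + k) / (s + 1) ** (k + 1) for k in range(N))

def delta_laplace_2d(f, p, q, a=0, N=150):
    return sum(f(a + j, a + k) / ((p + 1) ** (j + 1) * (q + 1) ** (k + 1))
               for j in range(N) for k in range(N))

u = lambda x: 0.7 ** x
v = lambda y: y                    # Taylor-monomial-like factor
f = lambda x, y: u(x) * v(y)

p, q = 1.0, 2.0                    # |p+1| and |q+1| large enough for convergence
print("L2{f}(p,q)        =", delta_laplace_2d(f, p, q))
print("L{u}(p) * L{v}(q) =", delta_laplace_1d(u, p) * delta_laplace_1d(v, q))
```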
We discuss discrete analogue of the double Laplace transforms. Definition 10. A function f(x, y): N a × N a ⟶ R is of exponential order r 1 , r 2 > 0 with respect to x and y, respectively, if there exist a constant A > 0 and m, n ∈ N 0 such that, for each x ∈ N a+m and y ∈ N a+n , the inequality en, there exists a constant A > 0 and m, n ∈ N 0 such that, for each x ∈ N a+m and y ∈ N a+n , |f(x, y)| ≤ Ar x 1 r y 2 . us, for |p + 1| > r 1 and |q + 1| > r 2 , we consider the following: Since |p + 1| > r 1 and |q + 1| > r 2 , therefore |p + 1| − r 1 > 0 and |q + 1| − r 2 > 0. Hence, the delta double Laplace transform of f converges absolutely. eorem 2 ensures the existence of the delta double Laplace transform. In general, the converse does not hold. We should consider functions f of some exponential order r > 0, to ensure the delta double Laplace transform of f which does converge somewhere in the complex plane outside the both closed balls of radius r 1 , r 2 , centered at − 1, that is, we can choose r � max r 1 , r 2 for |p + 1| > r 1 and |q + 1| > r 2 . □ Theorem 3. Suppose f, g: N a × N a ⟶ R. If the delta double Laplace transform of f, g converges for |p + 1| > r 1 and |q + 1| > r 2 , where r 1 , r 2 > 0, and let c 1 , c 2 ∈ C, then the delta double Laplace transform of c 1 f + c 2 g converges for |p + 1| > r 1 , |q + 1| > r 2 , and L 2 c 1 f + c 2 g (p, q) � c 1 L 2 f (p, q) + c 2 L 2 g (p, q), converges for |p + 1| > r 1 and |q + 1| > r 2 . Proof. By hypothesis, we have for |p + 1| > r 1 and |q + 1| > r 2 . is implies that for |p + 1| > r 1 and |q + 1| > r 2 . Since, by eorem 2, the double infinite series is absolute convergent, therefore comparison of both sides implies that f(a + j, a + k) � g(a + j, a + k), for all j, k ∈ N 0 . (36) For each fix j and for all y ∈ N a , this implies that For each fix k, we obtain Basic Properties of the Delta Double Laplace Transform In this section, following Bohner et al. [30], we prove some properties of the delta double Laplace transform. We also define double convolution product of discrete functions followed by Goodrich and Peterson [10] convolution product (single) of discrete functions. We present, the delta double Laplace transform of double convolution product for later use to solve difference equations. where H(x, y) is the Heaviside unit step function defined by Reindex by τ � j + a − c and s � k + a − c, (ii) By use of Lemma 4 and reindex by τ � j + c − a and s � k + c − a, Proof. Under the assumption, we have, by Lemma 4, In the last step, we used j � T 1 + u and k � T 2 + v to reindex second double summation. In second double summation, periodicity of f implies that e double convolution product is defined by Note, by empty sum convention, (f * * g)(a, a) � 0. Lemma 5. Assume f, g: N a × N a ⟶ R. e double convolution product is commutative: Proof. By Definition 11 and the change of variables x − r − 1 + a � u and y − s − 1 + a � v, we have Mathematical Problems in Engineering a)f(u, v), , y), for x, y ∈ N a . Proof. Under given assumption, we have, by Lemma 4 and the fact (f * * g)(a, a) � 0, (54) In the last step, we used Definition 11; next, making the change of variables r ⟶ a + r and s ⟶ a + s gives us that � L 2 f(x, y) L 2 g(x, y) . (55) In the previous steps, we interchanged the order of first pairs and second pairs of summation and change variables 1 (y) and g(x, y) � u 2 (x)v 2 (y) and the delta Laplace transform exists, then where the product on right-and left-hand sides is given by Definitions 5 and 11,respectively. Proof. 
By double convolution theorem, we have (58) e last step is followed from single convolution Lemma 1. The Delta Double Laplace Transforms of Partial Differences In this section, we examine the action of the delta double Laplace transforms on first order partial differences. e results developed for first order partial differences are further used to establish properties of the delta double Laplace transforms of generalized order partial difference, similar to that appeared in [40] for fractional order partial derivatives. We usually consider functions u(x, y): N a × N a ⟶ R, of exponential order r 1 , r 2 > 0 with respect to x and y, respectively, ensuring that delta Laplace and the delta double Laplace transforms of u(x, y) and its partial differences does exist. Lemma 6. Assume u(x, y): N a × N a ⟶ R, such that the delta Laplace transforms exist for constants p ≠ − 1 and q ≠ − 1. en, Proof. By definition of the delta Laplace transforms on x, Mathematical Problems in Engineering Apply summation by parts (Lemma 2) on x, and using the fact Δ x [e ⊖p (σ(x), a)] � ⊖pe ⊖p (x, a), we have that Use the fact e ⊖p (x, a) � 1/(p + 1) x− a and ⊖p � − p/(p + 1), , a), u(a, y) + pL x u(x, y) , Let L x u(x, y) � u(p, y). Consider the left-hand side of equation (61) and use the definition of delta difference: By using linearity property of the delta Laplace transforms, we obtain Now, consider the right-hand side of equation (61) and use L x u(x, y) � u(p, y): By using the definition of delta difference, we obtain � u(p, y + 1) − u(p, y). Equality holds in equation (61) Proof. Since, by definition, the delta double Laplace transforms is the successive application of the delta Laplace transforms on x and y in any order, therefore By using equation (59) of Lemma 6, we obtain � L y pL x u(x, y) − u(a, y) . Use linearity property of the delta Laplace transforms for L y , (ii) e proof is similar to part (i). Note that, for constant a, Δ x u(a, y) � u(a, y)− u(a, y) � 0. We adopt the following symbols in our result which are nonzero, in general, Δ x u(a, y) � Δ x u(x, y) | x�a and Δ y u(x, a) { } � Δ y u(x, y) | y�a , that is, first we take difference and then evaluate at a. □ Lemma 7. Assume u(x, y): N a × N a ⟶ R, such that the delta Laplace transforms exist for constants p ≠ − 1 and q ≠ − 1. en, Proof (i) We prove this part by induction on n, and result for n � 1 has been proved in Lemma 6. Assume the result is true for n ≥ 1: y). (74) We will try to establish result for n + 1, beginning with the following: Let w(x, y) � Δ n x [u(x, y)], and we have that Again using equation (59) of Lemma 6, � pL x w(x, y) − w(a, y) By using assumption for n, Mathematical Problems in Engineering e result holds for n + 1, whenever it holds for n. Hence, by induction, result in part (i) holds. Let v(x, y) � Δ m y u(x, y), and use part (i) of the same Lemma: Proof of (ii) and (iv) is similar as proof of part (i) and (iii), respectively. □ Theorem 10. Assume u(x, y): N a × N a ⟶ R, such that the delta double Laplace transforms exist for constants p ≠ − 1 and q ≠ − 1. en, Proof. Since by definition, the delta double Laplace transforms is the successive application of the delta Laplace transforms on x and y in any order; therefore, (i) Using Lemma 7 part (i) and linearity of Laplace, we consider the following: (ii) Proof is similar as in part (i). (iii) Using Lemma 7 part (iii) and linearity of Laplace, we consider the following: In the previous step, we used Lemma 7 part (iv). 
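The transform rule for a first-order partial difference, L2{Δ_x u}(p,q) = p L2{u}(p,q) − L_y{u(a,y)}(q), can also be verified numerically with the truncated series. The test function u(x,y) = 0.6^x · 0.8^y below is arbitrary; it is simply of exponential order so that the series converge.

```python
# Numerical check of the transform rule for a first-order partial difference
# (Lemma 6 / Theorem 9) using the truncated series representations.

def L1(f, s, a=0, N=150):
    return sum(f(a + k) / (s + 1) ** (k + 1) for k in range(N))

def L2(f, p, q, a=0, N=150):
    return sum(f(a + j, a + k) / ((p + 1) ** (j + 1) * (q + 1) ** (k + 1))
               for j in range(N) for k in range(N))

a = 0
u = lambda x, y: 0.6 ** x * 0.8 ** y
du_dx = lambda x, y: u(x + 1, y) - u(x, y)          # Delta_x u

p, q = 1.2, 0.9
lhs = L2(du_dx, p, q, a)
rhs = p * L2(u, p, q, a) - L1(lambda y: u(a, y), q, a)
print("L2{Delta_x u}          =", lhs)
print("p*L2{u} - L_y{u(a, .)} =", rhs)
```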
In the following step, using eorem 10 part (ii), F(p, q), then for constants p ≠ 0, − 1 and q ≠ 0, − 1, and we have Proof. For x, y ∈ N a , let en, the difference Δ xy is By separating last term for t � x, from the first double sum, we obtain By separating last term for τ � y, from the first sum, we obtain f(x, y) � Δ xy u(x, y). (88) Now, for constants p ≠ 0, − 1 and q ≠ 0, − 1, taking the delta double Laplace transforms on both sides, By application of eorem 10 (iii) for m � 1 � n, (a, a). On right-hand side, we obtain In the last step, u(x, a), u(a, y), u(a, a) are zero by empty sum convention, and on further simplification, we obtain (a) Solve the partial difference equation: Application of the delta Laplace transforms to initial conditions by Lemma 3, Apply the delta double Laplace transforms to difference equation and then use linearity property: Using eorem 9, (97) Application of the delta Laplace transforms to initial conditions by Lemma 3: L y u(a, y) � L y (y − a) 2 � 2 q 3 . Corollary 2. Assume u(x, y): N a × N a ⟶ R, such that the delta double Laplace transforms exists for constants p ≠ − 1 and q ≠ − 1 and denote L 2 u(x, y) � u(p, q). Data Availability e data used to support the findings of this study are available from the corresponding author upon request.
5,111.4
2020-07-10T00:00:00.000
[ "Mathematics" ]
QD-Based FRET Probes at a Glance The unique optoelectronic properties of quantum dots (QDs) give them significant advantages over traditional organic dyes, not only as fluorescent labels for bioimaging, but also as emissive sensing probes. QD sensors that function via manipulation of fluorescent resonance energy transfer (FRET) are of special interest due to the multiple response mechanisms that may be utilized, which in turn imparts enhanced flexibility in their design. They may also function as ratiometric, or “color-changing” probes. In this review, we describe the fundamentals of FRET and provide examples of QD-FRET sensors as grouped by their response mechanisms such as link cleavage and structural rearrangement. An overview of early works, recent advances, and various models of QD-FRET sensors for the measurement of pH and oxygen, as well as the presence of metal ions and proteins such as enzymes, are also provided. Introduction Among various bioimaging/biosensing techniques, fluorescence-based methods are generally preferred over absorption [1,2], especially with the ability to detect single photons with avalanche photodiodes and intensified camera systems [3]. One must employ an emissive chromophore such as an organic dye or fluorescent protein; however, they generally suffer from a lack of photochemical stability and may require sophisticated tunable excitation sources. The effect is to reduce researchers' ability to perform OPEN ACCESS studies that require multiplex detection over long timescales [4]. Quantum dots (QDs) are semiconductor nanocrystals with unique electro-optical properties due to quantum confinement effects [5]. Specifically, they have broad absorption spectra with large extinction coefficients, narrow and tunable photoluminescence (PL), high emission quantum yields, improved resistance against photobleaching, and long fluorescence lifetimes. As a result, QDs are exceptional bioimaging and biosensing tools [6][7][8][9][10]. As a comparison of QDs and organic dyes as fluorescent labels has been discussed in a recent review [11], we only summarize the main differences here. Although the aqueous synthesis of QDs results in instantly water soluble materials, the crystalline quality, quantum yield, and the emission bandwidth of the dots are sub-optimal. On the other hand, the synthesis of hydrophobic QDs in organic solvents imparts a higher quantum yield, enhanced monodispersity, and a narrower emission bandwidth [12]. As such, this review focuses on hydrophobic as-prepared dots. One of the fundamental aspects of manufacturing QD-based sensors is water solubilizing the organic capped QDs and functionalizing them for conjugation to biologicals or organics such as dyes. Furthermore, surface passivation via an inorganic shell that covers the cores can reduce the surface defects significantly and improve the QDs photoluminescence properties [13][14][15]. The core/shell structure also improves the stability of the QDs and reduces cytotoxicity by preventing the leaching of toxic elements, generally cadmium, from the cores. Transferring organic capped QDs into an aqueous solution and functionalizing them was a significant challenge for many years [16]. Several water solubilizing methods have been developed over the past decades, with the majority falling under the categories of micelle encapsulation [17][18][19][20], ligand exchange [21][22][23][24][25], and silanization [26][27][28]. 
The water soluble capping layer on the surface of the QDs usually possesses reactive groups that can be used for conjugation of target molecules using a variety of available chemistries. Historically, and generally still true today, carboxylic acid groups are part of the transfer agent that renders as-prepared hydrophobic QDs soluble in water at a physiological pH of 7.4 and enables biological functionalization with proteins via amide bond formation. Generally, commercially available carbodiimide crosslinking reagents such as 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC) are used [29]. Coupling efficiencies may be enhanced with the addition of N-hydroxysuccinimide (NHS) or sulfo-NHS. However, charge cancellation of anionic QDs with cationic species can render the colloidal dots unstable, causing them to precipitate from water [23,30]. This effect is mitigated by cancelling out the electrostatics of either the dots or the reagents using poly(ethylene glycol) [30,31], which prevents the loss of product and enhances reaction yields. Sulfhydryl-coated QDs may be functionalized using a reagent such as sulfosuccinimidyl-4-(N-maleimidomethyl)cyclohexane-1-carboxylate (Sulfo-SMCC), which is an amine-to-thiol cross-linker [32]. Other conjugation approaches based on biotin-streptavidin interactions [33] or self-assembly of polyhistidine-tagged molecules on the zinc-rich surface of the QDs [34][35][36] are also frequently reported. Herein we address some recent developments of QD-based fluorescent resonance energy transfer (FRET) sensors, including a brief discussion of the basic principles of FRET and an overview of the most commonly used sensing strategies and schemes for the detection of a wide variety of analytes. As the body of literature on QD-based sensors is very large, we highlight just a few examples in order to limit the scope to sensing motifs rather than to create a full list of analytes that have been targeted. FRET Principle Fluorescence resonance energy transfer is a non-radiative transfer of energy from an excited dipolar species (the donor) to an acceptor that is in close proximity. FRET is dependent on the distance between the donor and the acceptor, typically in the range of 1-10 nm, as well as the spectral overlap of the donor emission with the acceptor absorption. The FRET efficiency (E) can be defined as E = 1 − k_D/k_DA = R_0^6/(R_0^6 + r^6) = 1 − I_DA/I_D (1), where k_DA is the donor decay rate in the presence of an acceptor, k_D is the donor decay rate in the absence of an acceptor, R_0 is the donor-acceptor distance at which the FRET efficiency is 50% (which depends on the overlap of the donor emission with the acceptor absorption), r is the actual donor-acceptor distance, I_DA is the integrated emission intensity of the donor in the presence of an acceptor, and I_D is the same without an acceptor. The relationship is fundamentally based on Fermi's Golden Rule for interacting dipoles, and carries with it all the approximations that the theory depends upon. For example, the sixth-power distance scaling is the result of the inverse r^3 dependence of dipole-dipole interactions and the squared coupling term of Fermi's Golden Rule. Not all types of energy transfer conform to this mechanism, and these therefore have different distance dependencies. Furthermore, the donor state cannot be extremely short lived, and first-order time-dependent perturbation theory needs to be applicable, i.e., back energy transfer from acceptor to donor should be minimal.
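To make the distance dependence in Equation (1) concrete, the short sketch below evaluates the single-donor/single-acceptor efficiency for a hypothetical pair; the Förster radius R0 and the separations r are illustrative values, not parameters from any of the cited studies.

# Illustrative only: FRET efficiency vs. donor-acceptor distance for an assumed
# Forster radius R0. Values are hypothetical, not taken from the cited work.

def fret_efficiency(r_nm, R0_nm):
    """E = R0^6 / (R0^6 + r^6), single donor / single acceptor case (Eq. (1))."""
    return R0_nm ** 6 / (R0_nm ** 6 + r_nm ** 6)

R0 = 5.0                                   # assumed Forster radius in nm
for r in (2.5, 5.0, 7.5, 10.0):            # donor-acceptor separations in nm
    print(f"r = {r:4.1f} nm  ->  E = {fret_efficiency(r, R0):.3f}")
# E is ~0.98 at r = R0/2, exactly 0.5 at r = R0, and falls below 0.1 by
# r = 1.5*R0, which is why FRET is only useful over roughly 1-10 nm.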
Regardless of the nature of the energy transfer process, the timescale(s) of luminescence rise and decay change when energy transfer is occurring. Obviously, the donor lifetime is shortened while the acceptor lifetime is lengthened; this is generally considered the best evidence of FRET. Studies have shown that QDs are effective energy donors to organic dye acceptors in a wide range of FRET-based processes [37][38][39] due to their high quantum yields and extinction coefficients, narrow and symmetrical emissions, and photochemical stability. This is somewhat surprising, as the large size of nanocrystals would seemingly force the donor-acceptor distance to be so large (greater than several nm) as to preclude any significant efficiency. The mechanism of the energy transfer process is due to a dipole-dipole interaction [40] and follows the standard r^6-dependent FRET efficiency. Regardless, there are some alterations to the standard model that need to be implemented. For example, it is not feasible to construct a single-QD-donor/single-dye-acceptor system. In this case, the FRET efficiency where a single QD donor interacts with n acceptors simultaneously, all located at the same distance r, can be expressed by [41] E_n = nR_0^6/(nR_0^6 + r^6). This is further complicated by the fact that there should be a heterogeneous distribution of the n acceptors per QD that follows Poissonian statistics [42], where the probability of a QD having n acceptors is P(n, λ) = λ^n e^{−λ}/n!, where λ is the average dye:QD ratio. There are a few other issues to note, such as the odd fact that the FRET efficiency calculated from time-resolved data is rarely equivalent to that measured by total donor emission quenching (Equation (1)). This is likely due to the complex nature of QD donor lifetimes, which are generally multi-exponential. Last, QDs are not good FRET acceptors from organic dye donors [43]. This was conjectured to result from the longer excited-state lifetime of the QD (>10 ns) compared to organic dyes (<10 ns). This is consistent with a breakdown of Förster theory due to the non-applicability of some approximations used in its derivation, as discussed above. However, the use of the QDs as acceptors with longer-lifetime donors such as lanthanides is efficient [44,45]. Furthermore, new and interesting variations of energy transfer called bioluminescence- and chemiluminescence-resonance energy transfer have been recently demonstrated. In these systems, biologically emissive proteins conjugated to QDs create "self-lighting" dot systems that require chemical, rather than physical, excitation. The use of QDs as donors in FRET systems offers several advantages over conventional fluorophores [46][47][48]. Nanocrystals can be excited at regions far from the acceptor's absorption spectrum. This minimizes the direct excitation of the acceptor that would otherwise complicate data analysis. The emission spectra of QDs are narrow, tunable, and symmetric, allowing for the engineered maximization of the overlap with the acceptor absorption without tailing into the acceptor emission region. Last, QDs can be multifunctionalized to impart both biological targeting and sensing functionalities. Turn-On Sensors Since the theoretical prediction of energy transfer from QDs to organic dyes [49] and its experimental realization [50], numerous examples of the synthesis and study of QD-FRET coupled chromophores have been reported. Generally this work has been motivated by sensing, where the design is tailored to the chemical nature of the analyte.
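Because each QD in an ensemble carries a Poisson-distributed number of acceptors, the measured efficiency is the average of the n-acceptor expression weighted by the Poisson probabilities given above. The sketch below combines the two expressions in this way; R0, r, and the mean dye:QD ratio are again assumed, illustrative values rather than fitted parameters from the references.

# Illustrative sketch: ensemble FRET efficiency for a QD donor carrying a
# Poisson-distributed number of acceptors (mean ratio lam), all at distance r.
# R0, r and lam are assumed values, not taken from the cited studies.
from math import exp, factorial

def E_n(n, r, R0):
    """Efficiency for one QD with n equidistant acceptors: n*R0^6/(n*R0^6 + r^6)."""
    return n * R0 ** 6 / (n * R0 ** 6 + r ** 6)

def poisson(n, lam):
    """Probability that a QD carries exactly n acceptors."""
    return lam ** n * exp(-lam) / factorial(n)

def ensemble_efficiency(lam, r, R0, n_max=50):
    """Average of E_n over the Poisson distribution of acceptors per QD."""
    return sum(poisson(n, lam) * E_n(n, r, R0) for n in range(n_max + 1))

R0, r = 5.0, 6.0                              # nm, assumed
for lam in (0.5, 1, 2, 4, 8):
    print(f"mean dyes/QD = {lam:4.1f}  ->  <E> = {ensemble_efficiency(lam, r, R0):.3f}")
# <E> rises with dye loading but saturates, mirroring the acceptor-titration
# behavior commonly reported for QD-dye assemblies.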
The Mattoussi group pioneered the use of QD scaffolds for the development of FRET assays via self-assembly of dye-labelled proteins onto QDs [41]. One of their first reports was based on the competitive binding of a quencher with the target analyte. Initially, QD emission is highly suppressed due to the proximity of a strong quencher that is weakly bound to the protein. Upon addition of the analyte, the competitive displacement of the quenching dye with the target molecule results in a concentration-dependent recovery of the QD emission ( Figure 1). This type of reporting results in a so-called "turn-on sensor". While somewhat difficult to calibrate in a complex system (i.e., in vivo), turn on sensing is most useful for the absolute detection of a disease-state biological marker. This strategy was first demonstrated by sensing maltose by Medintz et al. [51]. In this study, CdSe/ZnS QDs were used as a scaffold for the self-assembly of polyhistidine-terminated maltose binding protein (MBP-His) prebound to β-cyclodextrin that was labeled with a quenching dye. Initially, the quencher rendered QD emission inefficient; however, the displacement of the β-cyclodextrin with maltose resulted in the recovery of the QD PL emission. A similar rationale was used to detect the presence of trinitrotoluene (TNT) [52]. An antibody for TNT that was expressed with an oligohistidine tag was coordinated to CdSe/ZnS QDs in water. This complex was exposed to trinitrobenzene (TNB) labeled with BHQ10 quenching dye which resulted in QD quenching, which was recovered by the addition of TNT samples. This report was significant as it demonstrated the generality of the method through the use of an engineered antibody. Ratiometric Sensors, Cleaving the Link The strategy of donor-acceptor spatial modulation has been used to create a ratiometric response, which is essentially a color change as the reporting mechanism rather than emission intensity modulation. An early demonstration concerned the measurement of proteolytic enzyme activity. Here, a QD is functionalized with a peptide linker that is terminated with an energy accepting fluorescent dye. Due to a finite FRET efficiency, emission can be observed from both the QD donor and dye acceptor. However, the ratio of the intensities of the two chromophores is altered by the addition of an analyte, which cleaves the linker causing the dye to diffuse away from the QD. As a result, nanocrystal emission becomes more dominant ( Figure 2). Due to their regulatory role on functionality of many proteins involved in major human diseases such as cancer and AIDS, proteases and metalloproteases are the most studied enzymes using this strategy [53][54][55][56][57][58][59][60][61][62][63][64][65][66][67]. Medintz et al. employed this strategy for sensing caspase-1, thrombin, chymotrypsin, and collagenase using engineered peptides [68]. Specifically, they synthesized a peptide containing a terminal polyhistidine unit for QD conjugation, then a protease-specific cleavage sequence, and ending with cysteine that was a functional handle for dye conjugation. Addition of the dye functionalized peptides to aqueous dots resulted in enhanced dye emission and QD quenching due to energy transfer. Enzyme addition cleaves the peptides, causing QD-dye displacement and a loss of FRET efficiency that caused a resurgence in the QD emission. 
A similar motif was constructed by Rao and coworkers [69] for detection of β-lactamase in which an enzyme substrate labeled with carbocyanine dye (Cy5) was immobilized on the QDs through biotin-streptavidin binding. This study was important as the group reported that the length of the linkage and its coverage density on the QD surface affect the rate of enzyme activity. Stevens and coworkers reported a simple detection method for histone acetyltransferase [70]. The group also demonstrated the first multiplex sensing mechanism for simultaneous detection of proteases and kinases using a QD-FRET probe [71]. Caspase 3 is a member of the cysteine-aspartic acid protease family that plays an important role in programmed cell death (apoptosis) by cleaving a variety of key cellular proteins [72]. Caspase 3 activity has been proven to be involved in a variety of cancers such as breast, ovarian, and prostate, as well as Alzheimer's disease [73][74][75]. Thus, developing biosensors for monitoring the activity of this enzyme in vitro and in vivo has attracted significant attention recently. Medintz and coworkers demonstrated peptide-based QD FRET sensors for monitoring proteolytic activity of caspase 3 [67,76]. In one of the designs, mCherry red fluorescent protein comprising an intervening N-terminal caspase 3 cleavage site was self-assembled on the surface of the QDs via the terminal His6-sequence of the peptide linker. Due to the close proximity of the fluorescent protein to the QDs, there is an efficient transfer of energy between them, but upon addition of the caspase 3 enzyme, the linker is cleaved and the fluorescent protein diffuses away causing a reduced FRET efficiency and a quantitative ratiometric change in donor-acceptor emissions [67]. In another design a peptide linker was engineered to have C-terminal cysteine at one end, an intervening DEVD sequence that serves as the caspase 3 cleavage site, and an N-terminal primary amine at the other end ( Figure 3). The cysteine thiol group was used to label the peptide with Texas Red maleimide, and the amine end was used for conjugation of the dye-labeled peptide to the carboxyl groups present on the surface of the polyethylene glycol modified dihydrolipoic acid (DHLA-PEG) capped QDs using EDC. The response mechanism is again based on linker cleavage in the presence of caspase 3 and the change in the FRET efficiency that is proportional to the amount of the enzyme present [76]. Medintz and coworkers recently reported a two-step assay for kallikrein proteolytic activity based on QD-FRET [77]. Kallikrein is a serine protease capable of initiating the blood clotting cascade. Aberrant activity of this enzyme in human plasma can potentially result from the presence of oversulfated chondroitin sulfate (OSCS) contamination, which is a drug adulterant in heparin. The design of the QD-based FRET sensor for kallikrein is challenging due to the fact that the large size of this enzyme causes a steric hindrance that prevents proper access and cleavage of the substrate near the QD surface. To overcome this challenge, the authors developed a two-step assay. The peptide substrate is composed of a terminal histidine tag that allows the self-assembly of the substrate on the QD surface, a kallikrein cleavage sequence, followed by a rigid spacer sequence with a N-terminal cysteine that provides a thiol group for labeling with a cyanine dye (Cy3). 
In the two-step assay shown in Figure 4, a known concentration of the Cy3-labeled kallikrein substrate peptide was digested in the presence of the enzyme, and then in second step, the sample was diluted with a buffer containing the QDs. The observed changes in QD-dye emission ratio due to FRET can be correlated to the concentration of the cleaved peptides over time. Ratiometric Sensors, Structural Rearrangement Another ratiometric sensing strategy is based on analyte-induced structural changes in a linker that connects a QD-dye FRET pair ( Figure 5). Such a design is ideal for the use of a DNA molecular beacon due to its hairpin structure [78][79][80][81][82][83][84]. The dye-labeled end is initially in close proximity to the QD. Upon addition of the target DNA and hairpin unfurling the dye-labeled end moves away from the QD resulting in a FRET efficiency decrease and subsequent QD PL emission recovery ( Figure 5). Using DNA to link QDs and FRET accepting dyes can perform more functions other than to detect the presence of complementary DNA. Recently, Kay et al. reported on the usage of a cytosine-rich oligonucleotide to bridge QDs and a rhodamine dye [85]. The DNA fragment was chosen because it is known to undergo conformational changes as a result of pH which results in a greater QD-dye distance under basic conditions. The group also successfully used this system to measure the acidification of maturing HeLa cell endosomes. The Response Mechanisms of QD-Based FRET Sensors: Spectral Overlap Another QD-FRET sensor design involves the changes in the acceptor dye photophysical properties in response to the presence of the target analyte. For example, since FRET depends on spectral overlap of the QD emission with the dye absorption spectrum, any changes in absorption spectrum of the dye such as cross sectional or wavelength shifts can alter the FRET efficiency. The change in the dye optical properties may result from a chemical reaction with the analyte or perhaps by an alteration of the dye's microenvironment. This in turn alters the ratio of the QD to dye emission that can be correlated to the analyte concentration. This strategy is very facile for the detection of small chemical elements such as hydrated salts or pH levels [76,86,87]. pH The first ratiometric QD-based FRET pH sensor was reported by Snee et al. [88]. The system was constructed using CdSe/ZnS QDs encapsulated with a hydrophobically modified poly(acrylic acid) that is conjugated to squaraine dye via an ester bond ( Figure 6). FRET is established between the coupled chromophores as verified by time resolved measurements that found the luminescence decay timescales of the blank QD versus QD-dye shortened from 31 ns to 19 ns. The absorption spectrum of the squaraine is pH-dependent because of the (de)protonation of phenolic groups around the squaraine functionality. At high pHs the FRET efficiency decreases due to a lowering of the dye absorption resulting in an emission spectrum that is dominated by QD luminescence. At low pHs the opposite occurs, resulting in quenching of the QD emission due to more energy transfer to the dye. The ratiometric response of the sensor offers advantages over single intensity based response due to less sensitivity to light fluctuations and the fact that dual-emissive ratiometric probes are self-calibrating. The sensor was also used to measure pH in a highly scattering medium, with results that were within a 5% margin of error. 
Several systems for QD-based ratiometric sensing using coupled QD-sensing dye chromophores have since been reported that follow this strategy. Dennis et al. [86] used pH-sensitive fluorescent proteins (FPs) for the fabrication of QD-FP FRET based pH sensors. In this study mOrange and its homologue mOrange M163K, which is a mutant with a shifted pKa and improved photostability, were conjugated to carboxyl-functionalized QDs using carbodiimide chemistry. The spectral overlap between the QD emission and FP absorption resulted in energy transfer from the QD to the proteins that are sensitive to pH changes. This is a result of the pH dependence of the molar absorptivity of these proteins. Increasing the solution pH caused an increase in the acceptor to donor emission ratio which was shown to be due to modulation of the FRET efficiency (Figure 7). The sensors are functional over the pH range of 6-8 and display the maximum sensitivity near the pKa of the acceptor fluorescent proteins. Indirect excitation of the FPs by FRET also increased their photostability and made simultaneous cell imaging/pH sensing possible. To measure the intracellular pH, the QD-FP conjugates were further modified with polyarginine to promote endosomal uptake. After incubation with HeLa cells and washing, the emission of the endosomal pH sensors was monitored over time using filters to image the QD emission separately from the dye emission. The results indicated that the pH drops as the early endosome matures to a late endosome as expected. Furthermore, all controls confirmed that the results were not due to photobleaching or degradation of the proteins rather than the natural pH change (acidification) that endosomes undergo. Krooswyk et al. [89] studied the response mechanism of this QD-sensing dye motif using a CdS/ZnS-fluorescein pH sensor. They initially proposed that the response mechanism should be dictated only by organic dye's photochemical properties. And while the pH response of the neat dye was consistent with the Henderson-Hasselbalch equation, as it should, the QD-dye conjugate pH-dependent fluorescence emission ratio deviated from the typical sigmoidal shape. As a result, it was found that the conjugation of the organic dye to a QD significantly changes the dye's microenvironment. Taking into account the variables that can affect the FRET efficiency and subsequent response of the QD-dye pH sensor, it was determined that the response is a function of the QD size as this changes the nanocrystal emission which in turn alters the FRET overlap integral, the swelling of the polymer used for QD encapsulation that changes the QD-to-dye distance [90], and the electrostatically different microenvironment that dye senses after conjugation into the QD. As such, it is very difficult to predict the performance of a QD-dye FRET sensor based on photochemical properties of the organic dye alone. Metals Environmental contamination by metals is hazardous as some metal ions are extremely toxic. As such, the use of QDs for toxic metal sensing is topical, and many groups have reported on their usage. It has been well known that even the most well passivated aqueous QDs are quenched by cations such as Ag + [91], Pb 2+ [92], Fe 2+ [93], Cu 2+ [94,95] and Hg 2+ [96] due to cation exchange [97]. As a result, quenching of QD fluorescence by these metals is highly non-specific and is not a practical analytical method in general. 
Furthermore, this non-selective ion quenching is an impediment to the development of a FRET sensor with QD donors due to direct and irreversible destruction of the donors by the analytes. Despite this, Page et al. [98] reported a ratiometric mercury sensor based on FRET between green emitting CdSe/ZnS QDs and a thiosemicarbazide functionalized rhodamine B dye (Figure 8). The dye has a unique role in that it reacts with mercuric ions to form mercury sulfide, which is highly insoluble in water. This prevents QD quenching by the metal. Furthermore, in the absence of Hg 2+ ions the dye is optically silent due to disruption of the delocalized electronic structure of the dye by the thiosemicarbazide group. Thus, the QD is the dominant emitter in the QD-dye coupled chromophore. Upon exposure to Hg +2 ions the desulfurization reaction that creates HgS also results in the restoration of the dye's absorption and emission properties, causing it to act as a good energy acceptor for the QD donor. Initially, an amine functional thiosemicarbaziderhodamine dye was developed for conjugation purposes, but the dye was very unstable and self-activated without mercuric ions. However, it was found that the thiosemicarbaziderhodamine dye had a very strong non-specific interaction with the surface of the polymer-coated QDs, so chemical coupling was unnecessary. It is worth noting that this QD protection mechanism is not perfect and QD emission quenching occurs via exposure to Hg 2+ ions, which likely explains the high limit of detection (79 ppb). However, the motif of "sense and sequester" does function for its intended purpose. Li et al. [99] reported a FRET system for detection of Hg 2+ using aqueous CdTe QDs as donors and butylrhodamine B (BRB) as an acceptor. Since the electrostatic interaction between the electronegative QDs and cationic BRB dye was not strong enough for self-assembly to occur, QD-dye adsorption was promoted using surfactants. Specifically cetyltrimethylamonium bromide (CTMAB) formed micelles that enhanced the QD-dye interactions and improved the FRET efficiency. Energy transfer was confirmed by the significant enhancement of the BRB PL emission and corresponding quenching of the CdTe PL emission. Upon addition of Hg 2+ to the FRET system, both the donor and the acceptor were quenched, but BRB was quenched more than the CdTe QDs. The response mechanism can be explained by cation exchange of Cd(II) by Hg(II) on the QD surface that results in fluorescence quenching of the CdTe and concomitant quenching of BRB PL emission due to the loss of FRET. The sensor showed linear response over a wide Hg 2+ concentration range and had a detection limit of 20.3 nmol·L −1 (4.07 ppb). Another method to mitigate the quenching of QD donors for mercuric ion sensing is based on silica coatings to protect the nanocrystals. For example, Liu et al. [87] adapted this strategy by first embedding CdTe QDs inside silica particles using a microemulsion method. The nanocrystals can still be quenched by Hg 2+ , so the particles were then given a thin cationic shell that electrostatically prevented mercury absorption. The surface was also linked to a spirolactamrhodamine B derivative that served as the Hg 2+ probe. The response mechanism is based on ring-opening of the spirolactam due to complexation with Hg 2+ , which results in FRET between the QD and activated dye that provided a ratiometric response. Time resolved emission also was used to demonstrate energy transfer from the QDs to the activated dye. 
The only negative aspect of this work is that the necessary cationic shell was detrimental to the detection limit that was found to be 260 nmol·L −1 (52 ppb). There are other examples of using silica embedded QDs to ratiometrically detect mercuric ions [100,101], however these are not FRET based systems. Proteins and Enzymes The enzyme-linked immunosorbent assay (ELISA) [102] is ubiquitous for the detection of proteins that are biomarkers for a variety of diseases such as influenza, cancer, and HIV. The methodology is heterogeneous and generally relies on multiple antibody-antigen interactions. Specifically, a substrate-bound capture antibody is used to immobilize its antigen whereupon another species-specific antibody-enzyme conjugate is used to catalyze the formation of an absorptive or fluorescent indicator. Highly engineered ELISA assays are very sensitive towards their targets, but the use of secondary antibodies is somewhat time-consuming and difficult to perform due to the multiple washing and blocking steps involved. As such, the development of QD protein sensors that function outside of the ELISA paradigm are if interest [56,80,103,104]. Recently Tyrakowski et al. demonstrated a novel paradigm of ratiometric sensing of proteins using QD FRET-based sensors without a secondary antibody requirement [105]. The system is composed of a three spoke wheel-like structure where rhodamine B piperazine, biotin (a streptavidin agonist), and a QD are all conjugated together with lysine ( Figure 9). The QD and dye are in close proximity and both chromophores are emissive due to energy transfer. Binding of the protein streptavidin to biotin brings the protein in close proximity to the dye. This in turn alters the microenvironment of the dye and causes quenching that results in a ratiometric response ( Figure 9). The sensor responds specifically to streptavidin, and other controls such as a similar system without biotin show no response towards the protein. The detection limit is calculated to be in the same range of ELISA methods, and it can be lowered by diluting the sensor concentration which is a known benefit of ratiometric sensors. This detection method is fast and easy to perform due to the homogeneity of the sensing platform. The authors also showed that the method can be adapted to detect other proteins using DNA oligonucleotide aptamers. They reported a similar sensing mechanism for the detection of thrombin using thrombin binding aptamer coupled to TAMRA as the energy accepting dye. At the same time Zhang et al. reported a nearly-identical system for thrombin sensing where protein binding displaced a FRET-accepting dye bound to the aptamer [106]. Oxygen Tumors develop in a state of hypoxia due to rapid growth, and oxygen levels inside the tumor can be indicators of tumor health and the effectiveness of treatment. Therefore the development of reliable oxygen sensors to probe biological environments has attracted significant attention. There are many reports on oxygen sensitive materials, generally composed of heavy metal complexes [107,108], metaloporphyrins [109,110], and polycyclic aromatic hydrocarbons [111,112]. All of these materials have room temperature phosphorescence emission that is quenched by the presence of oxygen and generate singlet oxygen in the process [113], which is known to have a destructive effect on cancer tumor cells and can also be used as photodynamic therapy [114,115]. 
There are many reports on the successful conjugation of photosensitizers such as rose bengal, chlorin e6, and phthalocyanine with QDs that serve as FRET donors to generate singlet oxygen for therapeutic purposes [115][116][117][118][119]. Considering the unique electro-optical properties of QDs, they can serve as both a versatile platform for immobilizing oxygen sensitive materials and as a light absorption and excitation source of the phosphors. QDs exhibit higher absorption cross-sections compared to phosphors, and thus FRET-driven excitation of the phosphors can have a higher yield compared to direct excitation. As a well-passivated nanocrystal material is not intrinsically oxygen sensitive, the QD acts as more of an internal reference however this still makes a ratiometric response possible. McLaurin et al. [120] reported a QD-FRET ratiometric oxygen sensor designed to detect oxygen in biological environments. Two osmium (II) polypyridyl complexes with amine functional linkers were synthesized as the oxygen sensitive materials. They were then conjugated to the QD via amide bond formation. The QDs were designed to function as FRET donors by engineering an overlap with the osmium (II) complexes' broad absorption spectra and the narrow QD narrow emission. Excitation was also performed using two-photon absorption, which allows for deeper tissue penetration and more selectivity in terms of localizing the excitation (Figure 10). Lemon et al. [121] reported a similar QD-FRET oxygen sensor using palladium porphyrins as oxygen sensitive materials. The QD-porphyrin conjugates were constructed by self-assembly of Pd porphyrins on the QD surface through meso-pyridyl substituents. Excitation of the Pd porphyrins is via FRET and the ratiometric response to the presence of oxygen is based on the proportional quenching of the Pd porphyrin emission. Ingram et al. [122] reported the construction of a ratiometric optode for oxygen detection based on FRET from a QD to platinum (II) octaethylporphine ketone. A combination of polyvinyl chloride and bis(2-ethylhexyl)sebacate was used as a biologically inert and O2 permeable polymer matrix for fabrication of the optode. The optode was then used for the real-time extracellular measurement of O2 in biological microdomains of active brain tissue. Figure 10. Schematic illustration of a two-photon absorbing QD-FRET oxygen sensor (reprinted with permission from [120], Copyright 2009 The American Chemical Society). The Response Mechanisms of QD-Based FRET Sensors: High Dye Loading Levels There is a significant level of interest in the synthesis of nanomaterials that are simultaneously fluorescent and magnetic. Unfortunately, some basic properties of semiconductor photophysics generally dictate that magnetic materials have fast de-excitation pathways that prevent fluorescence. Hence, there is a bifurcation in research where magnetic semiconductor nanomaterials are generally only used for MRI contrast agents whereas fluorescent quantum dots are used for visible light imaging. Our group recently addressed this issue by coating Fe2O3 nanocrystals [123] with fluorescein dye to make a magnetic nanomaterial that functioned as a pH sensor [18]. Recently we added another dimension to this where fluorescein and amine-functional carboxytetramethylrhodamine (TAMRA) were both conjugated to the surface of magnetic iron oxide nanomaterials. As shown in Figure 11, the system responds ratiometrically to a change in pH which is fully calibratable. 
There is a FRET interaction between the dye coupled chromophores as evident from the PLE spectrum of the TAMRA dye which showed fluorescein-like features. Furthermore, we can attach hundreds of dyes per dot [30], which mitigates photobleaching of the organic chromophores. Figure 11. Coupling fluorescein and TAMRA dyes to the surfaces of Fe2O3 nanoparticles creates a FRET pair between the two chromophores to produce a ratiometric emission spectrum as a function of pH. Conclusions and Outlook Semiconductor quantum dots offer advantages for use in analytical applications as FRET-based chemosensors and biosensors. Due to their unique electro-optical properties and recent developments on numerous water-solubilization and conjugation strategies, QDs provides a versatile platform for FRET-based sensing probes. Current studies have shown that QDs can be effectively employed as FRET donors to organic dyes and fluorescent proteins, offering advantages such as narrow symmetric emission with minimum spectral bleed-through and broad absorption spectra. This allows for excitation of the QDs donor without direct excitation of the acceptor(s). In some special cases QDs can also play the acceptor role with some lanthanides which is useful in time-resolved fluoro-immunoassays. Despite of the remarkable developments towards the use of the QDs in biosensing, there is still a demand for a simple and practical strategy for the intracellular delivery of these QD-based biosensors [124]. Another direction that the community is moving towards is cadmium free nanocrystals; however, it should be noted that the toxicology of cadmium materials does not appear to be an issue even in animal studies [125].
7,782.6
2015-06-01T00:00:00.000
[ "Biology", "Chemistry", "Materials Science" ]
Hybrid Faraday rotation spectrometer for sub- ppm detection of atmospheric O2 Faraday rotation spectroscopy (FRS) of O2 is performed at atmospheric conditions using a DFB diode laser and permanent rare-earth magnets. Polarization rotation is detected with a hybrid-FRS detection method that combines the advantages of two conventional approaches: balanced optical-detection and conventional FRS with an optimized analyzer offset angle for maximum sensitivity enhancement. A measurement precision of 0.6 ppmv·Hz for atmospheric O2 has been achieved. The theoretical model of hybrid detection is described, and the calculated detection limits are in excellent agreement with experimental values. ©2012 Optical Society of America OCIS codes: (300.6260) Spectroscopy, diode lasers; (300.6390) Spectroscopy, molecular; (280.4788) Optical sensing and sensors. References and links 1. R. F. Keeling, R. P. Najjar, M. L. Bender, and P. P. Tans, “What atmospheric oxygen measurements can tell us about the global carbon cycle,” Global Geochem. Cycles 7(1), 37–67 (1993). 2. R. F. Keeling, “Measuring correlations between atmospheric oxygen and carbon dioxide mole fractions: a preliminary study in urban air,” J. Atmos. Chem. 7(2), 153–176 (1988). 3. R. Kocache, “The measurement of oxygen in gas mixtures,” J. Phys. E Sci. Instrum. 19(6), 401–412 (1986). 4. O. S. Wolfbeis and M. J. P. Leiner, “Recent progress in optical oxygen sensing,” Proc. SPIE 906, 42–48 (1988). 5. A. A. Gorman and M. A. J. Rodgers, “Current perspectives of singlet oxygen detection in biological environments,” J. Photochem. Photobiol. B 14(3), 159–176 (1992). 6. D. B. Papkovsky, “Methods in optical oxygen sensing: protocols and critical analyses,” Methods Enzymol. 381, 715–735 (2004). 7. L. Pauling, R. E. Wood, and J. H. Sturdivant, “An instrument for determining the partial pressure of oxygen in a gas,” J. Am. Chem. Soc. 68(5), 795–798 (1946). 8. R. P. Kovacich, N. A. Martin, M. G. Clift, C. Stocks, I. Gaskin, and J. Hobby, “Highly accurate measurement of oxygen using a paramagnetic gas sensor,” Meas. Sci. Technol. 17(6), 1579–1585 (2006). 9. M. Benammar, “Techniques for measurement of oxygen and air-to-fuel ratio using zirconia sensors. A review,” Meas. Sci. Technol. 5(7), 757–767 (1994). 10. J. Ye, Y. Wen, W. D. Zhang, H. Cui, L. M. Gan, G. Q. Xu, and F. Sheu, “Application of multi-walled carbon nanotubes functionalized with hemin for oxygen detection in neutral solution,” J. Electroanal. Chem. 562(2), 241–246 (2004). 11. Z. Fan, D. Wang, P. Chang, W. Tseng, and J. G. Liu, “ZnO nanowire field-effect transistor and oxygen sensing property,” Appl. Phys. Lett. 85(24), 5923–5925 (2004). 12. P. Vogel and V. Ebert, “Near shot noise detection of oxygen in the A-band with vertical-cavity surface-emitting lasers,” Appl. Phys. B 72(1), 127–135 (2001). 13. V. Weldon, J. O’Gorman, J. J. Perez-Camacho, D. McDonald, J. Hegarty, J. C. Connolly, N. A. Morris, R. U. Martinelli, and J. H. Abeles, “Laser diode based oxygen sensing: A comparison of VCSEL and DFB laser diodes emitting in the 762 nm region,” Infrared Phys. Technol. 38(6), 325–329 (1997). 14. Q. V. Nguyen, R. W. Dibble, and T. Day, “High-resolution oxygen absorption spectrum obtained with an external-cavity continuously tunable diode laser,” Opt. Lett. 19(24), 2134–2136 (1994). 15. D. M. Bruce and D. T. Cassidy, “Detection of oxygen using short external cavity GaAs semiconductor diode lasers,” Appl. Opt. 29(9), 1327–1332 (1990). 16. C. Corsi, M. Gabrysch, and M. 
Inguscio, “Detection of molecular oxygen at high temperature using a DFB diode laser at 761 nm,” Opt. Commun. 128(1-3), 35–40 (1996). 17. M. Kroll, J. A. McClintock, and O. Ollinger, “Measurement of gaseous oxygen using diode laser spectroscopy,” Appl. Phys. Lett. 51(18), 1465–1467 (1987). 18. A. Pohlkötter, M. Köhring, U. Willer, and W. Schade, “Detection of molecular oxygen at low concentrations using quartz enhanced photoacoustic spectroscopy,” Sensors (Basel) 10(9), 8466–8477 (2010). 19. R. J. Brecha, L. M. Pedrotti, and D. Krause, “Magnetic rotation spectroscopy of molecular oxygen with a diode laser,” J. Opt. Soc. Am. B 14(8), 1921–1930 (1997). 20. L. Gianfrani, R. W. Fox, and L. Hollberg, “Cavity-enhanced absorption spectroscopy of molecular oxygen,” J. Opt. Soc. Am. B 16(12), 2247–2254 (1999). 21. P. A. S. Jorge, P. Caldas, C. C. Rosa, A. G. Oliva, and J. L. Santos, “Optical fiber probes for fluorescence based oxygen sensing,” Sens. Actuators 103(1-2), 290–299 (2004). 22. B. B. Stephens, R. F. Keeling, and W. J. Paplawsky, “Shipboard measurements of atmospheric oxygen using a vacuum-ultraviolet absorption technique,” Tellus B Chem. Phys. Meteorol. 55(4), 857–878 (2003). 23. M. K. Krihak and M. R. Shahriari, “Highly sensitive, all solid state fiber optic oxygen sensor based on the sol-gel coating technique,” Electron. Lett. 32(3), 240–242 (1996). 24. B. Brumfield and G. Wysocki, “Faraday rotation spectroscopy based on permanent magnets for sensitive detection of oxygen at atmospheric conditions,” Opt. Express 20(28), 29727–29742 (2012). 25. S. G. So, E. Jeng, and G. Wysocki, “VCSEL based Faraday rotation spectroscopy with a modulated and static magnetic field for trace molecular oxygen detection,” Appl. Phys. B 102(2), 279–291 (2011). 26. S. G. So, O. Marchat, E. Jeng, and G. Wysocki, “Ultra-sensitive Faraday rotation spectroscopy of O2: model vs. experiment,” CLEO (Baltimore, Maryland 2011), CThT2. 27. L. R. Brown and C. Plymate, “Experimental line parameters of the oxygen A band at 760 nm,” J. Mol. Spectrosc. 199(2), 166–179 (2000). 28. R. Lewicki, J. H. Doty 3rd, R. F. Curl, F. K. Tittel, and G. Wysocki, “Ultrasensitive detection of nitric oxide at 5.33 µm by using external cavity quantum cascade laser-based Faraday rotation spectroscopy,” Proc. Natl. Acad. Sci. U.S.A. 106(31), 12587–12592 (2009). 29. V. S. Zapasskii, “Highly sensitive polarimetric techniques,” J. Appl. Spectrosc. 37(2), 181–196 (1982). 30. H. Adams, D. Reinert, P. Kalkert, and W. Urban, “A differential detection scheme for Faraday rotation spectroscopy with a color center laser,” Appl. Phys. B 34(4), 179–185 (1984). 31. R. F. Curl, F. Capasso, C. Gmachl, A. Kosterev, B. McManus, R. Lewicki, M. Pusharsky, G. Wysocki, and F. K. Tittel, “Quantum cascade lasers in chemical physics,” Chem. Phys. Lett. 487(1-3), 1–18 (2010). 32. J. Hodgkinson and R. P. Tatam, “Optical gas sensing: a review,” Meas. Sci. Technol. 24(1), 012004 (2013). 33. M. Nikodem and G. Wysocki, “Measuring optically thick molecular samples using chirped laser dispersion spectroscopy,” Opt. Lett. 38(19), 3834–3837 (2013). Introduction The significant role of oxygen in geochemical and biological cycles necessitates its detection in various scientific and environmental settings [1][2][3][4][5][6].
Modern applications are increasingly stringent, requiring greater accuracy, stability, and shorter acquisition times [1,3]. Magnetodynamic detection of oxygen by exploiting its paramagnetic properties is one technique that has been explored to meet these requirements [7,8], but it is susceptible to mechanical vibrations and drifts. Alternate techniques include solid-state sensors for combustion diagnostics [9] and, more recently, miniature oxygen sensors in which alteration of the electrical or chemical properties of nanostructures is used [10,11]. Despite substantial progress in oxygen detection technologies, in situ oxygen detection remains challenging due to issues involving environmental contamination, operating pressure constraints, and interfering molecular species. Optical spectroscopic systems have received strong interest due to their sensitivity and specificity [12][13][14][15][16][17][18][19][20][21] and are frequently built for low-power, field-deployable operation. Sub-part-per-million (sub-ppmv) oxygen detection limits have been achieved with vacuum-ultraviolet absorption [22] and fiber-based techniques [23], although portability, dynamic range, and in situ capabilities hinder their application. To date, we have not identified any sensor in the literature that performs in situ atmospheric oxygen detection with the target sub-ppmv sensitivity required for biorespiratory diagnostics. Recent work [24][25][26] has identified promise for sub-ppmv oxygen detection at atmospheric pressure using Faraday rotation spectroscopy (FRS) that targets the A-band of oxygen at 762 nm. In FRS, an applied magnetic field splits the ³Σg⁻ oxygen ground state, and quantum selection rules allow only ΔM_j = +1 and −1 for rovibronic transitions in the A-band, which interact with right-handed and left-handed circularly polarized light, respectively. This creates circular birefringence, which causes rotation of linearly polarized light as it travels through an oxygen sample; the rotation is detected by projection onto a nearly-crossed polarizer. So et al. [25,26] performed atmospheric pressure FRS on the oxygen P P 1 (1) transition in the A-band [27] using a modulated (AC) magnetic field, yielding an AC-FRS detection limit of 10 ppmv·Hz -1/2 . Brumfield et al. [24] employed rare-earth magnets for static (DC) field generation and conventional balanced-detection FRS for fringe removal and intensity-noise suppression. This configuration was used to achieve a shot-noise limited sensitivity of 6 ppmv·Hz -1/2 . In the present work we demonstrate further sensitivity enhancement using a distributed-feedback (DFB) laser diode and a hybrid-FRS system that combines balanced optical detection with the optimization of the analyzer offset angle typical for conventional AC-FRS systems. The hybrid-FRS technique, which also uses a DC magnetic field, achieves a 10 × enhancement beyond the sensitivity reported in [24]. Conventional FRS methods There are two distinct FRS signal detection methods reported in the literature: (1) the 90°-method, implemented with nearly-crossed polarizers [26,28,29], and (2) the 45°-method, employing balanced optical detection [29][30][31]. The 90°-method is more popular [28,31] because it is simpler to implement, requiring only a single photodetector and two nearly-crossed polarizers (one before and after the gas sample) to measure the Faraday rotation of the light polarization [31].
In this nearly-crossed polarizer configuration, the laser noise is effectively suppressed, providing an improved signal-to-noise (SNR) ratio in the FRS measurement compared to direct absorption spectroscopy. The optimum analyzer uncrossing angle (α opt. ) is determined by equalizing the detector-noise equivalent power (NEP) with the laser intensity noise or quantum shot noise generated from the laser radiation incident on the photodetector. In the 45°-method, the laser noise suppression is performed electronically by splitting the laser beam emerging from the sample cell into two orthogonally polarized components (usually of equal power achieved with α = 45°) and performing balanced optical detection using two photodetector elements. In this configuration, the component FRS signals measured on both photodetector elements are out of phase, while the intensity-noise is in phase allowing for effective laser-noise suppression. Auto-balancing photodetectors (e.g. Nirvana auto-balancing photodetector by New Focus) that require other than a 50/50 split ratio have also been successfully used in balanced detection FRS systems (the split ratio can be conveniently adjusted by varying α) [24]. Prior experimental work has assumed the 45°-method and the 90°-method to be separate [28,30,31]; that is, the use of one method precludes the other. In what follows, we experimentally show that these two techniques can be used in a complementary fashion resulting in a hybrid-FRS method that can achieve a better SNR than either one alone. To identify the benefits of hybrid-FRS, we compare below the performance of hybrid-FRS with the conventional 45°-method and 90°-method. Experimental setup The studies and experiments have been performed using the optical configuration shown in Fig. 1. The setup is similar to that reported by Brumfield et al. [24], but the VCSEL laser used in [24] has been replaced with a DFB diode laser (Sacher Lasertechnik, λ = 762 nm), with small modifications made to the polarization optics to enable hybrid-FRS measurement. The DFB laser targets the P P 1 (1) O 2 transition in the A electronic band. The laser current is modulated at 6 kHz with a modulation depth optimized to maximize the second-harmonic (2f) FRS signal. The ambient laboratory air at room temperature and atmospheric pressure is used as the sample gas for our studies. The laser light is first transmitted through a Glan-Thompson polarizer (GTP) to establish a clean polarization state prior to entering a cylindrical-mirror multi-pass cell (MPC). The cell provides 6.8 m path length with 40% optical throughput. An array of rare-earth magnets generates a 554 Gauss axial magnetic field in the active region probed by the laser beam within the multi-pass cell. Light passing through the cell undergoes polarization rotation due to interaction with the sample and is split (after exiting the cell) into orthogonal components by a Wollaston prism (WP). Balanced detection and demodulation is performed using a Nirvana auto-balancing photodetector (New Focus, model 2007) and lock-in amplifier (Signal Recovery 7265), with automated data acquisition using a DAQ board (NI-USB-6529) and customized LabVIEW software. The auto-balancing function is employed with an optimal 2:1 reference-to signal-photodiode split-ratio (480 µW on the signal-photodiode), resulting in laser intensity-noise suppression of > 20 dB (the 21.6 dB common-mode rejection ratio (CMRR) of the balanced detector was determined experimentally). 
The system is used to perform detection using the 45°-method and hybrid-FRS methods. As the DFB laser is capable of providing up to 30 mW of output optical power, a nano-particle polarizer (NPP) (shown as NPP1 in Fig. 1) is placed just at the laser output to attenuate the laser radiation and avoid detector saturation in the 45°-method. In hybrid-FRS the NPP1 is set to maximum transmission and the detector saturation is avoided by combination of the appropriate α setting and attenuation of the reference branch using the NPP2 (see Fig. 1). Since the optical fringes introduced by the NPP2 are very stable, the autobalancing circuit is capable of compensation of any slow optical power drift between the reference-and signal-photodiodes. It should also be noted that NPP2 is relatively thin resulting in a large fringe free spectral range (> 10 × the P P 1 (1) linewidth), so any parasitic intensity modulation induced by the laser wavelength modulation creates a negligible baseline offset in the measured FRS signal. In order to assess the proposed hybrid FRS method and establish a performance baseline, conventional balanced detection FRS has been implemented using the same system components. In our prior work we used a VCSEL [24] capable of delivering 20 µW of optical power to the signal-photodiode. This resulted in a minimum detection limit (MDL) for O 2 of 6 ppmv·Hz -1/2 . Application of a more powerful DFB laser that can easily deliver up to 480 µW to the signal-photodetector (detector saturation occurs at 500 µW) is expected to result in a 4.9 × improvement in sensitivity to oxygen. At P 0 ≈1.4 mW (which provides 480 µW to the signal detector at split ratio of 2:1), the MDL for O 2 measured with the conventional balanced FRS system using DFB laser is 1.8 ppmv·Hz -1/2 . This corresponds to an improvement of 3.3 × , or 70% of the expected enhancement. The 30% discrepancy has been attributed to the higher laser relative intensity-noise (RIN) of 5.3 × 10 −7 Hz -1/2 (2 × the VCSEL RIN), and an order of magnitude smaller CMRR. At these conditions the 45°-method operates roughly at 2 × the quantum shot-noise limit. However, it should be noted that due to the detector saturation limit only a fraction of the available laser power (P 0 ≈1.4 mW vs. 8 mW maximum available after the MPC) could be used to perform this measurement. Limitations of the 45°-and 90°-method Based on the results obtained with the 45°-method in the previous section, one can realize that its main limitation is related to detector saturation, which limits the amount of total laser power that can be used in the FRS measurement. On the contrary, the 90°-method is free from this limitation, because the amount of light transmitted through a nearly crossed analyzer is significantly lower; however, laser-noise and electromagnetic interference limit its sensitivity. In order to discuss the main limitations in both techniques, an analysis of signal and noise in both methods is performed below. In the 45°-method, the signal (V 45°) and noise (σ 45°) can be expressed as: Where G = 10 5 V/A is the transimpedance gain, R i = 0.5 A/W is the detector responsivity, and P 0 and Θ are the incident power (Watts) on the analyzer and polarization rotation angle in radians (due to the Faraday Effect) respectively. Similarly, in the 90°-method the signal and noise can be expressed as: Theoretically both techniques can ultimately be shot-noise limited. 
With an assumption that the shot-noise limit is achieved when the photocurrent shot-noise becomes equal to either the detector-or laser-noise, one can express the total noise of the system as: The factor 2 represents a quadrature sum of two noise contributions of the same magnitude. The SNR in the shot-noise limited (SNR SN ) case for both methods becomes: Malus' law for the signal-and reference-photodiodes, with a high-power (red curve, hybrid-FRS) and low-power (black curve, 45°-method) case. The low-power case (P 0,B ) can be used for conventional balanced detection when both signal and reference branches are below detector saturation. (b) In conventional balanced detection using low-power, signal enhancement (black arrow) is achieved by increasing P 0,C to P 0,B (detector saturation, or intensity-and shot-noise crossover point). In hybrid-FRS, P 0,C increases to P 0,A (red arrow) with decreasing α at constant P sig (thus constant noise). Signal ~1/α for small α, hence SNR ~1/α ~P 0 1/2 . Hybrid-FRS avoids saturation and intensity-noise limitations by moving the operating point parallel rather than perpendicular to the α-axis. Therefore with an assumption of α < 10° in the 90°-method both techniques should provide comparable ultimate performance. However practical limitations exist, and result in SNRs significantly lower than SNR SN in both techniques. (A) Detector saturation limit in 45°-method Given the total laser power on each detector element is limited by the specified saturation power P sat , there is a limit to the total power P 0 that can be used in the FRS measurement. To understand this point, we may consider a high power (P 0,A ) and low power (P 0,B ) case of Malus' law in Fig. 2(a), where conventional balanced detection may be used in the case of P 0,B (since both signal and reference branches are below saturation, Fig. 2(b)). Using small signal analysis and assuming shot-noise domination (Eq. (7)), it is clear that SNR enhancement will require increasing P 0,B , and is limited by detector saturation, i.e. P 0,B < P sat . Thus despite excellent laser-noise suppression provided by the balanced detection that enables operation in the shot-noise regime, there is a hard limit in the maximum achievable SNR with this technique. Therefore application of more powerful lasers does not bring the P 0 1/2 improvement in SNR predicted by Eq. (7), but only allows for maximum power of P 0 = 2P sat which limits the ultimate performance of the system. (B) Laser noise limit in 90°-method As predicted by Eq. (8), the 90°-method should be capable of the same ultimate sensitivity as provided by the shot-noise limited 45°-method. However the lack of balanced detection that provides an additional 20 dB of CMRR makes it difficult to suppress the laser-noise in the FRS system using the 90°-method. As a result, the laser-noise suppression based solely on the optical suppression through a decrease in α may not be sufficient to achieve shot-noise limited operation (please note that the signal in Eq. (3) scales with sin(2α) while the lasernoise is proportional to sin 2 (α), which allows for improvement in SNR in the laser-noise dominated regime by decreasing α until the detector-noise floor is reached; in such a configuration the shot-noise limited operation can only be obtained with ultra-low noise photodetectors and high-quality polarizers). Therefore most of the FRS systems utilizing the 90°-method operate in the laser intensity-noise dominated regime. Using Eqs. 
(3) and (4) and the experimental parameters relevant to our setup with NEP = 5.4 × 10 −12 pW·Hz -1/2 , RIN = 5.3 × 10 −7 Hz -1/2 , and P 0 = 8 mW (obtained from our 30 mW DFB diode after optical losses in the system, primarily from the MPC), the best SNR 90° is achieved at α = 2°, as shown in Fig. 3(a). For ease of comparison, the SNR 90° values presented in Fig. 3(b) were normalized to the peak SNR of the hybrid-FRS method at P 0 = 8 mW.

Fig. 3. Hybrid-FRS and 90°-method SNR calculation. (a) SNR for the 90°-method at P 0 = 8 mW. For small α, noise ~α 2 and signal ~α. (b) SNR for hybrid-FRS at the same P 0 . CMRR suppression of intensity-noise gives a significant SNR increase. V 90° is normalized to the V hyb maximum, σ hyb is normalized to the σ 90° maximum, and SNR 90° is normalized to the maximum of SNR hyb . (c) Multiple SNR plots for P 0 = 1 mW, 3 mW, 8 mW and 15 mW. From the curves it is clear that SNR hybrid ≥ SNR 90° . The shaded red regions indicate the disallowed operating regime due to the limitation of detector saturation.

Hybrid-FRS

Theoretically, the benefit of hybrid-FRS can be explained using SNR calculations. The hybrid-FRS signal equation is derived as balanced detection for an arbitrary α (0 < α < 45°), with the NPP2 polarizer (Fig. 1) and auto-balancing circuit suppressing P ref by γ = tan 2 (α) and analyzing the difference P sig − γP ref , which yields [24]: We may also consider the noise of the hybrid-FRS, given as the quadrature sum of detector-, shot- and laser intensity-noise: When shot-noise limited operation is achieved, the total noise of the system can be expressed as: It is clear that in the case of α = 45° the shot-noise limited SNR for the hybrid method becomes equivalent to Eq. (7) derived for the 45°-method. For small α, the ultimate shot-noise limited SNR tends to be slightly smaller (by √2) than SNR 90°-SN in Eq. (8). However, given the effective laser-noise suppression through balanced detection that helps in approaching the shot-noise limited regime, a √2 decrease in the ultimate SNR represents only a small penalty for the use of a second detector (the √2 factor originates from the second detector contributing uncorrelated photocurrent shot-noise).

Comparison of hybrid-FRS to the 45°- and 90°-methods

The impact of laser-noise suppression through balanced detection in hybrid-FRS is clearly noticeable in Figs. 3(a) and 3(b), which compare SNR hybrid with the SNR achievable with the conventional 90°-method using experimental parameters relevant to our optical setup. For easier comparison, all values of the SNR presented in Fig. 3 are normalized to the maximum SNR achieved with the hybrid method at P 0 = 8 mW. For the maximum P 0 of 8 mW, the 90°-method achieves optimum operating conditions at α opt. = 2°, with an SNR nearly ½ of that achievable with hybrid-FRS. The hybrid method achieves an optimal SNR at α opt. = 7.3°, indicating a lower effective laser noise achieved with the balanced detection that permits opening the analyzer further to increase the laser power on the photodetector. Similar calculations were performed for other laser powers with the assumption of the same RIN and NEP. Figure 3(c) shows that in all cases the hybrid method demonstrates better performance than the 90°-method. It is also clear that for low-power lasers (e.g. VCSELs, see the P 0 = 1 mW plot) there may be no optimum point on the SNR plot and α opt. becomes 45°, making hybrid-FRS equivalent to the conventional 45°-method.
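The full SNR expressions (Eqs. (3), (4) and (9)–(11)) are not reproduced in this excerpt, so the toy model below is an assumption built from the qualitative description: signal proportional to P 0 Θ sin(2α); detector-, shot- and intensity-noise added in quadrature; intensity-noise suppressed by the CMRR and shot-noise increased by √2 in the hybrid case. Parameter values are the illustrative ones quoted in the text, and the printed optima are only indicative, not expected to match Fig. 3 exactly.

```python
import numpy as np

# Illustrative parameters from the text; the functional forms below are assumptions.
G, Ri = 1e5, 0.5            # V/A transimpedance gain, A/W responsivity
NEP   = 5.4e-12             # detector noise-equivalent power (as quoted)
RIN   = 5.3e-7              # 1/Hz^0.5 relative intensity noise
CMRR  = 10**(-21.6/20)      # ~21.6 dB common-mode suppression (hybrid only)
q     = 1.602e-19           # C, electron charge
theta = 1e-6                # rad, small Faraday rotation (arbitrary scale)

def snr(P0, alpha_deg, hybrid):
    a = np.radians(alpha_deg)
    P_sig = P0 * np.sin(a)**2                    # power reaching the signal photodiode
    V_sig = G * Ri * P0 * theta * np.sin(2*a)    # FRS signal, small-rotation limit
    n_det  = G * Ri * NEP
    n_shot = G * np.sqrt(2*q*Ri*P_sig) * (np.sqrt(2) if hybrid else 1.0)
    n_int  = G * Ri * RIN * P_sig * (CMRR if hybrid else 1.0)
    return V_sig / np.sqrt(n_det**2 + n_shot**2 + n_int**2)

alphas = np.linspace(0.5, 45, 400)
for P0 in (1e-3, 8e-3):
    s90  = snr(P0, alphas, hybrid=False)
    shyb = snr(P0, alphas, hybrid=True)
    print(f"P0={P0*1e3:.0f} mW: alpha_opt(90 deg)={alphas[np.argmax(s90)]:.1f} deg, "
          f"alpha_opt(hybrid)={alphas[np.argmax(shyb)]:.1f} deg, "
          f"peak SNR ratio hybrid/90 = {shyb.max()/s90.max():.2f}")
```

Scanning such curves over α and P 0 reproduces the qualitative behavior discussed above: the 90°-method optimum sits at small α where intensity-noise takes over, while the hybrid optimum moves to larger α and higher peak SNR.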
In the case of low laser power, such that the incident power on the signal-photodiode always remains below detector saturation even at an analyzer uncrossing angle of α = 45° (P sig = P 0 /2) (Fig. 3(c)), the 45°-method is usually expected to provide better performance than the 90°-method. This is due to efficient laser-noise suppression by balanced detection while operating the FRS system at the point of maximum signal generated at α = 45°. This is also indicated in Fig. 3(c), which shows SNR 90° and SNR hybrid at P 0 = 1 mW (effectively SNR 45° ) as a function of α for low power. If the conventional 45°-method is used, SNR 45° can only be improved through an increase of optical power, but once detector saturation is reached, further increase of signal in the 45°-method is not possible (see black line in Fig. 2(b)). Therefore, at higher laser power it is beneficial to use hybrid-FRS and decrease α to achieve an increase in FRS signal while keeping P sig constant and below detector saturation (see red line in Fig. 2(b)). If the laser noise is effectively suppressed by the balanced detector, shot-noise limited operation can be maintained and the SNR follows the P 0 1/2 increase predicted by Eq. (12). In conclusion, hybrid-FRS eliminates the detector-saturation limitation of the 45°-method and reduces intensity-noise through balanced detection, thus enabling shot-noise limited operation that is rarely achieved in practice with the conventional 90°-method.

Power constraints

We now consider the minimum and maximum power limitations in hybrid-FRS. It has been observed in Fig. 3(c), for the given experimental parameters, that when P 0 is low (~1 mW) the SNR curve increases monotonically with no local maximum. In this case there exists a minimum power P 0,min below which no benefit is derived from decreasing α, and conventional balanced detection is superior (i.e. one should work as close as possible to 45° when operating below saturation of the balanced photodetector). One can solve for this condition as the minimum laser power that satisfies d(SNR)/dα = 0 for 0° < α < 45°. The angle α where d(SNR)/dα = 0 is the optimal crossing angle α opt. , which is plotted for varying P 0 in Fig. 4(a). We calculate that below P 0,min = 1.5 mW our system requires conventional balanced detection, while for P 0 > 1.5 mW hybrid-FRS becomes optimal. To optimize system performance, the operating point should lie on the solid black line in Fig. 4(a), while maximizing the available laser power P 0 . Our experimental operating point is indicated by the red circle, with α = 8.3° at P 0 = 7.6 mW, which is close to the theoretical α opt. = 7.5°. Slight deviations of α do not affect the SNR, as each point on Fig. 4(a) represents a zero-derivative SNR condition.

Fig. 4. (a) Minimum power constraint. Optimum crossing angle α opt. vs. P 0 on the balanced photodetector (solid black line). For P 0 < P 0,min no SNR local maximum exists, and conventional balanced detection is superior. The red circle is our experimental operating point. (b) Maximum power constraint. A finite polarization extinction ratio will cause light to leak through the nearly-crossed GTP and WP for large P 0 and small α. The leakage optical power introduces more noise than would otherwise be present. Each curve corresponds to traveling along a line of P sig = 160 μW in Fig. 5(c), and SNR values are normalized to the maximum SNR at P 0 of 10 5 mW calculated for ideal polarizers.
To estimate the maximum power limit one has to consider polarization extinction ratio (R ext. ), which has been neglected so far but will play a significant role for large P 0 and small α, as light leakage will be comparable to the total power transmitted through an ideal polarizer. This effect of finite R ext. manifests itself as a transformation of the reference-photodiode suppression factor γ → γ · ζ in Eq. (9), and signal power P 0 · sin 2 (α) → P 0 · sin 2 (α) · ζ in Eq. (10) where ζ = 1 + (γR ext ) −1 . The effect of the non-ideal polarizers is shown in Fig. 4(b) which shows SNR as a function of P 0 for different R ext . The SNR indeed increases as P 0 1/2 but only until the laser-noise leaking through the polarizers begins to dominate and outweighs any benefit derived from increasing P 0 . In our experiment, although the GTP and WP have R ext. ~10 4 , due to depolarization in the MPC an effective extinction ratio of R ext. ~10 3 has been determined (corresponding to the red curve in Fig. 4(b)). Therefore in our current system the maximum power limit is ~100 mW, beyond which little or no SNR improvement is expected. Hybrid-FRS: measurement results Hybrid-FRS measurements were performed using the system in Fig. 1. The hybrid system is optimized subject to limitations of detector-noise (dominating at P sig < 20 µW), intensitynoise (dominating at P sig > 660 µW), detector saturation (occurring at P sig = 500 µW) and laser power available for measurement of P 0 = 8 mW. Figure 5(a) shows a plot of total noise as a function of optical power P 0 and analyzer offset angle α derived from experimental measurements of laser-and detector-noise. Since each constraint is delimited by a given P sig , the curves determining the operation area of the system follow the form of P 0 = P sig /sin 2 (α). The laser intensity-noise is calculated assuming a CMRR of 21.6 dB ± 1.8 dB, determined as an average from numerous measurements over different optical powers and split ratios. α is the crossing angle between the WP and GTP. P 0 is the total power incident upon the balanced detector. The desired shot-noise region of operation is indicated by the dashed lines. Each of the curves corresponds to a line of constant P sig ; P sig < 20 µW in detector-noise regime, P sig > 500 µW for detector saturation and P sig > 660 µW in intensity-noise regime. The red horizontal line defines the power output limit of the DFB laser after accounting for optical losses of the system. (b) Calculated signals with measurements superimposed (black points), which lie upon a line of constant P sig = 160 µW (dashed curve). The inset shows four measured 2f DC-FRS signals at varying α. (c) Calculated SNR for hybrid-FRS parameter space. Black points indicate measurements along P sig = 160 µW. An MDL of 0.6 ppmv·Hz -1/2 is achieved at α = 8.3° (α opt. = 7.5°), which is 1.4 × the shot-noise limit. The inset shows a comparison of measured detection limits and those calculated from the SNR at P sig = 160 µW (the size of the red circles indicate the measurement error). It is always desirable to operate in the shot-noise regime (shaded) which provides the ultimate sensitivity for a given power and allows for the ~P 0 1/2 increase in SNR. In other noise regimes the SNR either remains constant with increasing P 0 (intensity-noise regime) or is sub-optimal (detector-noise regime). 
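The maximum-power estimate above can be made concrete by evaluating the leakage factor ζ = 1 + (γR ext ) −1 along a line of constant ideal P sig . The full substitution into Eqs. (9) and (10) is not reproduced here; the sketch below (Python, illustrative values) only shows how ζ grows as α is closed at constant P sig , becoming appreciable for R ext ~10 3 once P 0 reaches the order of 100 mW while remaining small for R ext ~10 4 .

```python
import numpy as np

def leakage_factor(alpha_deg, R_ext):
    """zeta = 1 + 1/(gamma * R_ext) with gamma = tan^2(alpha), as defined in the text."""
    gamma = np.tan(np.radians(alpha_deg))**2
    return 1.0 + 1.0/(gamma*R_ext)

# Along a line of constant ideal P_sig = P0*sin^2(alpha) = 160 uW, smaller alpha means
# larger P0 and a larger leakage correction zeta (illustrative values only).
P_sig = 160e-6
for alpha in (8.3, 4.0, 2.0, 1.0, 0.5):
    P0 = P_sig/np.sin(np.radians(alpha))**2
    for R_ext in (1e3, 1e4):
        z = leakage_factor(alpha, R_ext)
        print(f"alpha={alpha:4.1f} deg  P0={P0*1e3:7.1f} mW  R_ext={R_ext:.0e}  zeta={z:.3f}")
```

For the effective R ext ~10 3 of the present setup, ζ becomes of order unity near P 0 ~100 mW, consistent with the maximum power limit quoted above.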
It is important to note that the shaded area is the desirable parameter space accessible by the hybrid-FRS method, and measurement optimization occurs at the point with the highest SNR. Within the same parameter space, the FRS signal from P P 1 (1) O 2 transition is calculated using Eq. (9) and plotted together with the values measured experimentally (Fig. 5(b)). 2f DC-FRS peak signal levels were measured at P sig = 160 µW, and the Fig. 5(b) inset shows examples of hybrid-FRS spectra of the P P 1 (1) line acquired for varying α. Asymmetries in the line-shape are due to residual intensity modulation of the DFB laser, and peak signal measurements are in excellent agreement with calculations. In Fig. 5(c) an SNR map has been generated using the signal and noise from Fig. 5(b) and 5(a) respectively. It is clear from the figure that conventional balanced-detection that operates at a constant α (e.g. 45°) will not approach the highest SNR area through a simple increase of the optical power (the detector saturation level will be reached first). However, for hybrid-FRS which operates at constant P sig = 160 µW, the highest SNR that coincides with the line of the available optical power can be conveniently approached. The minimum detection limit (MDL) of O 2 for each experimental measurement is labeled on the SNR plot and the measured MDL values are in reasonable agreement with the calculated MDL shown in the inset of Fig. 5(c). Taking horizontal cross-sections (fixed P 0 ) of the SNR map, we re-obtain SNR curves shown in Fig. 3(c). Summary We have demonstrated and characterized a hybrid-FRS system that combines optimization of the analyzer angle α with balanced detection. The system detects atmospheric oxygen with an MDL of 0.6 ppmv·Hz -1/2 and consumes < 5 W due to application of permanent magnets for magnetic field generation. We have established hybrid-FRS as a superset of the 90° and 45° FRS detection methods, and the parameter space of operation for our system has been defined. SNR improvements with varying α and P 0 are in good agreement with predictions. For a given set of experimental parameters, hybrid-FRS will always outperform the 90°method due to CMRR-suppressed intensity-noise, and eliminates the detector saturation limitation in FRS systems based on 45°-method. For comparison, Table 1 summarizes alternative atmospheric oxygen detection techniques published in literature. Our result is a 10 × enhancement beyond DC-FRS with a VCSEL emitting only 200 µW [24], and our MDL is also a significant improvement over alternate optical techniques (WMS, QEPAS). Furthermore, FRS is a dispersion-based measurement, thus providing inherently linear response to analyte concentration (in contrast to conventional absorption-based systems, in which the upper bound of dynamic range is limited to ≤ 10% absorption according to the Beer-Lambert Law) [28,33]. For higher O 2 concentrations with absorption of > 10% (in our case this corresponds to > 40% O 2 levels for 6.8 m path length), power normalization of the FRS signal is required, which can be conveniently implemented using the DC optical signal measured with the photodetectors. Finally, the current hybrid-FRS system requires low power for operation (< 5 W), making it desirable for field deployable applications.
Competing Orders and Unconventional Criticality in the Su-Schrieffer-Heeger Model The phase diagram of the one-dimensional Su-Schrieffer-Heeger model of spinless fermions coupled to quantum phonons is determined by quantum Monte Carlo simulations. It differs significantly from previous work and features Luttinger liquid, bond-order-wave (BOW), and charge-density-wave (CDW) phases. The retarded phonon-mediated interaction gives rise to physics known from the frustrated XXZ chain. Most notably, a continuous BOW-CDW quantum phase transition characterized by unconventional power-law exponents can be interpreted in terms of deconfined quantum criticality involving proliferation of solitons (Mudry et al. [1]). acting on bond b between sites i and j = i + 1 andĤ ph = b Equation (1) describes itinerant spinless fermions coupled to optical bond phonons with momentumP b , dis-placementQ b , and frequency ω 0 = K/M . For the present case of a half-filled band ( n i = c † i c i = 0.5), numerics [32,33] and field theory [34] suggest the same physics for optical phonons-which avoid a QMC sign problem-and the original acoustic phonons [18]. We use the dimensionless coupling λ = g 2 /Kt and set , k B = 1. After integrating out the phonons, the partition function for Eq. (1) contains a retarded interaction The free phonon propagator P (τ ) is local in space but its decay in imaginary time τ (here, β = 1/T ) is determined by ω 0 , P (τ ) ∼ e −ω0τ . The associated retardation effects are crucial for the phase diagram in Fig. 1. (1) yields the fermionic hopping termĤ 0 = − b [t + (−1) b ∆]B b studied in the context of topological insulators. The Peierls argument [35] implies that the dimerization ∆ is nonzero for any λ > 0 and opens a gap at the Fermi level. Quantum lattice fluctuations can destroy long-range order at sufficiently weak coupling and thereby allow for a Luttinger liquid (LL) to BOW quantum phase transition (QPT) at a finite λ c (ω 0 ) [32,34,36]. An exact solution (by the Bethe ansatz) is also possible in the opposite, antiadiabatic limit. For ω 0 → ∞, the interaction (2) becomes instantaneous and Eq. (1) maps to the t-V model H ∞ = −t bB b + V in ini+1 with V = λ [34] and an LL-CDW QPT at V c /t = λ c = 2 [37]. The BOW and CDW states spontaneously break translation symmetry. Long-range order at T = 0 is described by Ising order parameters that reflect the two possible BOW (CDW) dimerization patterns related by a shift by one lattice site. CDW and BOW states can be distinguished by point group symmetries. As illustrated in Fig. 1, CDW order breaks bond inversion symmetry but preserves invariance under site inversion. The opposite is true for BOW order. Because different symmetries of Hamiltonian (1) are spontaneously broken, a Ginzburg-Landau theory would correctly suggest the absence of an adiabatic connection between the ground states at ω 0 = 0 and ω 0 = ∞, an aspect ignored in previous work. Apart from its origin in retardation effects, a generically continuous BOW-CDW QPT is captured by the theory of Refs. [38,39] for a frustrated XXZ chain, see below. Method.-To connect the two exact limits, we used a state-of-the-art QMC method based on the stochastic series expansion [40] and directed-loop updates [41]. The performance gain from extending the latter to retarded interactions [42] is essential to explore the phase diagram of the SSH model. The method has only statistical errors and relevant technical details are summarized in the Supplementary Material (SM) [43]. 
All results were obtained for periodic chains of L sites at inverse temperatures βt = 2L. In the nonadiabatic regime [t/ω 0 = 1/10, Fig. 2(b)], we find behavior consistent with an LL at λ = 2 and longrange CDW order at λ = 5. Results for the BOW-CDW transition at λ = 6 are shown in Fig. 5(c). The charge stiffness D ρ converges to a nonzero value (vanishes) for L → ∞ in a metallic (insulating) phase [45,46]. In 1D and at sufficiently low temperatures [47], it can be measured using the winding number estimator for the superfluid stiffness [43,48]. In conjunction with Fig. 2 Fig. 3(a) hence reveals a metal-insulator LL-CDW transition with increasing λ at t/ω 0 = 1/10. If we instead start in the CDW phase at λ = 4, an increase of t/ω 0 appears to drive two consecutive QPTs separated by a metallic region [ Fig. 3(b)]. The correlators in Fig. 2 identify the critical points as CDW-LL and LL-BOW QPTs, respectively. Comparing Fig. 3 Fig. 5(a) reveals that a potential intermediate phase narrows with increasing coupling, but a peak in D ρ can be observed even at λ = 12 [43]. A renormalization-group (RG) analysis of umklapp interactions would suggest Berezinskii-Kosterlitz-Thouless (BKT) LL-BOW and LL-CDW transitions with a critical value K ρ = 1/2 and numerically challenging logarithmic scaling due to a marginally relevant operator [44]. This has been explicitly confirmed for t/ω 0 = 0 [37]. A functional RG study of the SSH model [36] reported unconventional BKT critical behavior with a critical K ρ < 1/2. We will argue below that key aspects of the SSH model are captured by a theory for the frustrated XXZ chain, which predicts conventional BKT LL-BOW and LL-CDW transitions but K ρ < 1/2 along the BOW- CDW critical line [38]. Large-scale simulations of classical frustrated 2D XY models [49,50] indicate a standard BKT transition [49,51] (see, however, Ref. [52]), albeit with challenging crossover phenomena. To obtain the phase boundaries in Fig. 1, we analyzed the finite-size scaling of D ρ (L). The universal stiffness jump at the critical point of the 2D XY model [53] translates to D ρ (∞) = t/2 for the t-V model (the SSH model with t/ω 0 = 0) [37]. For t/ω 0 > 0, we instead find nonuniversal stiffness jumps, with D ρ (L) < t/2 even for small L in, e.g., Fig. 3(b). Possible origins are discussed in the SM [43] and include velocity renormalization, geometry effects, and frustration. We used the first-order scaling ansatz [54] with fit parameters D ρ (∞), g, and C [43]. As for the 2D XY model, fits to Eq. (4) reveal the critical point as a minimum in goodness-of-fit measures (here: the reduced chi-squared χ 2 ν = χ 2 /ν for ν degrees of freedom) [54,55]; for applications to quantum models see Refs. [56,57]. Discussion.-BOW order at t/ω 0 = ∞ and CDW order at t/ω 0 = 0 were established exactly in Ref. [34] by the arguments outlined after Eq. (2). However, the phase diagram was predicted to have a single, monotonic BKT phase boundary separating a LL from a dimerized phase. Similar conclusions were reached by (functional) RG calculations [36,59] and for related spin-phonon models [36,[60][61][62]. A nonadiabatic mean-field approach yields BOW and CDW phases even for large ω 0 [63]. These findings differ significantly from ours. A phenomenological description of the different phases in Fig. 1 is provided by the Goldstone-Wilczek theory of Dirac fermions ψ = (ψ A , ψ B ) (A/B: sublattice components, see Fig. 
1) coupled to classical phonon fields (we drop a phonon term L ϕ ) [64], The masses can be combined into m = (m bow , m cdw ) so that the spectrum E(p) = ± p 2 + |m| 2 [65]. For |m| = 0, L has a U(1) chiral symmetry generated by γ 5 [19]. The soliton charge is e/2 for m cdw = 0 [66] but can be arbitrary for m cdw > 0 [64,67]. Spontaneous mass generation is described by the Gross-Neveu model [68] The umklapp interactions reduce the chiral symmetry to a discrete Ising symmetry and are directly linked to the commensurately filled lattice [39]. In the context of Gross-Neveu theories, the CDW phase observed here for the SSH model is known as an Aoki phase [69]. The BOW phase constitutes an interaction-generated topological insulator [13], adiabatically connected to topological band insulators in the so-called BDI class [70]. A bosonized theory that captures spontaneous BOW and CDW order from a single interaction-as appropriate for Eq. (1)-as well as other key aspects of our findings is that of the frustrated J 1 -J 2 XXZ chain [38,39], The cosine term is irrelevant for K ρ > 1/2 (LL phase) and on the BOW-CDW transition line (where λ φ = 0 [1]). It is a relevant perturbation in the BOW and CDW phases, which are associated with opposite signs of λ φ and a pinning of the charge mode φ at different minima [1,44]. A relation between the SSH electron-phonon model and the frustrated J 1 -J 2 XXZ model emerges from the mapping of phonon-mediated retarded interactions to frustrated spin interactions [62,[71][72][73]. Within the theory (7), QPTs between the LL and symmetry-broken phases are conventional BKT transitions [38]. The unusual critical behavior at the BOW-CDW transition in Fig. 5 mirrors that along the line of continuous dimer-Néel transitions of the frustrated XXZ chain, with a continuously varying exponent η = 2K ρ < 1 [38]. Apart from λ = 6, see Fig. 5, we also find evidence for K ρ < 1/2 at criticality for λ = 4 [43], consistent with a location on the BOW-CDW transition line. Therefore, the two separate critical points (with significant uncertainty) in Fig. 1, inferred from Fig. 4(e), may be an artifact of challenging finite-size scaling in the tricritical region of the phase diagram. A physical picture of how BOW and CDW phasescharacterized by different broken symmetries-can be connected via a generically continuous phase transition is provided by the scenario of 1D deconfined quantum criticality of Ref. [1]. It is based on solitons in the CDW (BOW) order parameter that can be added in pairs and interpolate between the two degenerate CDW (BOW) configurations. Parameterizing the phase of the order parameter by ϕ = (cos ϕ, sin ϕ) [1], see inset of Fig. 1, BOW (CDW) patterns correspond to ϕ = 0, π (ϕ = ±π/2). For example, a defect in the BOW order connecting ϕ = 0, π contains a region with CDW order or ϕ = π/2. Simultaneous proliferation of BOW/CDW defects at ω 0,c provides a mechanism for a continuous transition without fine-tuning. Whereas the mean-field theory (5) suggests a BOW-CDW 'transition' via rotation of m while keeping the gap |m| open, the theory of Refs. [1,38] yields |m| → 0 at the critical point together with an emergent U(1) symmetry. This agrees with our numerical results for the spinless SSH model (1), for which metallic behavior entails a vanishing single-particle gap. Interactiondriven QPTs out of a topological band insulator were recently addressed in Refs. [20,26,74] (see also Ref. 
[14]), where BOW solitons are excluded and the BOW-CDW transition exhibits Ising criticality [26]. The BOW-CDW transition is driven by retardation, which is difficult to capture analytically [33,36,75]. An effective Hamiltonian of the form (7) has to be obtained from an RG treatment of the phonon-mediated interaction, accounting for the different signs of λ φ in the BOW and CDW phases and λ φ = 0 at the transition. The onset of CDW order at ω 0 W suggests that the free bandwidth W = 4t is an important energy scale. Hence, the standard bosonization approach based on a linearization of the electronic dispersion around the Fermi points (equivalent to W → ∞) may be insufficient. A CDW phase is absent in functional RG results [36]. Outlook.-Generalizations of our work include spinful fermions and higher dimensions. For the 1D spinful SSH model, a transition from BOW to a critical spin-densitywave state is expected, whereas a 2D SSH model supports valence-bond and antiferromagnetic phases [76]. We acknowledge helpful discussions with F. Assaad, A. Quantum Monte Carlo Method We used the directed-loop QMC method for retarded interactions in the path-integral representation [42]. It is based on an interaction expansion of the partition function Z = D(c, c) e −S0−S1 around S 0 = dτ ic i (τ ) ∂ τ c i (τ ). A general interaction vertex S 1 = − ν w ν h ν can be written as a sum over vertex variables ν, a weight w ν , and the Grassmann fields contained in h ν . The perturbation expansion becomes with sums over the expansion order n and the ordered vertex list C n = {ν 1 , . . . , ν n }. For each time-ordered configuration of vertices, the expectation value over Grassmann fields can be represented by world lines. The trivial choice of S 0 ensures that the imaginary-time evolution is entirely determined by the interaction vertices. Therefore, Eq. (S1) is the path-integral equivalent of the stochastic series expansion (SSE) representation where Z = Tr e −βH is expanded in the total Hamiltonian [40,77]. Accordingly, many algorithmic features, including the global directed-loop updates [41], directly transfer to the path-integral representation [77]. The retarded interaction of the SSH model includes two bond operators acting at different imaginary times. Therefore, a compatible interaction vertex must contain two subvertices j ∈ {1, 2} with local variables {a j , b j , τ j } labeling the operator type, bond, and time of each operator. For the SSH model with a coupling to optical bond phonons we have b 1 = b 2 = b. The interaction vertex of the SSH model becomes It is important to note that the symmetrized phonon propagator P + (τ ) = ω 0 cosh[ω 0 (β/2 − τ )]/[2 sinh(ω 0 β/2)] is included in the global weight w ν of the vertex. Whereas the bond-bond interaction is already nonlocal in time, the single hopping terms of the kinetic energy are promoted to retarded interactions by including unit operators with a second time variable, i.e., This is possible because β 0 dτ 2 P + (τ 1 − τ 2 ) = 1. As the vertices (S3) and (S4) both contain off-diagonal hopping operators, we have to include a purely diagonal term in the interaction vertex. The simplest choice is a constant shift of the action, With our choice of interaction vertices, we can formulate the diagonal and directed-loop updates similar to the SSE representation [41]. For the diagonal updates, we use the Metropolis algorithm to add and remove vertices h 00,b (τ 1 , τ 2 ) that do not change the world-line configurations but change the expansion order n. 
We propose time differences τ 1 −τ 2 according to the phonon propagator using inverse-transform sampling. Because P + (τ 1 − τ 2 ) appears as a global weight in front of each vertex, it drops out of the directed-loop equations. The latter can be solved for each vertex similarly to the original approach, see the Supplemental Material of Ref. [42]. The constant k in Eq. (S5) has to be chosen such that every weight in the loop assignments is positive. During the propagation of the directed loop, unit operators can be transformed into bond operators and vice versa, leading to local updates h 00,b ↔ h 10,b /h 01,b ↔ h 11,b . Note that the vertices are constructed in such a way that each subvertex can be changed individually while the other subvertex remains unchanged. For details on the updating schemes, we refer to Refs. [41,42]. The calculation of observables in the path-integral (interaction) representation is in many ways similar to the SSE representation. Sandvik et al. [77] systematically compared estimators for electronic correlation functions derived in the two representations. Estimators that only include diagonal operators, such as the charge structure factor C ρ (r) and the charge susceptibility χ ρ (r), are simple to derive and given in Ref. [77]. Estimators including off-diagonal operators can often be recovered from the vertex distribution if there is a vertex that only includes this operator. Measuring the static or dynamic correlations functions of two bond operators at arbitrary bonds b 1 and b 2 is only possible when considering the hopping vertices h 10,b /h 01,b . It turns out that the bond susceptibility χ b (r) has a very simple estimator where only the total number of hopping vertices at bonds b 1 or b 2 has to be computed, see Ref. [77] for the exact estimator. However, calculating the equal-time bond structure factor C b (r = b 1 − b 2 ) = B b1 B b2 in the interaction representation is more involved. While a general derivation is outlined in Ref. [77], we only state the final estimator for the SSH model. For a Monte Carlo configuration C n , the bond structure factor can be estimated from In principle, the sum over p runs over the time-ordered list of all subvertices contained in a world-line configuration. However, we can exclude the unit operators 1 b as they were only introduced to simplify the Monte Carlo sampling. I b1b2 (p − 1, p) is zero unless bond operators B b1 (τ p−1 ) and B b2 (τ p ) originating from the hopping terms h 10 /h 01 appear at adjacent times; then I b1b2 (p − 1, p) = 1. An integral expression for K(p − 1, p) was derived in Ref. [77] and gives K(p−1, p) = 2/(τ p+1 −τ p−2 ) when 4 or more subvertices are present in a world-line configuration. The time difference τ p+1 − τ p−2 ∈ [0, β] is defined by the two subvertices that surround the two bond operators under consideration. Note that K(p − 1, p) = 2/β for 3 subvertices, K(p − 1, p) = 1/β for 2 subvertices, and K(p − 1, p) = 0 for 0 or 1 subvertices. For further details, see Ref. [77]. The Monte Carlo configurations do not give direct access to observables containing phonon fields because the latter have been integrated out to obtain a retarded fermionic interaction. However, bosonic observables can be recovered from electronic correlation functions using generating functionals. In particular, we derived efficient estimators for the total energy, specific heat, fidelity susceptibility, and phonon propagator in Refs. [78,79] that make use of the vertex distribution. 
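The diagonal update described above proposes the time difference τ 1 − τ 2 from the symmetrized propagator P + (τ) by inverse-transform sampling. Because P + integrates to one over [0, β), its cumulative distribution can be inverted in closed form; the inverse used below is our own elementary derivation from the propagator quoted above, not an expression taken from the paper, and is a minimal sketch of this single step.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_tau(beta, omega0, size=1):
    """Draw tau in [0, beta) from the symmetrized phonon propagator
    P+(tau) = omega0*cosh(omega0*(beta/2 - tau)) / (2*sinh(omega0*beta/2))
    by inverting its CDF:
        F(tau) = [sinh(omega0*beta/2) - sinh(omega0*(beta/2 - tau))] / (2*sinh(omega0*beta/2)).
    For very large omega0*beta the sinh/arcsinh should be evaluated in log space
    to avoid overflow."""
    u = rng.random(size)
    s = np.sinh(omega0*beta/2.0)
    return beta/2.0 - np.arcsinh((1.0 - 2.0*u)*s)/omega0

# Quick check: histogram of samples against P+(tau)
beta, omega0 = 20.0, 1.0
taus = sample_tau(beta, omega0, size=200_000)
hist, edges = np.histogram(taus, bins=50, range=(0, beta), density=True)
mid = 0.5*(edges[1:] + edges[:-1])
p_plus = omega0*np.cosh(omega0*(beta/2 - mid))/(2*np.sinh(omega0*beta/2))
print("max abs deviation from P+:", np.max(np.abs(hist - p_plus)))
```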
In the following section, we use the framework outlined in Ref. [78] to show that the superfluid stiffness of an electron-phonon model can still be calculated from the winding number. QMC estimator for the superfluid stiffness Consider a ring of length L threaded by a magnetic flux φ. At finite temperatures, the superfluid stiffness can be obtained from the free energy via [45] Because we consider a 1D system [47] and our simulations at β = 2L are essentially converged with respect to temperature, the measured values of ρ s are representative of the charge stiffness or Drude weight defined as [80] where E is the ground-state energy. Using F = − 1 β ln Z, the stiffness is directly related to the action of the SSH model. The magnetic flux can be incorporated by imposing twisted boundary conditionsĉ L+1 = e iφĉ 1 . The boundary term of the action reads Here, S L/R is the action of the hopping term (S4) crossing the boundary to the left/right, whereas S LL/RR corresponds to the bond-bond interaction (S3) with both hopping operators going to the left/right. The superfluid stiffness can then be calculated as The first expectation value is given by For each Monte Carlo configuration, expectation values of terms S a contained in the interaction vertex (S2) can be obtained by counting the number of vertices n a [78]. For the Monte Carlo average we then obtain S a = − n a . In the same way, the second term in Eq. (S10) becomes = − (S L + S R ) + 4 (S LL + S RR ) = (n L + n R ) + 4 (n LL + n RR ) (S12) and the third term is given by where we used S a S b = n a n b − δ ab n a . We get an additional shift for a = b that cancels the contribution of (S12). Our results are equivalent to calculating the winding number W = n B L − n B R where n B L/R counts the number of subvertices B b (τ ) crossing the boundary to the left/right. Here, n LL/RR contributes with a factor of 2 because each vertex contains two bond operators, whereas mixed contributions n LR drop out. Therefore, ρ s can be calculated in the same way for retarded interactions as for equal-time interactions, i.e., ρ s = L( W 2 − W 2 )/β [48]. Figure S1 shows D ρ (L) and K ρ (L) as a function of t/ω 0 for λ = 4, 6, 8, 12. For all couplings, the data are consistent with a metallic region at intermediate t/ω 0 . Whereas the apparent narrowing of this region between λ = 4 and λ = 6 matches the phase boundaries in Fig. 1, the theory discussed in the main text suggests that the BOW-CDW transition involves a gap closing and hence metallic behavior only at a single point. At this transition, the LL parameter K ρ < 1/2, in accordance with Fig. S1(e). Values K ρ < 1/2 can be reconciled with metallic behavior by assuming λ φ = 0 in Eq. (7) at the BOW-CDW critical point [38,39]. Additional data In contrast to Ginzburg-Landau theory, the BOW-CDW transition does not require fine-tuning of both t/ω 0 and λ. For a fixed λ, λ φ can be tuned to zero for a suitable value of t/ω 0 , giving rise to a line of critical points. Since K ρ < 1/2 at criticality, any nonzero λ φ yields long-range BOW or CDW order. The theory hence excludes an extended metallic region (as opposed to a critical line) with K ρ < 1/2, even though this is difficult to verify numerically. Previous work on the extended Hubbard model [81] suggests that a peak in K ρ (L) that narrows with increasing L indicates a continuous transition, whereas the absence of a peak or broadening with increasing L signals a first-order transition. 
Figures S1(e)-(h) hence support continuous behavior, in accordance with theoretical expectations [1,38]. Stiffness fits Standard BKT universality is predicted for the LL-BOW and LL-CDW transitions both in a general LL [44] and specifically for the frustrated XXZ chain [38]. A detailed RG analysis [82] gives the finite-size scaling forms which provide the leading corrections to Eq. (4). However, in the light of the observed nonuniversal jumps, functional RG predictions of K ρ < 1/2 at the LL-BOW transition [36], and K ρ < 1/2 at the BOW-CDW transition according to our data and theory [38], we determined the critical values in Fig. 1 using fits based on Eq. (4) with three parameters: D ρ (∞), g, and C. In contrast, g and D ρ (∞) can be computed exactly for the classical 2D XY model (see below), leaving only one free parameter. Specifically, for βt ∼ L y = ∞ (1D quantum chain at T = 0), g = 1 and D ρ = 2/π (D ρ = t/2) for the 2D XY (1D t-V ) model [55,83]. As expected and demonstrated below, multi-parameter fits provide less accurate, but nonetheless fully consistent, critical values (shallower minima, stronger dependence on the range of L) than single-parameter fits. This is particularly relevant for the analysis of quantum systems such as the SSH model, where the range and number of system sizes are limited. For the fits, we restricted the range of the jump to 0 < D ρ (∞) < 2t/π, using the known value of the noninteracting case. To discriminate between the logarithmic scaling at the critical point and the very weak finite-size dependence at weak coupling [see Fig. 4(a)], a nonzero lower bound g min was imposed. Otherwise, the choice g = 0 gives good fits throughout the LL phase and there would be no minimum of χ 2 ν at the critical point. The exact value of g min does not significantly affect the results and was chosen as 0.25. Finally, the allowed range of C was [0, ∞[. An important test case for the generalized, multi-parameter fit ansatz (4) was the LL-CDW transition of the t-V model, for which the critical value is known. We used the same range of system sizes as for the SSH model. Figures S2(a)-(c) give a comparison of results based on Eqs. (4), (S14), and (S15). All three fit functions yield very similar and hence compatible minima of χ 2 ν at the correct value λ = 2. Figures S2(d)-(f) are based on fits that exploit the known values g = 1 and D ρ (∞) = t/2. This additional information produces significantly sharper minima, in accordance with previous work on 2D XY models [54]. At the same time, the first-order fit functions (4) and (S14) do not fully capture the finite-size scaling on small system sizes, as manifested in χ 2 ν 1 even at λ = 2 in Figs. S2(d) and (e) for L ≥ 22 and L ≥ 30. Higher-order corrections are partially captured by varying g and D ρ (∞) [55], which explains the much better χ 2 ν for the same range of L in Figs. S2(a) and (b). For the more challenging case of t/ω 0 > 0, we focused on three-parameter fits based on Eqs. (4) and Eq. (S14). Within the present accuracy, the results are compatible with each other but slightly less systematic than for the t-V model. In particular, the fits become less robust upon increasing the smallest value of L due to a reduced number of degrees of freedom. A similar picture arises for a fixed λ = 4 in Figs. S3(e) and (f). For the present accuracy and range of system sizes, we cannot discriminate between the scaling forms (4), (S14), and (S15). 
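Eq. (4) itself is not reproduced in this excerpt; the sketch below therefore assumes a Weber-Minnhagen-type first-order form, D ρ (L) = D ρ (∞)[1 + g/(2(ln L + C))], which is consistent with the three fit parameters D ρ (∞), g and C named above but is an assumption rather than the published expression. It illustrates the fitting step only: evaluate the reduced chi-squared at each coupling and locate its minimum to estimate the critical point.

```python
import numpy as np
from scipy.optimize import curve_fit

def d_rho_ansatz(L, d_inf, g, C):
    # Assumed first-order BKT finite-size form with parameters D_rho(inf), g, C
    return d_inf*(1.0 + g/(2.0*(np.log(L) + C)))

def reduced_chi2(Ls, D, sigma,
                 bounds=([0.0, 0.25, 0.0], [2/np.pi, 10.0, np.inf])):
    """Fit D_rho(L) data (units t = 1) at one coupling and return chi^2 per degree
    of freedom together with the fitted parameters.  The default bounds mirror the
    restrictions quoted above: 0 < D_rho(inf) < 2t/pi, g >= g_min = 0.25, C >= 0;
    the upper bound on g is an illustrative choice."""
    popt, _ = curve_fit(d_rho_ansatz, Ls, D, sigma=sigma, p0=[0.5, 1.0, 1.0],
                        bounds=bounds, absolute_sigma=True)
    resid = (D - d_rho_ansatz(np.asarray(Ls), *popt))/np.asarray(sigma)
    nu = len(Ls) - len(popt)
    return np.sum(resid**2)/nu, popt

# Usage: evaluate reduced_chi2 for D_rho(L) measured at each lambda (or t/omega0)
# and take the coupling with the smallest chi2_nu as the estimate of the critical point.
```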
Nonuniversal stiffness jumps For the XY model, the stiffness jump and the constant g can be computed from a series for a given aspect ratio r = L x /L y [55,84]. For 1+1D quantum systems, r = cL/β, with c a model-dependent constant. For example, D ρ (∞) varies significantly as a function of L x /L y , covering the whole range from 2/π to 0 [84]. Similarly, g varies between 1 and ∞ as a function of r. In principle, a change of the range of the retarded interaction can mimic a change in r, leading to dependence of D ρ (∞) and g on the phonon frequency. There are several other known mechanisms for nonuniversal values of the stiffness jump. The bosonization relation D ρ = K ρ u [44] implies that, even if K ρ = 1/2 at a QPT, D ρ (∞) can change via the renormalized velocity u. For example, u increases with V in the t-V model [37] but decreases with λ in the Holstein model [79]. The stiffness can also be reduced by non-vortex excitations that are not captured by the standard BKT theory of the XY model [85].
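As a small practical complement to the Method section, the winding-number estimator derived there, ρ s = L(⟨W 2 ⟩ − ⟨W⟩ 2 )/β, can be evaluated directly from stored per-configuration winding numbers; the sketch below uses synthetic winding numbers in place of real QMC output and omits error estimation (binning or jackknife), which would be needed in practice.

```python
import numpy as np

def superfluid_stiffness(windings, L, beta):
    """rho_s = L*(<W^2> - <W>^2)/beta from per-configuration winding numbers
    W = n^B_L - n^B_R (bond operators crossing the boundary left minus right)."""
    w = np.asarray(windings, dtype=float)
    return L*(np.mean(w**2) - np.mean(w)**2)/beta

# Synthetic example: winding numbers drawn for illustration only (not real QMC data)
rng = np.random.default_rng(0)
L = 42
beta = 2*L
fake_windings = rng.normal(0.0, 1.3, size=10_000).round()
print(superfluid_stiffness(fake_windings, L, beta))
```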
Perception of echo delay is disrupted by small temporal misalignment of echo harmonics in bat sonar SUMMARY Echolocating big brown bats emit ultrasonic frequency-modulated (FM) biosonar sounds containing two prominent downward-sweeping harmonics (FM1 and FM2) and perceive target distance from echo delay. In naturally occurring echoes, FM1 and FM2 are delayed by the same amount. Even though echoes from targets located off-axis or far away are lowpass filtered, which weakens FM2 relative to FM1, their delays remain the same. We show here that misalignment of FM2 with FM1 by only 2.6 μs is sufficient to significantly disrupt acuity, which then persists for larger misalignments up to 300 μs. However, when FM2 is eliminated entirely rather than just misaligned, acuity is effectively restored. For naturally occurring, lowpass-filtered echoes, neuronal responses to weakened FM2 are retarded relative to FM1 because of amplitude-latency trading, which misaligns the harmonics in the bat's internal auditory representations. Electronically delaying FM2 relative to FM1 mimics the retarded neuronal responses for FM2 relative to FM1 caused by amplitude-latency trading. Echoes with either electronically or physiologically misaligned harmonics are not perceived as having a clearly defined delay. This virtual collapse of delay acuity may suppress interference from off-axis or distant clutter through degradation of delay images for clutter in contrast to sharp images for nearer, frontal targets. INTRODUCTION The biosonar sounds of big brown bats, Eptesicus fuscus (Chiroptera: Vespertilionidae), are frequency modulated (FM) and contain several harmonics, the most prominent being the first (FM1; sweeping down from ~55 to 22kHz) and the second (FM2; sweeping down from ~105 to 45kHz) (Saillant et al., 2007;Surlykke and Moss, 2000). The bat's signals contain these harmonics when target classification or echo discrimination takes place and during interception, when accurate localization is necessary for tracking (Ghose and Moss, 2003;Ghose and Moss, 2006;Simmons et al., 1995). Furthermore, recordings of sonar signals actually impinging on a suspended target confirm that the sounds contain these harmonics throughout aerial interception maneuvers (Saillant et al., 2007). The presence of harmonics widens the effective bandwidth of the transmitted signals (Simmons and Stein, 1980), which increases the sharpness of biosonar images Simmons et al., 2004) and enhances the bat's ability to perceive objects close to background clutter (Siemers and Schnitzler, 2004). There is new evidence that the harmonic structure of biosonar sounds may play a more specific role in echo processing than that which is evident from bandwidth considerations alone. Across the 22 to 105kHz band encompassing both FM1 and FM2, sound velocity is constant, so frequencies in both harmonics arrive from a target at the same delay. That is, harmonics in naturally occurring echoes are coherent, with an exact 2:1 ratio of frequencies (FM1:FM2) from moment to moment throughout the FM sweep. When trained to discriminate electronically generated 'normal' echoes in two-choice tests, big brown bats exhibit a delay acuity of approximately 50s (Moss and Schnitzler, 1995;Simmons et al., 1995). 
However, in new two-choice experiments with split-harmonic echoes, the bat's echo delay acuity is hugely disrupted -it deteriorates to approximately 800s -if the principal harmonics, FM1 and FM2, are deliberately misaligned in time by 300s or altered in relative strength so that neuronal responses to the harmonics are misaligned as a result of amplitude-latency trading Stamper et al., 2009). In the present paper we measure the size of the smallest temporal misalignment of FM2 relative to FM1 that causes the big brown bat's delay acuity to undergo the severe deterioration that is already known to occur for a relatively large 300s harmonic misalignment (Stamper et al., 2009). Using electronically generated split-harmonic stimuli, the time separation between FM2 and FM1 was decreased from 300 to 0s to determine the threshold for disruption of echodelay perception. There are two possibilities for how the disruption of acuity will be manifested. First, if the disruptive effect builds up gradually as the time interval between harmonics is increased, and even then only occurs for large harmonic misalignments (e.g. 300s), it may not be particularly significant under ordinary conditions of echolocation because acoustic propagation in air cannot dissociate harmonics even by much smaller amounts. (The velocity of sound across the bat's echolocation frequencies is constant.) Second, if the disruption of delay acuity occurs abruptly for temporal misalignments much smaller than 300s, it may prove to be an important aspect of echo processing, even under natural conditions. Although harmonic misalignment cannot occur acoustically, it could occur internal to the bat's auditory system. Physiological mechanisms that retard neuronal response latencies (amplitude-latency trading) will create a wide range of differentsized temporal misalignments of responses evoked by FM2 relative to FM1 because FM2 is normally attenuated more than FM1 in many echoes. Many bats regularly fly and forage amidst clutter, such as dense vegetation, in conditions where the ability to perceive the space immediately to the front is crucial not only to locate targets but also to be sure no obstacles occupy this space (Jensen et al., 2001;Moss et al., 2006;Schnitzler et al., 2003;Siemers and Schnitzler, 2000;Siemers and Schnitzler, 2004;Simmons, 2005;Simmons et al., 2001). Echoes from surrounding clutter typically arrive at the same time as echoes from potential targets or obstacles located in the frontal zone and have to be classified as clutter. In an experiment investigating what is perhaps the worst case of dense, range-extended clutter, big brown bats were flown in an obstacle array made of large numbers of plastic chains hanging in rows from the ceiling of a flight room (Hiryu et al., 2010). The chain array was dense enough to require the bat to emit sounds at short intervals to update its images, and deep enough that echoes of each broadcast continued to arrive after the next broadcast was sent out. When all of the echoes from the first broadcast had not yet returned but the second sound was emitted anyway, the bat encountered potential pulse-echo ambiguity, defined as difficulty in determining which echoes belong to which emissions. Big brown bats solve this problem -that is, disambiguate the echoes to associate them with the corresponding emissions -by making slight changes of several kilohertz in the ending FM1 frequency of successive broadcasts. 
This subtle difference between successive broadcasts (only a few percent of the total 80-kHz frequency band of the sounds) causes proportionately much larger changes in the timefrequency spectrograms of their echoes, which the bats evidently used to group the echoes with the correct broadcasts. From this result, the hypothesis for the experiments reported here is that big brown bats will be sensitive to disparities introduced into the time-frequency pattern of echo spectrograms. The simplest way to test whether bats are sensitive to small disparities in the spectrograms of echoes is to deliberately misalign the harmonics (separating FM1 and FM2 by controlled time intervals) and then to measure the size of the smallest misalignment that the bat can just perceive. The particular behavioral effect is, however, not whether bats can detect different degrees of misalignment, but the effect of misalignment on the acuity of echodelay discrimination. MATERIALS AND METHODS Subjects Four adult big brown bats (Eptesicus fuscus Palisot de Beauvois 1796; three males and one female) were wild-caught in Rhode Island with a permit from the state's Department of Environmental Management. They were housed in individual cages at 22-23°C and 50-60% relative humidity with free access to vitamin-enriched water. The day-night cycle of the room was reversed to 12h:12h dark:light so experiments could be conducted during the daytime. Bats' weights were maintained between 14.5 and 15.5g by adjusting the number of mealworms (Tenebrio larvae) that they were fed daily. Experiments complied with NIH Principles of Laboratory Animal Care (NIH publication no. 86-23, revised 1985). Experimental procedures were approved by the Brown University Animal Care and Use Committee. Procedure The split-harmonic echo-delay discrimination experiment was run in a 4.0ϫ3.0ϫ2.5m room with panels of sound-absorbent foam (SONEX ® , Pinta Acoustic, Minneapolis, MN, USA) lining the floor and walls. Light levels were kept dim (~10lux) during experimental trials to avoid disturbing the bats. The equipment for producing the electronic echo stimuli was located in an adjacent room. All trials were run double-blind; there were no visual cues available to the bats or to the trainers. The bats' task was to discriminate between electronically generated echoes that differed in arrival time while their harmonic structure was manipulated. Each bat was trained to sit on an elevated Y-platform (Fig.1) and broadcast its echolocation sounds toward two ultrasonic microphones (Brüel and Kjaer model 4138 1/8-inch condenser microphones, Naerum, Denmark), one located on each end of the platform. The signals picked up at these microphones were used to generate the echoes that served as stimuli for the experiments. Microphones were mounted 20cm away and separated by 40deg. Echolocation sounds emitted by the bat were picked up by the microphones, filtered and delayed electronically and then delivered back to the bat from small electrostatic loudspeakers 15mm in diameter (RCA, model 112343, Hauppauge, NY, USA). These speakers were mounted next to the microphones at the ends of both arms of the platform, 20cm away from the bat and 50deg apart. The stimuli delivered to the bat were delayed by 1160s through the combined acoustic paths from bat to microphone and from the loudspeaker to the bat, plus electronic delays that varied from 2000 to 2800s according to conditions. 
The bat was rewarded with a piece of mealworm for walking down the arm of the Y-platform corresponding to the loudspeaker that delivered the positive stimulus (S+; see below), which was presented on either the left or right side in a pseudorandomized sequence (Gellerman, 1933). If the bat made an incorrect response, i.e. to the negative stimulus (S-; see below), a broadband sound was made to signal to the bat that it made an error, and a 5-s pause in the experiment ensued. All trials were run using a double-blind procedure. Two experimenters were present while each bat was run: a trainer who handled the bat and was blind to the position of the correct choice, and a recorder who controlled which loudspeakers generated the stimuli and recorded the bat's response. The recorder observed the bat using a black and white CCD video camera (Supercircuits, Inc., Type 166 15-CB22-1, Austin, TX, USA) mounted on the ceiling above the Y-platform. Illumination for the camera was provided by two infrared LED panels (Supercircuits, Inc.) located on either side of the video camera. The recorder was able to monitor the bat's performance on a Sony digital 8-mm video Walkman ® recorder (New York, NY, USA) located behind the trainer and the Y-platform. Sets of 50 experimental trials were conducted each day, with a total of 3days for every condition. Each bat thus completed 150 trials over 3days for each stimulus condition, for which the percentage of correct responses was recorded. All data were analyzed using binomial probability tests, which are the appropriate way to process data from independent-trial, two-alternative forcedchoice experiments such as those used here (SPSS v. 16.0, Chicago, IL, USA). Electronic stimuli The two-channel electronic target-simulator system has been described fully elsewhere [see fig.4 in Stamper et al. (Stamper et al., 2009)]; no changes were made in the acoustic or electronic components for these new experiments. The stimuli were electronically generated echoes of the bat's own biosonar broadcasts. They differed in delay, which was regulated by digital electronic delay lines, and in harmonic structure, which was regulated by electronic filters in series with the delay lines (see below). The basic procedure was to present bats with both S+ and S-that were designed to incorporate split-harmonic echoes into an ordinary two-choice echo-delay discrimination protocol. For all conditions, S-consisted of a single-glint echo at a fixed delay. (A single-glint echo is a single reflected replica of the broadcast at a specific delay. A two-glint echo contains two reflected replicas at slightly different delays.) S+ for the experimental condition consisted of a split-harmonic single-glint echo with varying amounts of time separation between the harmonics. S+ for the control condition was a two-glint echo with the first glint at the same delay as FM1 in the split-harmonic condition and the second glint at the same delay as FM2. Thus, experimental data were collected using time separations between FM1 and FM2 that could be compared with data collected using the same time separations between two glints. Although twoglint echoes commonly occur (Simmons and Chen, 1989), splitharmonic echoes would not occur in nature because the velocity of sound is the same at all bat frequencies. Nevertheless, electronic manipulation of harmonic delays is used here as a tool to examine the bats' sensitivity to disruption of echo spectrograms on the timefrequency plane. 
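The 150-trial, two-alternative data described above were analyzed with binomial probability tests; the exact SPSS procedure is not specified in this excerpt, so the sketch below is only one reasonable reading of that analysis, comparing the error rate in the split-harmonic condition against the error rate observed in the two-glint control at the Bonferroni-corrected alpha of 0.007 used in the Results.

```python
from scipy.stats import binomtest

def compare_to_control(errors_exp, errors_ctrl, n_trials=150, alpha=0.007):
    """One reasonable reading of the analysis (not the exact published procedure):
    test whether the error rate under the split-harmonic condition exceeds the
    error rate observed under the two-glint control."""
    p_ctrl = errors_ctrl / n_trials
    res = binomtest(errors_exp, n_trials, p_ctrl, alternative='greater')
    return res.pvalue, res.pvalue < alpha

# Hypothetical counts, for illustration only
p_value, significant = compare_to_control(errors_exp=45, errors_ctrl=15)
print(f"p = {p_value:.2e}, significant at alpha = 0.007: {significant}")
```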
Bats perceive the delay for each glint in a twoglint echo for separations larger than 2s , but the errors bats make in this task are confined to a narrow span of delays close to the glint delays themselves. In split-harmonic experiments Stamper et al., 2009), bats make errors over a wide span of delays extending hundreds of microseconds away from each harmonic delay. For this reason, in the present study two-glint stimuli were used as control stimuli so that all the effects of electronic delays, filtering and simulation of echoes from loudspeakers could be combined to create data with which to assess the added effects of the harmonic misalignment. M. E. Bates and J. A. Simmons The stimuli are best described in terms of changes imposed on the signals between their being picked up by the microphones and their being broadcast back to the bat from the loudspeakers. After being picked up by the left and right microphones (Fig.1), the bats' signals passing through the left and right simulator channels were highpass (HP) filtered at 20kHz (Rockland model 442 variable electronic filters; 36dBoctave -1 ; San Diego, CA, USA) and lowpass (LP) filtered at 90kHz (Stewart model vbf-8 variable electronic filter; 48dBoctave -1 ; Beckenham, Kent, UK) to set the overall boundaries of the echo band delivered to the bat and to remove lowfrequency background noise. In each simulator channel, these filtered signals were then delayed by specially built digital delay lines (1.3-s delay steps, 10-bit digitizing accuracy from analog input to digital delay and back to analog output). There were two parallel electronic delay lines in each simulator channel (left and right). The output of one was HP filtered and the output of the other was LP filtered (Rockland model 753A variable electronic filters; 112dBoctave -1 ). The delayed HP and LP filtered signals in each simulator channel were then summed by an analog mixer, again LP filtered at 90kHz (Rockland model 442 variable electronic filters; 36dBoctave -1 ), and finally delivered to the left and right loudspeakers (RCA type 112343), which returned them to the bat as acoustic echoes. The stimuli in the experimental and control conditions differed in the delay settings of the two delay lines in each simulator channel and in the HP and LP filter frequencies. In the experimental condition (split-harmonic S+; Fig.1B), the LP filter in each simulator channel was set to 44kHz to select only FM1 from the signals whereas the HP filter was set to 66kHz to select only FM2. A narrow frequency band around 55kHz was removed (cut) entirely from both S+ and S-to eliminate frequencies at which FM1 and FM2 might overlap. For experimental S+, the overall delay of the filter-selected FM1 was set to 3160s (2000s electronic delay + 1160s sound-path delay, corresponding to a target distance of 54.5cm) whereas the delay of the filter-selected FM2 was increased above 3160s by varying amounts of electronic delay between 2300 and 2000s. The split-harmonic delay difference between FM1 and FM2 (t) thus varied over steps of 300, 100, 25, 5.2, 2.6, 1.3 or 0.0s. For experimental S-, the overall delays of the filter-selected FM1 and FM2 were set to the same value of 3960s (2800s electronic delay + 1160s sound-path delay, corresponding to a target distance of 68.3cm). Under ordinary conditions, this 800s delay difference is easily discriminated by big brown bats (Moss and Schnitzler, 1995;Simmons et al., 1995). 
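The split-harmonic stimuli described above are built by band-splitting the echo (FM1 below 44 kHz, FM2 above 66 kHz, with the narrow band around 55 kHz removed), delaying the FM2 branch, and summing. The hardware did this with analog Rockland filters and 1.3 µs-step digital delay lines, so the digital sketch below is only a schematic stand-in: the sweep parameters, filter orders and sample rate are illustrative, the 55 kHz cut is approximated by the gap between the filter edges, and delays much finer than one sample would require a higher sample rate or fractional-delay filtering.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, chirp

fs = 500_000                       # Hz, illustrative sample rate
t = np.arange(0, 0.002, 1/fs)      # 2 ms FM sweep
# Toy broadcast: FM1 sweeping ~55->22 kHz plus FM2 at exactly twice the frequency
fm1 = chirp(t, f0=55e3, f1=22e3, t1=t[-1], method='linear')
fm2 = 0.5*chirp(t, f0=110e3, f1=44e3, t1=t[-1], method='linear')
call = fm1 + fm2

lp = butter(8, 44e3, btype='low',  fs=fs, output='sos')   # selects FM1
hp = butter(8, 66e3, btype='high', fs=fs, output='sos')   # selects FM2

def split_harmonic_echo(signal, dt_us):
    """Delay the FM2 band by dt_us relative to the FM1 band and recombine."""
    band1 = sosfiltfilt(lp, signal)
    band2 = sosfiltfilt(hp, signal)
    shift = int(round(dt_us*1e-6*fs))            # whole-sample delay only
    band2 = np.concatenate([np.zeros(shift), band2])[:len(band2)]
    return band1 + band2

echo = split_harmonic_echo(call, dt_us=300)      # 300 us harmonic misalignment
```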
The crucial feature of the experimental condition is that both S+ and S-undergo the same electronic filtering; only the electronic delays of FM1 and FM2 differed, by varying amounts of t for S+ and by 0s for S-. In the control condition (two-glint S+), the LP filter in each simulator channel was set to 90kHz whereas the HP filter was set to 15kHz, to allow both FM1 and FM2 to pass unobstructed into the echoes returned to the bat. For control S+, the overall delay of the first glint was set to 3160s (2000s electronic delay + 1160s sound-path delay) whereas the delay of the second glint was increased to >3160s by varying amounts of electronic delay between 2300 and 2000s. The two-glint delay difference (t) thus varied over steps of 300, 100, 25, 5.2, 2.6, 1.3 or 0.0s. For control S-, the overall delay of the two glints was set to the same value of 3960s (2800s electronic delay + 1160s sound-path delay). The crucial feature of the control condition is that both S+ and S-again undergo the same electronic filtering; only the electronic delays of the two glints differed, by varying amounts of t for S+ and by 0s for S-. The time differences between FM2 and FM1 (t) for the harmonic split were the same as those used for the two-glint separation, so that comparison of the bats' performance in split-harmonic and twoglint conditions would reveal the effects of the harmonic delay separation. Besides the varying time differences for FM2 relative to FM1, or between the first and second glints, the only difference between the experimental and control conditions was the presence of the harmonic-split filter settings (i.e. the narrow filtered region around 55kHz removed from the split-harmonic stimuli; see Fig.1B). The effects of this 55kHz cut can be evaluated by comparing the performance of the bats under experimental and control conditions at t0s. If their performance is the same under both conditions, then the split-harmonic filters had no effect of their own beyond the delay differences between FM2 and FM1. From bat sound at the microphone to its echo at the bat's location on the platform, the total gain of the target simulator system was approximately -35dB for S+ and -40dB for S-. The big brown bat's sensitivity for echo detection is not a fixed quantity; thresholds for echo detection are high immediately following the broadcast and decline by approximately 11dB per doubling of delay thereafter for at least 6-10ms (Kick and Simmons, 1984). The 5dB attenuation of S-relative to S+ in the present study compensates for the 5dB increased sensitivity of the bat for echoes at the longer delay of 3960s compared with the shorter delay of 3160s. Green 300s two-glint curve shows percent errors in delaydiscrimination tests with S+ as a two-glint echo with 300s glint separation and S-as a one-glint echo at different delays relative to both glints in S+. Discrete error peaks at 0 and 300s show that bats separately perceive the delay of each reflection in the two-glint echo. The gray shaded area shows percent errors with S+ as a split-harmonic echo having FM2 at a 300s longer delay than FM1. Again, S-is a one-glint echo presented at different delays to probe for perceived delays in S+. Error peaks at 0 and 300s show that the bats separately perceive the delay of each harmonic (FM1 at 0s, FM2 at 300s), but these peaks appear as modulations on a broad pedestal of errors (gray area) stretching from 0 to >800s. 
The pedestal of errors illustrates a blurred or defocused delay image, whereas the well-defined peaks in the two-glint curve (green) illustrate a focused image that resolves the two glints. In the present experiments, the bats' performance at a normally easily delay-discriminated difference of 800μs between S+ and S- is used as the index of defocusing while the harmonic split (Δt) is decreased stepwise from 300μs to 0μs (red). Significant defocusing of the bats' delay image begins for harmonic offsets as small as 2.6μs and persists for offsets up to 300μs. If FM2 is not delayed but removed entirely, performance returns to normal for focused images. (B) Mean (±1 s.d.) performance of four bats on an 800μs delay-discrimination task in a split-harmonic experiment (red) and in two-glint control (blue). ns, P>0.007; *, statistically significant differences (P<0.007) between the two-glint control and split-harmonic experimental conditions based on binomial tests. Fig.2A contains the green two-glint performance curve that shows percent errors made on an echo-delay discrimination task where S+ is a two-glint echo at a delay of 3.2ms (300-μs glint separation) and S- is a single glint echo at various delays from 3.1 to 3.9ms (mean performance on 60 trials per bat, N=2 bats). [These data are from previous experiments (Simmons et al., 1990b); that they were not plotted in the original paper is solely due to limitations of space.] In that experiment, the delay of S- was moved along the horizontal S-/S+ delay difference axis (Fig.2A) to probe for locations (delays of S-) where bats would perceive S- and either glint of S+ as having the same delay. The discrete error peaks at 0 and 300μs (green curve) show that the bats separately perceive the delay of each reflection in the two-glint S+. Results from split-harmonic electronic echoes using an S+ with a 300μs offset of FM2 relative to FM1 are plotted in black (Fig.2A) [150 trials per bat, N=4 bats; re-plotted from (Stamper et al., 2009)]. For the S+ in that experiment, FM1 delay was 3160μs and FM2 delay was 3460μs. Again, S- is a one-glint echo presented at different delays to probe for perceived delays associated with each of the harmonics in S+. This black error curve has prominent error peaks at delays of S- that correspond to the delay of either FM1 or FM2 in S+ (at 0 and 300μs), indicating that the bats perceived the delay of each harmonic separately. However, instead of error peaks sharply localized to the immediate region of the delay for FM1 or FM2, as occurs for the two-glint curve (green), there is a broad pedestal of errors (gray shaded area) surrounding these peaks and extending continuously from the arrival time of FM1 to longer delays past 800μs. This pedestal of errors extends more than 800μs past the arrival time of FM2, which itself is only 300μs after FM1. The pedestal is referred to here as a blurred or defocused delay image because the bats' performance reveals a lack of sharply perceived delay for the 300μs split-harmonic stimuli. By contrast, the well-defined peaks in the two-glint (green) curve illustrate a focused image that resolves the two glints as having separate delays. The present experiment with a variable offset of FM2 relative to FM1 addresses the sensitivity of the perceptual mechanism underlying the defocusing effect illustrated by the gray area in Fig.2A: how small a misalignment of FM2 from FM1 is sufficient to initiate the pedestal of errors?
A stimulus delay difference of 800s is used as an index of blurring because it is remote from any of the ordinary effects of glints separated by 300s (green curve). In Fig.2A,B, the red and blue curves (split-harmonic persistence and two-glint control) are the mean split-harmonic experimental (red) and two-glint control (blue) data (mean ± s.d. percent errors from 150 trials per bat, N4 bats) from the present experiment. The mean results are plotted on Fig.2A on the linear scale of the time separation (t) of FM2 from FM1 from 300 to 0s. An additional data point is shown for the condition of no FM2 (i.e. only FM1) . These same data are replotted in Fig.2B on a logarithmic scale for the separation (t) of FM2 from FM1 to spread out the abrupt onset of the loss in performance, or defocusing effect, when FM2 is delayed by only a few microseconds relative to FM1. A Bonferroni correction resulted in a significant alpha level of 0.007. There was a significant decrement in performance (increase in percentage errors) for the experimental echoes compared with the control echoes beginning with the 2.6s harmonic separation. That is, a significant defocusing effect already is evident at the 2.6s harmonic separation, and it increases further as harmonic separation increases to 25s and greater. This defocusing effect then persists as t increases to 300s. It was only at the 0 and 1.3s harmonic separation that the mean performance of the bats was statistically indistinguishable between conditions (Fig.2B). Comparison of experimental and control performance at the stimulus condition of M. E. Bates and J. A. Simmons t0s tests for the effects of the split-harmonic filters themselves (55kHz cut; Fig.1B); the statistical similarity of performance indicates that the experimental and control results are not influenced by a bias associated with these filter settings. When FM2 was removed entirely from S+ echoes, instead of being delayed relative to FM1, mean performance then returned to being statistically indistinguishable between control and experimental conditions (Fig.2B). The delay acuity of the bats did not seem to be compromised when only FM1 was present in returning echoes. To summarize, when the temporal relationship between FM1 and FM2 is gradually disrupted through shifting the arrival time of the two harmonics by t, the bats abruptly lose acuity for performing the nominally easy 800s delay-discrimination task through a process that amounts to defocusing of the image (Fig.2A). Misalignments of FM2 with FM1 as small as 2.6s cause the occurrence of significant defocusing (Fig.2B). DISCUSSION The results described above show that big brown bats perceive echoes with misaligned harmonics -even echoes misaligned by as little as 2.6s -as having a poorly defined delay. This effect is described here as defocusing of the bat's delay image. Use of the term defocusing is justified by the wide spread of poor performance, extending out to 800s longer than the objective arrival time of the echoes. By contrast, two-glint echoes with glint separations (t) from 0 to 300s (equal to the harmonic separations in Fig.2) are perceived with very few errors in the 800s discrimination task because 800s is so remote from the delay of either the first or the second glint (green curve; Fig.2A). Most importantly, two-glint echoes with t from 10 to 300s are perceived as having two glints at their corresponding delays (Simmons et al., 1990a;Simmons et al., 1995;Simmons et al., 1998). 
The green two-glint control curve in Fig.2A illustrates how the two-glint image is in focus; i.e. the glints are registered separately at their correct delays, with a narrow spread of errors (width of green error peaks) over approximately ±50μs. The intervening space does not contain many errors and keeps the peaks separate. Several studies involving a different approach from using electronically generated echoes have reported decrements in delay-discrimination performance when the time-frequency structure of echoes is altered in ways other than temporal misalignment of harmonics (Masters and Jacobs, 1989; Masters and Raver, 2000; Surlykke, 1992). These experiments used an electronically generated model echo - a single transmitted waveform selected from a series of each bat's sounds and stored digitally to replace actual echoes of individual broadcasts. This model echo is triggered by the bat's sounds to arrive at a fixed delay for a delay-discrimination task (two-choice or go/no-go) (see Moss and Schnitzler, 1995). In various model echo tests, the parameters of echo duration, amplitude of FM2 relative to FM1, signal frequency and the curvature of FM sweeps (adjusted by changing the decay constant of an exponential curve used as the modulation function) were changed to assess the effect on delay-discrimination acuity. Besides the dramatic loss in acuity caused when echoes were reversed in time so that they swept upward in frequency instead of downward (Masters and Jacobs, 1989; Surlykke, 1992), only changes in the curvature of the FM sweeps yielded changes in performance that could be detected with the discrimination procedure (Masters and Raver, 2000). In the context of the present results, this is the only manipulation that led to differences in the spectrograms of echoes relative to broadcasts comparable to the differences produced in split-harmonic echoes (Fig.1B). The diagram in Fig.3 addresses implications of the defocusing effect caused by misaligning harmonics in echoes. A target represented by split-harmonic echoes would be perceived as being badly smeared along its echo delay axis, or as having a defocused distance and shape. By contrast, a target represented by echoes with correctly aligned harmonics, or echoes that contain FM1 alone, would be perceived as having a well-defined distance and shape. That is, its image would be in focus. The bat's sharp threshold sensitivity (2.6μs), along with the persistence of defocusing for all values of harmonic separation from 2.6 to 300μs, implies that some auditory mechanism detects harmonic misalignment of any size and then imposes strong defocusing on the resulting images. The virtual disappearance of defocusing when FM2 is removed completely, in contrast to the persistence of defocusing for various harmonic misalignments, argues for a zone of time immediately following the nominal, or correct, time-of-occurrence of the FM2 sweep relative to the FM1 sweep, within which the anomalous presence of the FM2 sweep initiates the defocusing effect. That is, the correct alignment of FM2 with FM1 is not being detected; otherwise, removal of FM2 would cause defocusing to occur. Instead, the misalignment of FM2 over a range from 2.6μs to at least 300μs is being detected. The crucial zone of time for misalignment begins almost immediately (i.e. 2.6μs) following the nominal position of FM2 and extends to at least 300μs later without serious deterioration of the defocusing effect.
This pattern of results suggests that, when responses to FM2 are allowed to shift to a later time than normal, they activate neuronal inhibition that initiates a cascade of responses designed to register the shape of targets from echo interference spectra (see Sanderson and Simmons, 2000; Sanderson and Simmons, 2002). What is the significance under natural conditions of defocusing caused by temporal misalignment of echo harmonics if the atmosphere is a nondispersive medium for propagation of sound, especially over the short distances of up to 10m that are relevant for echolocating bats? Biosonar in air is dominated acoustically not by differences in the velocity of sound across frequencies but by LP effects related to propagation and directional beaming. First, atmospheric absorption attenuates higher frequencies relative to lower frequencies in proportion to target distance. The degree of LP filtering present in the outward-propagating sound increases with distance, and then it doubles over the echo's return path (Lawrence and Simmons, 1982). For a target at a distance of 3m, for example, frequencies of 50-100kHz in FM2 would undergo excess attenuation from atmospheric absorption by 9 to 18dB whereas frequencies of 25-50kHz in FM1 would be attenuated by <4 to 9dB. Consequently, FM2 would be attenuated by approximately 5 to 9dB more than FM1. Second, the bat's broadcasts are projected in a beam towards the front, with sharper directionality at high frequencies than at low frequencies (Ghose and Moss, 2003; Hartley and Suthers, 1989). The greater the eccentricity, or off-axis position, of the target, the more the incident sound undergoes LP filtering before it impinges on the target. The degree of LP filtering in echoes thus increases with both distance and off-axis position. In relation to harmonic structure, echoes from targets located straight ahead and at short range reach the bat's ears with FM1 and FM2 largely at the same strength, but as target range or off-axis location increases, FM2 is disproportionately attenuated in echoes relative to FM1. The opposite effect - HP filtering to remove FM1 so that echoes contain primarily FM2 - does not occur under natural conditions because the dimensions of flying insects and virtually all other objects, such as leaves, branches or the ground, are larger in proportion to the incident wavelengths than the Rayleigh region for scattering (Fenton et al., 1998; Houston et al., 2004; Moss and Zagaeski, 1994; Simmons and Chen, 1989). That is, natural targets create echoes in an acoustic regime where the full bandwidth of incident sounds returns at an amplitude related to the target's cross-sectional area (the target's acoustic 'size'). Insect-sized targets affect echo spectra, but not as global LP filtering. The target's contribution instead consists of local notches in the spectrum due to reinforcement and cancellation caused by interference between overlapping reflections from multiple parts of the target, or glints (the target's acoustic 'shape') (Kober and Schnitzler, 1990; Moss and Zagaeski, 1994; Simmons and Chen, 1989).
Behavioral, neurophysiological and computational studies have identified a process, called spectrogram correlation and transformation (SCAT), that has been hypothesized to explain how big brown bats locate the frequencies of these interference notches and reconstruct the corresponding delay differences between different parts of the target (Matsuo et al., 2004; Neretti et al., 2003; Peremans and Hallam, 1998; Saillant et al., 1993; Sanderson and Simmons, 2000; Sanderson and Simmons, 2002; Sanderson and Simmons, 2005; Simmons et al., 1995; Simmons et al., 1998). Using a combination of overall echo delay and the echo interference spectrum as cues, these bats recreate for each echo an image depicting the object as a small number of glints on a perceptual axis of distance. In contrast to specific shape-related spectral signatures caused by interference between glints, the presence of LP filtering can reliably be exploited to distinguish all echoes reflected by off-axis or distant clutter from echoes reflected by targets of interest located to the immediate front of the bat. How is this distinction represented in the bat's auditory system? Neurons at successive stages (cochlear nucleus to auditory cortex) in the big brown bat's auditory system are tuned to different frequencies across the echolocation band of roughly 20-100kHz (Dear et al., 1993a; Dear et al., 1993b; Ferragamo et al., 1998; Haplea et al., 1994; Jen et al., 1989; Ma and Suga, 2008; Pollak et al., 1977; Sanderson and Simmons, 2000; Sanderson and Simmons, 2002). These studies document a basic echo-delay processing scheme that uses dispersed response latencies as delay lines in the inferior colliculus followed by coincidence-detecting neurons in the auditory cortex. Delay-tuned neurons created at the auditory cortex by this processing cascade respond at a wide range of latencies. The only extraneous influence on the accuracy of echo-delay coding is amplitude-latency trading, a consequence of increased response latencies for echoes that are reduced in amplitude (Bodenhamer and Pollak, 1981) [in big brown bats, the trading ratio is approximately -16μs per dB (Burkhard and Moss, 1994; Ma and Suga, 2008; Simmons et al., 1990a; Simmons et al., 1990b)]. The majority of delay-tuned neurons in the cortex are selective not only for a particular echo delay but also for the presence of an interference notch at their tuned frequencies. That is, although inferior colliculus neurons fail to respond if an interference notch is present at their tuned frequency (Sanderson and Simmons, 2000; Sanderson and Simmons, 2005), cortical neurons only respond if there is a notch at their tuned frequency, an inversion of the neuronal representation that depends on the evocation of inhibition by the 'missing' responses in the inferior colliculus (Sanderson and Simmons, 2002). Individual delay-line cells in the inferior colliculus produce an average of only one spike for each broadcast or echo (Sanderson and Simmons, 2000; Sanderson and Simmons, 2005), so if a particular frequency is reduced in strength in an echo due to interference, the single spikes evoked in neurons tuned to that frequency are retarded in latency by amplitude-latency trading (Burkhard and Moss, 1994; Ma and Suga, 2008; Simmons et al., 1990b). When some frequencies in echoes are reduced in amplitude by interference, the spikes representing these frequencies thus shift to longer latencies up to several hundred microseconds following the latencies at which they would have responded if no reduction in amplitude had occurred.
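To make the amplitude-latency trading arithmetic concrete, here is a minimal sketch; the trading ratio of roughly 16μs per dB and the 5-9dB differential attenuation of FM2 come from the text above, while the function names and structure are illustrative only.

```python
TRADING_RATIO_US_PER_DB = 16.0  # approximate latency increase per dB of attenuation (from the text)


def latency_shift_us(attenuation_db: float) -> float:
    """Latency retardation (microseconds) produced by a given echo attenuation (dB)."""
    return TRADING_RATIO_US_PER_DB * attenuation_db


if __name__ == "__main__":
    # FM2 attenuated roughly 5 to 9 dB more than FM1 for a target at about 3 m (see above).
    for extra_attenuation in (5.0, 9.0):
        print(f"{extra_attenuation:.0f} dB -> {latency_shift_us(extra_attenuation):.0f} us latency shift")
    # Roughly 80-145 us, consistent with the 80-140 us range quoted in the Discussion.
```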
The resulting temporal misalignment of responses at specific frequencies in the inferior colliculus may initiate the inhibitory process that inverts the notch representation and displays the glints in the cortex (Sanderson and Simmons, 2002). The size of the time window containing neuronal responses that have retarded latencies due to interference notches thus coincides with the time window (Fig.3) into which the shift of FM2 causes defocusing of the delay image. We speculate that the time-sensitive inhibitory process the big brown bat uses for registering individual interference notches is also activated by intrusion of responses evoked by electronically delayed FM2. However, although interference notches are distributed systematically across several discrete frequencies in echoes according to the time separation of the glint reflections (Moss and Schnitzler, 1995; Simmons et al., 1995), we hypothesize that the shift of the entire range of frequencies in FM2 into the inhibitory time window causes many different notch-selective neurons to be activated simultaneously, in effect 'turning on' a large number of perceived glints and defocusing the image. Delay separation of FM2 from FM1 does not occur acoustically in natural echoes. However, the decline in amplitude of FM2 relative to FM1 due to off-axis location or long target range leads to greater amplitude-latency trading of responses evoked by FM2. In effect, the auditory representation of FM2 is split from the auditory representation of FM1. For example, the LP effect described above for echoes arriving from a target at a range of 3m results in attenuation of frequencies in FM2 by 5 to 9dB relative to frequencies in FM1. The corresponding retardation of responses to FM2 due to amplitude-latency trading (at 16μs per dB of attenuation) is 80 to 140μs, which is solidly within the time window for defocusing of the bat's delay image (Fig.2B). The bat's delay images for targets located straight ahead and at short range will have high delay accuracy and resolution because the only spectral effect is interference due to the glint structure of targets, not a global LP effect. By contrast, images of background objects, or clutter, located farther away or off to the side will be rendered with very low delay acuity due to defocusing caused by LP filtering, which retards the auditory representation of FM2 relative to FM1. The imaging process in biosonar thus may be analogous to high-resolution foveal vision and low-resolution peripheral vision, except that objects are rendered in depth, not in direction.
8,843.8
2011-02-01T00:00:00.000
[ "Biology", "Physics" ]
The association between insurance status and in-hospital mortality on the public medical wards of a Kenyan referral hospital Background Observational data in the United States suggests that those without health insurance have a higher mortality and worse health outcomes. A linkage between insurance coverage and outcomes in hospitalized patients has yet to be demonstrated in resource-poor settings. Methods To determine whether uninsured patients admitted to the public medical wards at a Kenyan referral hospital have any difference in in-hospital mortality rates compared to patients with insurance, we performed a retrospective observational study of all inpatients discharged from the public medical wards at Moi Teaching and Referral Hospital in Eldoret, Kenya, over a 3-month study period from October through December 2012. The primary outcome of interest was in-hospital death, and the primary explanatory variable of interest was health insurance status. Results During the study period, 201 (21.3%) of 956 patients discharged had insurance. The National Hospital Insurance Fund was the only insurance scheme noted. Overall, 211 patients (22.1%) died. The proportion who died was greater among the uninsured compared to the insured (24.7% vs. 11.4%, Chi-square=15.6, p<0.001). This equates to an absolute risk reduction of 13.3% (95% CI 7.9–18.7%) and a relative risk reduction of 53.8% (95% CI 30.8–69.2%) of in-hospital mortality with insurance. After adjusting for comorbid illness, employment status, age, HIV status, and gender, the association between insurance status and mortality remained statistically significant (adjusted odds ratio (AOR)=0.40, 95% CI 0.24–0.66) and similar in magnitude to the association between HIV status and mortality (AOR=2.45, 95% CI 1.56–3.86). Conclusions Among adult patients hospitalized in a public referral hospital in Kenya, insurance coverage was associated with decreased in-hospital mortality. This association was comparable to the relationship between HIV and mortality. Extension of insurance coverage may yield substantial benefits for population health. In 2005, the 58th World Health Assembly called for health systems to move toward universal coverage and social health insurance in order 'to guarantee access to necessary services while providing protection against financial risk' (1). Hailed as one of the great achievements in financing healthcare in the past century, insurance-based financing represented a move away from direct out-of-pocket payments. Today most industrial countries provide universal access to healthcare through a combination of social insurance, private insurance, general revenues, and user charges (2). Risk-sharing mechanisms such as social insurance provide resources to access healthcare and to promote health while protecting individuals and households against the potentially devastating direct financial costs of illness. In Kenya, the National Hospital Insurance Fund (NHIF) is the primary provider of health insurance with a mandate to enable all Kenyans to access quality and affordable healthcare services (3–5). The NHIF has existed since its establishment by Parliament in 1966, shortly after Kenya's independence, and was restructured by the enactment of the NHIF Act of 1998. Membership is compulsory for all salaried workers in the formal sector (both public and private) at a monthly cost of 30–320 KSH (US$0.34–3.65) depending on salary. Membership is also available on a voluntary basis to informal sector workers at a cost of 160 KSH (US$1.83) per month.
The benefits package includes comprehensive inpatient medical coverage including fees of up to 396,000 KSH (US$4,521) per year for the contributors as well as their dependents. The type of healthcare facility determines co-payments and cost sharing. Patients have no co-payment obligations when receiving inpatient care at government facilities, but are often responsible for some co-payments at private hospitals. By 2011, approximately 2.7 million Kenyans were members of the NHIF, including 2.1 million formal sector employees. With the insurance benefits extending to approximately another 6 million dependents, including spouses, children, and disabled family members, it is estimated that approximately 20% of Kenya's population had insurance coverage, with only a small percentage obtaining coverage through private, community-based, or employer-based programs (5,6). While observational data from past decades in the United States suggest that those without insurance generally have higher mortality and worse health outcomes (7–17), there is no published literature examining health outcomes associated with insurance coverage in this context. Past work in East Africa examining Rwanda's community-based insurance program Mutuelles de santé has demonstrated improved medical care utilization and financial risk protection among insured households (18,19). Similar findings have been demonstrated in the West African nations of Burkina Faso, Ghana, Senegal, and Mali (20,21). It is hoped that improved access and utilization will promote improved health outcomes, and the growth of insurance coverage in Rwanda has coincided with significant decreases in under-five mortality, infant mortality, and maternal mortality (19). Yet, unlike in the United States, the relationship between insurance coverage and health outcomes in hospitalized patients has yet to be firmly demonstrated in resource-poor settings such as Kenya. The objective of this study was to determine whether uninsured patients admitted to the public medical wards at a Kenyan referral hospital have any difference in in-hospital mortality rates from those patients with insurance. Study design and setting This study was a retrospective observational study of patients admitted to the public medical wards at Moi Teaching and Referral Hospital (MTRH) for the 3-month period from October to December 2012. Located in the city of Eldoret, MTRH is an approximately 750-bed national referral hospital for western Kenya. The public medical wards admit and discharge approximately 350 patients monthly. Those in the lowest socioeconomic strata largely populate these wards, as wealthier patients generally choose private wards or hospitals (22). Eldoret is located in Rift Valley Province, where health insurance coverage was 11.9% in 2007, primarily due to coverage by NHIF (6). Across Kenya, approximately 45.9% of the population lives below the poverty line, with an unemployment rate of approximately 40% (23,24). The study was approved by the Institutional Research Ethics Committee at Moi University College of Health Sciences and was submitted and determined to be exempt from review by the institutional review board at Indiana University. Population All patients discharged from the public medical wards at MTRH between October 1, 2012, and December 31, 2012, were eligible for inclusion in the study. Data collection Data were retrieved from MTRH's medical record database.
Following each patient's death or departure from the hospital, the Medical Records Department examines the patient's paper file, recording demographic and clinical information. The primary outcome of interest was in-hospital death. The primary explanatory variable of interest was health insurance status. In addition to these variables, we also abstracted information on primary and secondary diagnoses (ICD-10 codes), HIV status, age, gender, occupation, and length of hospitalization as determined from admission to medical discharge. All patient identifiers were absent from abstracted data utilized for analysis for this study. Statistical analysis The bivariate association between insurance status and mortality was examined using a Chi-square test. We then fit a logistic regression model to the data with in-hospital mortality as the dependent variable and insurance status as the primary explanatory variable. We adjusted this estimate for age, gender, employment status, presence of a secondary comorbid illness, and HIV status (as gathered from primary or secondary diagnostic codes) in order to calculate an adjusted odds ratio (AOR). All analyses were conducted with IBM SPSS Statistics software (Version 21; IBM Corp., Armonk, New York). Study population Over the 3-month period from October 1, 2012, to December 31, 2012, a total of 956 patients were discharged from the public medical wards at MTRH. Overall, 21.3% of discharged patients had insurance through NHIF. There were no patients from the medical wards that were recorded as having private or other insurance coverage. Those patients with NHIF were significantly more likely to be employed outside the household and to have a comorbid illness secondary to the primary reason for hospitalization (Table 1). Otherwise, the populations did not differ significantly with respect to age, gender distribution, referral status, length of stay, and HIV prevalence. Outcome The overall mortality rate of patients discharged from the medical wards during the study period was 22.1% (95% CI 19.5–24.7%). The mortality rate for uninsured patients was 24.7% while the rate for insured patients was 11.4% (Fig. 1). This equates to an absolute risk reduction of 13.3% (95% CI 7.9–18.7%) and a relative risk reduction of 53.8% (95% CI 30.8–69.2%) for mortality. Moreover, a Chi-square test for independence demonstrated a Chi-square value of 15.6 (p<0.001), supporting a significant bivariate association between insurance status and mortality in this population. In standard multiple logistic regression examining the association between insurance status, age, gender, employment, presence of a secondary comorbid illness, HIV status, and mortality, the model containing all predictors was statistically significant (p<0.001). Discussion Analyzing data from 956 patients discharged over a 3-month period at a Kenyan referral hospital, our study demonstrates that individuals with health insurance had significantly lower all-cause in-hospital mortality as compared with patients without insurance. The in-hospital mortality rate for insured patients was approximately 54% lower than for uninsured patients. This remained statistically significant even after controlling for age, gender, employment, presence of a secondary comorbid illness, and HIV status. Moreover, after controlling for these confounders, insurance coverage had an association of comparable magnitude to that of HIV with in-hospital mortality for adult medical inpatients.
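A minimal sketch of the unadjusted effect measures and the adjusted model reported above; it assumes a patient-level table with the listed covariates (column names are illustrative, not from the study) and uses Python's scipy/statsmodels rather than the SPSS software actually used.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

# Hypothetical patient-level data frame; column names are illustrative only.
# Expected columns: died (0/1), insured (0/1), age, male (0/1), employed (0/1), comorbidity (0/1), hiv (0/1)


def unadjusted_measures(df: pd.DataFrame) -> dict:
    p_uninsured = df.loc[df.insured == 0, "died"].mean()  # 0.247 in the study
    p_insured = df.loc[df.insured == 1, "died"].mean()    # 0.114 in the study
    arr = p_uninsured - p_insured                          # absolute risk reduction (0.133 reported)
    rrr = arr / p_uninsured                                # relative risk reduction (0.538 reported)
    chi2, p, _, _ = chi2_contingency(pd.crosstab(df.insured, df.died))
    return {"ARR": arr, "RRR": rrr, "chi2": chi2, "p": p}


def adjusted_odds_ratios(df: pd.DataFrame) -> pd.Series:
    # Logistic regression mirroring the covariates listed in the Methods above.
    model = smf.logit("died ~ insured + age + male + employed + comorbidity + hiv", data=df).fit()
    return np.exp(model.params)  # adjusted odds ratios (the study reports AOR ~0.40 for insurance)
```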
The significant difference in mortality between insured and uninsured inpatients may reflect unmeasured differences in the populations as well as differences in healthcare provided for hospitalized patients. In this population, insured patients were more likely to have a comorbid secondary or chronic illness. These diagnoses largely represent chronic medical problems such as hypertension or diabetes. These problems could be more represented in the insured population as this population is better able to access care for diagnosis and management than the uninsured. Yet, as the NHIF insurance scheme only covers inpatient hospitalizations currently, any difference in outpatient care provision for such conditions would be due more to differences in socioeconomic status, education, and health seeking behaviors rather than insurance coverage itself. Furthermore, past work has demonstrated the rate of hospital admission among Kenyans with insurance was approximately 48 admissions per 1,000 per year, compared to 28 admissions per 1,000 per year among the uninsured (6). While representing a small fraction of the entire population, this increased admission rate amongst the insured may reflect increased access and earlier careseeking behavior. Insured patients, who may be more educated and live in urban areas, are able to recognize appropriate conditions warranting healthcare and access the care needed. The uninsured alternatively may delay accessing healthcare due to barriers in cost and geography as well as a possible lack of recognition of need. Ministry of Health data from 2007 demonstrated that lack of money along with self-medicating were the most common reasons for ill patients not seeking care (6). Additionally, another study suggested that the richest individuals in Kenya are nearly three times more likely than the poorest to access healthcare when needed (25). Moreover, the poorest not only utilized fewer services when needed but also reported more frequently being unable to complete the whole course of treatment recommended (25). Beyond pre-existing differences prior to admission, the differences in mortality between insured and uninsured inpatients may also be due to differences in care once admitted. At MTRH, payments are required of patients prior to receiving certain diagnostic tests, including echocardiograms, x-rays, and CT scans. Thus, there can be delays in vital diagnostic tests and management decisions as patients and their families seek funds. There are waiver mechanisms for patients unable to pay and for emergently needed tests, but even the waiver process can take days. Patients with insurance, on the other hand, are able to get these tests more expediently as the hospital will perform the test with insurance reimbursement upon discharge. Overall, healthcare providers are able to diagnose and treat patients with insurance more expeditiously without as many barriers to care. This difference in care provision may impact the ultimate outcomes of the patients and partially explain some of the differences in mortality. Our study had a number of limitations that need to be highlighted. First, the dataset did not contain information with respect to socioeconomic status and education level for which insurance status may be acting as a surrogate marker. 
Moreover, our dataset did not allow for classification of residence (urban versus rural) or a means of calculating distance from residence to hospital in order to identify if any of these factors may have contributed to the differences seen. Generally, past studies in Kenya have shown that patients with insurance are richer, more educated, more likely to be employed, and more likely to have an urban residence (6, 25–27). All of these factors may contribute to increased access to care and improved outcomes; however, beyond employment status, our dataset did not provide other measures of patients' socioeconomic status, education level, or place of residence for comparison. Also, the data collected did not include markers of the severity of illness upon presentation or capture any delays in care provision either before or during the hospitalization that may have been due to lack of money and insurance. Conclusion Risk-sharing mechanisms such as social insurance provide resources to access healthcare and to promote health while protecting individuals and households against the potentially devastating direct financial costs of illness. While observational data from past decades in the United States suggest that those without insurance generally have higher mortality and worse health outcomes (7–17), our study demonstrates that a similar association may be seen in adult medical inpatients at a public Kenyan referral hospital. Insurance coverage correlated with a 53.8% relative risk reduction in in-hospital mortality. This association remained significant after controlling for several confounders and was even comparable in magnitude to the relationship between HIV and in-hospital mortality. Our study is unable to answer if it is the insurance coverage itself or underlying differences in the populations of insured and uninsured patients that are responsible for this association. However, in a country with an annual per capita health expenditure of approximately US$37 (28), our study suggests that expanding inpatient health insurance coverage at a similar or even lower annual cost for an entire household could potentially have a substantial impact on patient health outcomes. Future research will need to examine what aspects of having insurance coverage account for this significantly lower in-hospital mortality rate in this setting. As Kenya moves toward its Vision 2030 and goal of 'good health and reliable, equitable, affordable and sustainable healthcare services for the entire population of Kenya' (28), these answers will be needed to direct efforts in the most meaningful and effective directions, of which social insurance may ultimately prove to be one of the most valuable.
3,390.8
2014-02-11T00:00:00.000
[ "Economics", "Medicine" ]
Applications of machine learning in animal and veterinary public health surveillance Machine learning (ML) is being applied in animal and veterinary public health to tasks that were previously difficult to automate, such as the recognition of lesions on images captured at slaughtering or the mining of free text in electronic health records from veterinary practices for the purpose of sentinel surveillance. However, ML is also being applied to tasks that had usually been tackled with traditional statistical data analysis. Statistical models have extensively been used to infer relationships between predictors and disease to inform risk-based surveillance and, increasingly, ML algorithms are being used for prediction and forecasting of animal diseases in support of more targeted and efficient surveillance. While ML and inferential statistics can accomplish similar tasks, they have different strengths, making one or the other more or less appropriate in a given context. Keywords Animal health - Infectious disease - Machine learning - Surveillance - Veterinary public health. What is machine learning? The advancement in computing technology and power and the explosion of data generation and storage capability in the last decades have seen the increased use of machine learning (ML) in many areas. ML is a collection of methods built upon statistics, mathematics and computer science that enable automated pattern discovery and model building at scale. Many introductory articles describing the various ML techniques have been produced targeting researchers and scientists in different fields [1,2,3,4,5,6,7,8,9,10,11]. We do not intend to reproduce those efforts but aim to put the ML methods in context of their techniques and purposes in comparison to traditional statistical data analysis and to present ML solutions to specific surveillance tasks that cannot effectively be addressed by traditional statistical data analysis. In this section we will contrast unsupervised ML with the use of descriptive statistics, and supervised ML with the use of statistical modelling (inferential statistics) to highlight the similarity in the approaches they use and the differences in purposes. Descriptive statistics summarise relationships between continuous variables and between categorical variables, while similarity measures such as Euclidean distance or Manhattan distance summarise likeness between observations. Although it is possible to comprehend these descriptive statistics in smaller settings, the information can quickly become difficult to synthesise with an increased number of variables and observations. Unsupervised ML techniques basically explore and process further these descriptive statistics to discover hidden patterns and groupings in the data and to extract useful features from the data. The main tasks unsupervised ML is used for are dimension reduction, clustering and association rule mining.
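As an illustration of two of the unsupervised tasks just listed (dimension reduction and clustering), here is a minimal, hypothetical sketch using scikit-learn; the data and variable meanings are invented for illustration and are not taken from any cited study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical herd-level data: rows are herds, columns are correlated descriptors
# (e.g. herd size, purchases, animal movements); values are invented for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))

X_std = StandardScaler().fit_transform(X)                    # put variables on a common scale
components = PCA(n_components=2).fit_transform(X_std)        # dimension reduction to 2 components
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(components)  # clustering

print(components[:3])          # coordinates of the first herds in the reduced space
print(np.bincount(clusters))   # how many herds fall into each cluster
```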
Dimension reduction techniques such as principal components analysis are frequently used to reveal hidden patterns in high-dimensional interrelated data [12]. They are used, for example, to summarise a large number of correlated bioclimatic variables or to assist in visualising population structure in genetic variation [13,14]. Machine learning in animal and veterinary public health The scope of artificial intelligence (AI) in the context of public health has recently been reviewed by Schwalbe and Wahl [23], who identified four categories of AI-driven health interventions: 1) diagnosis; 2) mortality and morbidity risk assessment; 3) disease outbreak prediction and surveillance; and 4) health policy and planning. It is possible to identify recent contributions of ML to animal and veterinary public health that broadly fall within these categories, as well as others that would not clearly fit within any of them. As in human healthcare applications, signal processing methods in combination with ML can be used to enhance the performance of diagnostic or classification systems in animals or herds. Promising results have been obtained, for example, when convolutional neural networks were used to recognise and quantify specific lesions on digital images captured during routine slaughtering of pigs [24]. Improvements to diagnostic performance by applying ML are not limited to imaging data; classification tree analysis has been shown to be able to enhance the sensitivity of the classification regime on which the eradication programme for bovine tuberculosis in the United Kingdom (UK) is based [25]. Decision trees are a method of supervised learning that can be used for regression or classification tasks. They consist of a tree-like structure where each node represents a single input feature and, for numeric features, their split value. The final nodes after which no further splits take place are referred to as the leaves of the tree and represent the output that is used to classify or predict. Identification of the best feature and threshold value to split the data is carried out in order to generate the most homogeneous sub-nodes with respect to the outcome of the tree. Decision trees are one of the most widely used ML methods and the key component of other algorithms such as random forests; a minimal sketch of a single tree is shown below. With regard to the second domain of application, ML has been used, for example, to attempt to predict cases of lameness in dairy cows based on milk production and conformation traits [26]. The predictive performance of the classifiers built in this study was suboptimal but, as acknowledged by the authors, it could possibly be improved by expanding the spectrum of data with which the models were trained. Indeed, this study illustrates how the capacity of ML algorithms to accurately predict presentation of a multifactorial condition, such as lameness in dairy cattle, relies on them being trained on data that captures the wide array of disease determinants. The ability of ML algorithms to generate real-time risk predictions based on a broad range of risk factors was the motivation for the use of ML to expand conventional risk prediction approaches and generate daily predictions for highly pathogenic avian influenza risk for poultry farms in the Republic of Korea [27], an application that falls within the third category of AI-driven interventions listed above.
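A minimal, hypothetical sketch of the decision-tree classifier described above, again using scikit-learn; the data and feature meanings are invented, not taken from the cited studies.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented stand-in for herd/animal records: features could represent e.g. milk yield,
# parity or movement counts; the binary outcome could represent test status.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)  # shallow tree to limit overfitting
tree.fit(X_train, y_train)

print(export_text(tree))                          # the learned splits: one feature and threshold per node
print("test accuracy:", tree.score(X_test, y_test))
```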
As for applications for health policy and planning, we are not aware of the use of ML algorithms to support allocation of resources for animal disease surveillance in the same way they have been used in public health resource allocation [28]. On the other hand, ML has been used to generate information in order to support animal health surveillance planning and outbreak response. In a recent example, to address the lack of comprehensive and accurate poultry population data in the United States of America (USA), Patyk et al. developed an automated machine learning process to locate commercial poultry operations and predict their size and type in the USA. The authors used a supervised ML algorithm to detect poultry operations from aerial imagery [29]. In recent years, there has been a rapid expansion in the application of ML to very diverse challenges in animal health, some of which do not entirely fall within the above areas of application, which mostly refer to the use of supervised algorithms for the purpose of prediction or classification. Unsupervised ML methods have been used, for example, to discover underlying structure in poultry condemnation data to uncover potential indicators for broiler chicken health and welfare surveillance (cluster detection and association rule mining) [21, 30] and to classify cattle herd types to inform control and surveillance of endemic diseases (dimension reduction) [31]. Supervised ML methods for regression/classification have also been applied to animal and veterinary public health challenges beyond the four domains identified by Schwalbe and Wahl [23], a recent example being the identification of carnivore and bat species not recognised as reservoirs of rabies with trait profiles suggesting their capacity to be or become reservoirs [32]. An important emerging area of application of ML algorithms in the context of animal health surveillance is the analysis and extraction of information from clinical records for the purpose of syndromic surveillance [33]. Recent studies have shown the potential of applying machine learning algorithms to automate mining of free-text data in clinical and post-mortem reports; an application that can greatly facilitate the adoption of animal health syndromic surveillance [34,35,36]. At farm level, precision technologies are providing farmers and veterinarians with large amounts of data, the analysis of which can greatly support health and production management. Machine learning algorithms are central to the analysis of such data and, as for text data, they can enable their use for the purpose of syndromic surveillance [37,38]. In summary, due to their diversity and versatility, ML algorithms are being applied to an increasing range of tasks in animal and veterinary public health. In addition to broad domains of application analogous to those recognised in the field of global health, more specific uses of ML to address particular tasks continue to emerge. Selected examples are presented in the following section. Examples of application in animal and veterinary public health surveillance Use of machine learning to maximise probability of pathogen detection As described above, ML can be applied for different purposes within the context of animal and veterinary public health. In the area of surveillance it can be used to determine the likelihood of pathogen detection.
This allows researchers to prioritise samples or cases that have the highest probability of being positive, ensuring resources and laboratory capacities are focused on these samples, and to assist in the design of any future programmes of surveillance. Such approaches have been applied to animal disease, but also to foodborne disease [39] and plant diseases [40], and make the most of the metadata associated with the biological samples or cases, such as geographical location, type/age of host, etc. As an example, Walsh et al. [41] used gradient boosted trees, which are an extension of classification trees. Another example of application of ML to generate insights into potential reservoirs of disease is the study by Wardeh et al. [50]. Predictions of associations between known viruses and potential reservoirs of disease (zoonotic and non-zoonotic) were obtained with an ensemble of six models using a large data set of mammal-pathogen interactions. The results highlighted that current knowledge is likely to heavily underestimate the number of existing associations, particularly in wild and semi-domesticated mammals. An application of ML in the context of disease surveillance that deserves special mention is the exploration of genome sequencing data. The characteristics of these data (large, complex and hiding patterns that would be challenging to determine via other means) make ML methodologies ideal for their analysis. Furthermore, sequencing data is now becoming more readily available due to reduction in cost and the increase in through-put within veterinary and public health institutes. Examples of tasks relevant for design and implementation of infectious disease surveillance, which have been successfully accomplished by applying ML to whole genome sequencing data, include source attribution [51], assessment of pathogenicity [52], prediction of antibiotic resistance phenotypes [53] and prediction of clinical outcomes [54]. Consider training a meta-model Each ML method will have different strengths and weaknesses, and it is difficult to know a priori which approach will work best for any given problem. Users can also apply a stacking ensemble approach, where multiple methods are applied in parallel and the final model is a weighted combination of the predictions from all the models (see the sketch after this section). Consider the transparency of the approach A disadvantage of ML algorithms when compared to statistical methods such as regression is the limited 'interpretable' information they provide beyond their immediate task (e.g. classifying observations). For example, neural networks have been found to be very effective at making predictions where there are complex non-linear relationships between variables, but they might be unsuitable for identifying individual risk factors from which to target farms for surveillance or control measures. Consider whether the main objective is to explain or to predict As the main focus of ML algorithms is on prediction rather than explanation, there can be differences in the variables that are included in the final predictive model between ML and classical statistics. While explanation is not the primary aim of ML methods, some of the factors that are found to be important for prediction by ML algorithms could be the target of further investigation and could shed light on causative explanations, even where they were not found to be statistically significant by classical statistics approaches.
If the intention is to build an ML model in order to predict disease occurrence, the algorithm should ideally be trained on data that capture the array of disease determinants. This is particularly important when trying to predict the occurrence of multifactorial conditions. Consider the balance of domain expertise and machine learning expertise For example, Sperschneider [40] suggests that 5% of the time will be spent training the model but 95% selecting the most appropriate features, which needs biological/epidemiological expertise. Likewise, ML expertise is needed to ensure that the correct model is applied for the available data, and that overfitting is avoided.
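A minimal, hypothetical sketch of the stacking ensemble mentioned under 'Consider training a meta-model' above, using scikit-learn; the base learners and data are illustrative choices, not those of any cited study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=1)  # invented data

# Several base learners are trained in parallel; a logistic regression meta-model
# learns how to weight their out-of-fold predictions.
stack = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=4, random_state=1)),
        ("forest", RandomForestClassifier(n_estimators=100, random_state=1)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)

print("cross-validated accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```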
3,019.8
2023-01-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Blockchain Infrastructure and Growth of Global Power Consumption The paper examines the link between cryptocurrency implementation in the financial sector and energy consumption worldwide. The underlying mechanism of this blockchain infrastructure is described, and practical cases of its adoption in various segments of the financial sector are provided. This paper tries to explain the power consumption of cryptocurrency mining using the cases of Bitcoin, Ethereum, Monero and Litecoin. Since mining is not regulated by the state, and is even banned in some countries, it is difficult to find accurate data on how much electricity is spent on it. The Herfindahl–Hirschman method is used to estimate the concentration of the crypto market. INTRODUCTION Blockchain is one of the most popular terms associated with changes in the technological paradigm taking place within the framework of the so-called "fourth industrial revolution" (Bech and Garratt, 2017; Byström, 2016). This concept came into use not only in professional but also in quasi-professional forums, as well as in discussions in the media. However, people do not always pay due attention to the mechanism of its functioning and to the identification of potential benefits and difficulties associated with its implementation. This is also true for the financial sector, where the blockchain can be widely used as a technological basis for new instruments to attract external financing and organize corporate governance. With its help, it is possible to reduce the unproductive costs of financial institutions, which even in the US and leading European countries make up at least 2% of the attracted resources. However, this value has not decreased over the past decades (Bazot, 2017; Philippon, 2016). In this context, it is advisable to analyze the mechanism of functioning of the blockchain in conjunction with the most significant examples of its use in finance. As such, innovations in the organization of exchange trading, investment and commercial banking, insurance and audit, together with accompanying changes in approaches to corporate governance and financial analysis, are considered. Transactions are recorded in blocks together with their characteristics; the key one is the timestamp of registering a single transaction in the block and of forming the block as a whole. LITERATURE REVIEW The idea to organize the storage of information by means of related blocks was proposed originally by cryptography specialists (Haber and Stornetta, 1991). They considered it possible to develop a digital document (register) that records the time at which an intellectual property right arises. In this case, the creators of creative products themselves, to whom the rights arose, had to submit the relevant information before anyone else could reproduce it. The idea of decentralized filling of interconnected information blocks, along with the ability of all participants to verify the correctness of their filling, was developed in 2008, when an algorithm that could be implemented in practice was proposed (Nakamoto, 2008). In 2009, the first cryptocurrency (bitcoin) was released on its basis. Blockchain technology, which is the basis of bitcoin, allows information about transactions with a total volume of 1 megabyte to be combined in one block. The formation of one block takes 10 min on average. A chain of blocks is formed by hash functions, a cryptographic technology that allows information about transactions made in the previous block to be encoded and embedded into each subsequent one.
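A minimal Python sketch of the hash-linked chain just described, using hashlib; it illustrates the principle only and is not a model of Bitcoin's actual block format.

```python
import hashlib
import json
import time


def block_hash(block: dict) -> str:
    """Hash header of a block: covers its transactions, timestamp and the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def add_block(chain: list, transactions: list) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)


chain: list = []
add_block(chain, ["A pays B 1 coin"])
add_block(chain, ["B pays C 0.5 coin"])

# Tampering with an earlier block breaks the link to every later block,
# which is why such changes are easy for other participants to notice.
chain[0]["transactions"] = ["A pays B 100 coins"]
recomputed = block_hash({k: v for k, v in chain[0].items() if k != "hash"})
print(chain[1]["prev_hash"] == recomputed)  # False: the chain no longer verifies
```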
This principle of chain formation practically guarantees its invulnerability to fraudulent attempts to change information about transactions in one of the blocks: a person who undertook a hacker attack would have had to make changes in all subsequent blocks by changing their hash headers. It is obvious that blockchain users would easily notice these attempts, since the emerging block chain is fully available for their monitoring (see footnote 1). In addition, it is extremely difficult in terms of the resources required, since to "rewrite" one block significant processing power is needed, which leads to high energy consumption. It is worth noting that decentralized blockchain technology is based on the computational efforts of the so-called miners, who use special equipment to identify a suitable (from a cryptographic point of view) hash header for each block. This search is carried out by trial and error, and the miner who finds the correct hash header receives a reward that is fixed in bitcoins. To a large extent, the work performed by miners provides protection of blockchain technology from hacker influences: the more resource-intensive the process of adding an additional block, the higher the degree of security (Mikhaylov, 2018b). Commissions are an additional source of income for miners that are paid for the accelerated recording of information about a particular transaction in the emerging block (see footnote 2). Footnote 1: When making transactions, confidentiality can be maintained by using nicknames or special protocols that allow completely anonymous transactions. Footnote 2: The limit on the amount of information to be recorded in each block objectively reduces the ability to receive commission income; blockchain technologies that underlie cryptocurrencies alternative to bitcoin allow more than 1 megabyte of transaction information to be stored in a block, as well as a higher rate of block formation. METHODS In terms of economic theory, the organizational principles of blockchain operation can significantly reduce the costs associated with the verification of transactions and the creation of a distributed network (Catalini and Gans, 2016). This creates the potential for large-scale transformation of existing markets and the formation of new ones. Therefore, the blockchain can be considered as an example of a general-purpose technology, which is a fundamental factor of long-term economic growth. Economic development can then be presented as a succession of technologies of this kind. Nevertheless, it does not necessarily occur at regular intervals: there may be innovative pauses, which are often resolved through crises (Mikhaylov, 2018a). In this sense, the development and implementation of the blockchain following the global financial crisis of 2007-2009 looks symbolic. The American researchers of the digital economy Catalini and Gans (2016) believe that further penetration of blockchain technologies will be faster in areas where a high degree of standardization of transactions has already been achieved or where the state itself is ready to implement these technologies. The first case concerns the development of so-called smart contracts that support, for example, foreign exchange transactions of banks, the trading of futures contracts, etc. At the same time, obviously, there will be a need for an external intermediary that plays the role of the operator of this technology, but the transactions between the counterparties will be carried out in a decentralized manner. In the second case, a lower degree of decentralization is envisaged: the functions of the technology operator and verification of the authenticity of transactions are reserved by the state.
Sometimes this approach is called permissioned blockchain technology. Its application mainly covers the creation of public goods: Maintenance of property registers, issuance of official documents, etc. For example, the Massachusetts Institute of Technology introduced blockchain-based accounting of issued diplomas: In the summer of 2017, a group of 111 graduates was offered the option of receiving, along with the traditional format, electronic diplomas whose authenticity can be certified for employers and other interested parties using blockchain. Leading universities in China and India, where fake diplomas are an issue, are considering introducing similar approaches. In Sweden and Brazil, land rights are registered on the basis of blockchain technology. Integration of blockchain with the internet of things is also promising. For example, air pollution sensors or weather sensors can transmit local information to a common network, including on a reimbursable basis, when such data transmission is mediated by payments using cryptocurrencies. Smartphones and other mobile devices (tablets) can be equipped with additional chips for cryptocurrency mining. Blockchain can increase the transparency of ownership of joint-stock property (Yermack, 2017). Similar to registers of various property rights, this technology allows changes in shareholders' stakes to be recorded. With its widespread use, this would increase the efficiency of the stock market as a whole by reducing information asymmetry and dramatically complicating insider trading (Nyangarika, 2019a; Nyangarika, 2019b). The most visible part of the blockchain infrastructure is the crypto market. To characterize its structure, we use the market concentration ratio calculated from the capitalization of the four leading digital currencies (CR-4) and the Herfindahl-Hirschman Index (HHI), defined as HHI = Σ_i S_i^2, where S_i is the market share of cryptocurrency i in the crypto market and N is the number of cryptocurrencies. We also use a normalized Herfindahl index, H* = (H − 1/N)/(1 − 1/N) for N > 1 and H* = 1 for N = 1, where again N is the number of cryptocurrencies in the market and H is the usual Herfindahl index. In general, the stock exchange infrastructure is promising for the use of blockchain technologies. In addition to registration and depository activities, they can be used to accelerate and reduce the cost of clearing operations. The first platform to transfer this kind of transaction to the blockchain was the Sydney stock exchange. NASDAQ, the London stock exchange and a number of other leading securities trading centers are currently working on similar solutions. After the global financial crisis of 2007-2009, the regulation of trade in derivative financial instruments was tightened. In particular, settlements between participants in derivatives trading must now go through a central counterparty performing clearing. RESULTS The cost of break-even bitcoin mining, including the cost of electricity and depreciation, is about $5,000, according to unnamed experts cited in the press.
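The break-even figure quoted above can be cross-checked with a back-of-the-envelope calculation of the electricity cost of mining one bitcoin. The miner specification, network hash rate, electricity price and block reward below are assumed illustrative values, not data reported in the paper.

```python
def electricity_cost_per_btc(miner_ths=14.0, miner_kw=1.4, network_ehs=40.0,
                             price_per_kwh=0.05, block_reward=12.5,
                             blocks_per_day=144):
    """Expected electricity spend per mined BTC for one machine (assumed figures)."""
    network_ths = network_ehs * 1e6              # EH/s -> TH/s
    share = miner_ths / network_ths              # expected fraction of blocks won
    btc_per_day = share * block_reward * blocks_per_day
    kwh_per_day = miner_kw * 24
    return kwh_per_day * price_per_kwh / btc_per_day

print(f"electricity cost per BTC ~ ${electricity_cost_per_btc():,.0f}")   # roughly $2,700
```

Adding hardware depreciation on top of this electricity estimate is what brings the total towards the $5,000 break-even level mentioned above.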
Bitcoin fell in December 2018 to a price of $3,200, which is 80% lower than a year earlier, notes the FT. The bitcoin hash rate, a value that indicates how much computing power (and hence energy) miners deploy, has fallen by more than 40% since August. This implies that about 1.5 million bitcoin mining farms have been shut down since September 2018. The most profitable liquid cryptocurrencies to mine are XMR and LTC (Table 1). Even though blockchain technologies can lead to a large-scale transformation of the financial sector, contributing to new forms of capital raising and significant cost savings on standard transactions, it is premature to argue that digital currencies based on them will be able to seriously compete with traditional ones in the coming years (Narayan et al., 2016; Narayan and Sharma, 2011). On the back of the so-called fourth industrial revolution, the qualitative characteristics of demand are changing. In particular, the environmental friendliness of electricity generation is becoming important. Experts, referring to data of the International Energy Agency, note that electricity generation is a source of 42% of anthropogenic greenhouse gas emissions, which leads not only to global warming but also to an increase in government and business spending on environmental and social programs in the field of health (Nandha and Faff, 2008). Experts also note that the digital transition in the electric power industry not only increases the efficiency of the traditional energy system, but also opens up new opportunities for involving distributed generation in energy exchange, including generation based on renewable energy sources, energy storage systems, and devices and complexes with controllable consumption, and for organizing a variety of energy services (Table 2). There is a myriad of cryptocurrencies: According to the portal coinmarketcap.com, at the end of April 2018 their number approached 1600. 3 Externally, they have a number of similarities with fiat money issued by central banks, but they do not perform, or do not fully perform, the prescribed set of monetary functions. Cryptocurrencies only partly possess the function of money as a universal equivalent, or measure of value. Currently, prices for a very limited range of goods and services are denominated in digital currencies. If we consider the most famous of them, bitcoin, then according to the portal coinmap.org it was accepted at the end of April 2018 by only about 12.3 thousand points of sale worldwide 4 . They are distributed very unevenly: The highest concentration is observed in Western Europe and the United States, with a sporadic presence in South-East Asia and Latin America. At the same time, most of them are companies specializing in online trading. The number of well-known offline sellers that accept payments in bitcoins is very small: One can mention the computer equipment manufacturer Dell and two air carriers, Air Baltic and Air Lituanica. Since bitcoin does not fully perform the function of money as a measure of value, it is difficult to use it as a means of payment. Along with a relatively narrow geographical area of active use, operational risks act as a limiting factor. Firstly, in a number of countries (China, Vietnam, Iceland, Bolivia, Ecuador) transactions using bitcoin are prohibited or are in the "gray" zone. In the vast majority of national jurisdictions, its status has yet to be determined. Therefore, international payments using bitcoin are often carried out "at your own risk."
Secondly, the supporting infrastructure is vulnerable, as is often the case with new digital services. This is especially true for e-wallets, which are used for temporary storage of funds in cryptocurrency: They are sometimes deliberately bankrupted by their owners and are subject to hacker attacks. The non-transparent nature of many bitcoin-mediated transactions also has a negative impact. The legal gaps related to this cryptocurrency do not allow one to exclude the possibility of transactions aimed at laundering criminal proceeds, supporting terrorist organizations, etc. With this in mind, it can be assumed that transactions in bitcoins and other cryptocurrencies could be banned in the leading financial jurisdictions (USA and EU) if cases of terrorist financing, whether actual or potential, are detected. It is worth mentioning that the FATF has been wary of the emergence of cryptocurrencies. In 2015, it proposed assessing the feasibility of admitting them to circulation, among other instruments, using a risk-based approach that compares the benefits and costs of their official recognition at the state level (FATF, 2015). As for the performance of bitcoin as a means of accumulation, the very high volatility of the cryptocurrency's exchange rate plays a negative role here. After an almost "vertical take-off" there was a sharp correction, with intraday fluctuations in its rate reaching several tens of percent. During February-April 2018, it periodically fell below 7 thousand dollars (the peak was reached at $20,000). This volatile dynamic has led experts to talk about a high probability of an asset price bubble. According to a review of studies on the modeling of bitcoin, the first signs of explosive dynamics of this cryptocurrency's exchange rate against the dollar appeared in 2012-2013 (Chapman et al., 2017; WTO, 2017). We verify these statements by analyzing the ratio of the actual movement of the bitcoin exchange rate to the long-term stochastic trend of its dynamics, which is extracted with the Hodrick-Prescott filter (Figures 1-4). The results indicate the presence of boom episodes in this market, but do not allow us to say directly that they turned into uncontrolled price growth, or a "bubble". Although there are no standard techniques for recognizing "bubbles" in financial markets, the identified boom episodes are often compared with some abnormal levels for this purpose. As such, the levels corresponding to one and a half or two standard deviations (SD) of the difference between the actual and trend dynamics are used (Jorda et al., 2015). It should be borne in mind that the high volatility of the bitcoin exchange rate is associated with the relatively small "depth" of this segment of the cryptocurrency market. According to coindesk.com, the capitalization 5 of bitcoin on April 25, 2018 was about $160.7 billion. Other segments of the cryptocurrency market competing with it are characterized by significantly lower capitalization: For example, in the case of Ethereum, the most famous alternative to bitcoin, this parameter was approximately $66 billion. By the standards of modern financial markets, these indicators can hardly be considered impressive. The total capitalization of digital currencies in early 2018 reached $700 billion 6 . This value is comparable to the capitalization of the smaller equity markets of Brazil ($759 billion) and Spain ($704 billion) at the end of 2016, and accounts for only 2.6% of the capitalization of the U.S. market 7 .
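The boom-detection procedure just described can be sketched as follows: extract the stochastic trend of the BTC/USD series with the Hodrick-Prescott filter and flag observations whose deviation from the trend exceeds one and a half or two standard deviations. The price series and the smoothing parameter for daily data below are placeholders, since the paper does not report these choices.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Placeholder daily BTC/USD series; in the paper's setting this would be the
# actual exchange-rate history over 2012-2018.
rng = np.random.default_rng(0)
btc_usd = pd.Series(np.exp(np.cumsum(rng.normal(0.002, 0.05, 1500))) * 1000)

# The smoothing parameter for daily data is an assumed Ravn-Uhlig-style scaling.
cycle, trend = hpfilter(np.log(btc_usd), lamb=1600 * 90 ** 4)

sigma = cycle.std()
boom_15sd = cycle > 1.5 * sigma     # boom episodes at the milder threshold
boom_20sd = cycle > 2.0 * sigma     # episodes flagged as potential "bubbles"
print(int(boom_15sd.sum()), int(boom_20sd.sum()))
```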
5 The capitalization of bitcoin as a segment of the cryptocurrency market is calculated by analogy with the capitalization of the stock market -as a product of the number of bitcoins in circulation at the value of their current exchange rate to the U.S. dollar. High volatility of the cryptocurrency exchange rates and relatively low capitalization make it possible to assert that even bitcoin does not fully meet the criteria of information efficiency of the market. As shown in a number of empirical studies (Urquhart, 2016;Bariviera, 2017;Kumar Tiwari et al., 2018), bitcoin exchange rate shows signs of improvement in market efficiency not earlier than since 2014. Therefore, institutional investors with significant investment volumes, but moderate risk appetite, will not come to the cryptocurrency market soon. Thus, it is doubtful whether they have the key function of money -absolute liquidity, and at the moment can hardly be considered to be real money. It is appropriate to draw parallels between the competition among crypto-currencies and the concept of private money by F. Hayek. It presupposes the adversarial nature of different currencies, which should result in the rejection of inefficiently managed monetary systems (Cong and He, 2017;Makrichoriti and Moratis, 2016). For now, it is difficult to say that in relation to cryptocurrencies this process is dynamic. Judging by the changes in the market concentration index calculated by the capitalization of the leading digital currencies (CR-4) and the HHI, it is clear that this market remained essentially monocentric in 2014-2016. Only in the second half of 2017, with the drop in the share of bitcoin to 40%, there was a noticeable decrease in concentration (Table 3). This balance of power among digital currencies is associated with their high volatility, demonstrating limited opportunities for effective diversification and, as a result, a high probability of "herd behavior" of investors putting their money in these assets. Sovereign States are justifiably distancing themselves from direct participation in such competition with digital assets that are not linked to a single emission center. But some of them do not exclude the use of blockchain technologies for the transition to electronic money along with paper, and then instead (Mbiti and Weil, 2011;Osah and Kyobe, 2017). In this case, such an initiative should be interpreted as an implicit attempt to carry out a confiscation monetary reform, since the expected flight from the official currency, Bolívar, is taking place against the background of hyperinflation. In addition, it is also a step towards restarting the country's international settlements in the conditions of economic sanctions and a steady reduction in gold and foreign exchange reserves. It is significant that the Venezuelan authorities have a negative attitude to the resolution of operations in bitcoins, apparently because that they believe that with the help of this cryptocurrency a massive withdrawal of capital from the country can be carried out, as happened during the political crisis in Argentina in 2015 (Raskin and Yermack, 2016). In the article by Luther, Salter, 2017, it is also shown that against the background of the European financial crisis in the countries whose banking system was in the most vulnerable position (Spain, Italy), the number of downloads of applications that allow the purchase and sale of bitcoins has increased significantly. 
The authors found that the same reaction of the population was typical for Cyprus, where, against the background of the banking crisis in 2013, an extreme form of financial repression policy (partial deposit haircuts) was applied. CONCLUSION The digital currency itself will become the third element of the monetary base along with cash and the reserves of commercial banks. The rate of its emission will depend on the activity of users' transactions. At the same time, one cannot exclude, if necessary, the introduction of additional discretionary elements, such as the establishment of negative interest rates, as well as temporarily excessive emission of cryptocurrency to stimulate economic growth. Blockchain technologies promise significant changes in the financial sector. A direct result of their implementation should be a significant reduction in the costs associated with the operation of financial intermediaries and markets. Decentralizing the interaction of economic agents and eliminating the excessive costs associated with many financial transactions can create conditions for more intense competition among existing financial institutions and reduce entry barriers for new players. In the long term, this will allow the transition from a predominantly oligopolistic structure of the financial sector in most countries to a more competitive structure - a contestable market, where large financial institutions may be present, but their market power is limited by the threat of virtually unimpeded entry of more flexible, innovation-oriented newcomers. Nevertheless, for the practical implementation of such a scenario and for achieving a noticeable gain in public welfare, it is necessary to adequately manage the risks associated with digital financial innovations, especially in terms of admission to free circulation and regulation of investments in cryptocurrencies. At the moment, bitcoin consumes mainly very cheap electricity. As a result, the bitcoin network typically uses energy where it is abundant and cannot be stored or exported. In countries where hydrocarbons are difficult to export, for example landlocked countries, bitcoin is mined using this otherwise stranded, "harmful" electricity. Most miners are powered by electricity from hydroelectric power plants, geysers and geothermal vents that cannot be transported or stored. Bitcoin will continue to seek out cheap energy sources that have no other use, since mining in cities or industrial centers will remain unprofitable: there, air conditioning or water heating alone may cost more than a miner can afford. If the price of bitcoin stabilizes and enough miners enter this market, in the near future we can expect a fivefold increase in their energy consumption. In the distant future, bitcoin mining will become less and less profitable. The current block reward (12.5 bitcoins per block) will be halved every 4 years until it effectively reaches zero. Transaction fees (currently about two bitcoins per block) are likely to remain the same. In this case, the energy consumed will depend on the size of the fees and the price of bitcoin. If the price reaches $1 million per bitcoin, two bitcoins of fees per block will lead to a situation where every 10 min electricity worth up to $2 million is burned. In light of all this, does bitcoin look like such a heavy burden on world energy?
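The closing arithmetic above can be turned into a crude upper bound on the network's power draw: in equilibrium, miners cannot spend more on electricity than a block pays out, so block revenue divided by the electricity price caps the average power. The price and fee figures are the scenario values from the text; the electricity price is an assumption.

```python
def implied_power_gw(btc_price_usd, btc_per_block, usd_per_kwh=0.05,
                     block_interval_min=10):
    """Upper bound on average network power if all block revenue buys electricity."""
    revenue = btc_price_usd * btc_per_block          # USD earned per block
    kwh_ceiling = revenue / usd_per_kwh              # energy affordable per block
    hours = block_interval_min / 60
    return kwh_ceiling / hours / 1e6                 # average power in GW

# Fee-only future from the text: $1,000,000 per BTC, 2 BTC of fees per block
print(f"{implied_power_gw(1_000_000, 2):.0f} GW")    # ~240 GW upper bound
```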
Given the tendency of bitcoin mining to use renewable resources and the fact that the traditional banking system is not environmentally friendly, it is possible that the cryptocurrency has a positive impact on the environment.
5,320
2019-07-01T00:00:00.000
[ "Computer Science", "Environmental Science", "Engineering", "Business" ]
Volatility persistence in the Russian stock market Abstract This paper applies a fractional integration framework to analyse the stochastic behaviour of two Russian stock market volatility indices (namely the originally created RTSVX and the new RVI that has replaced it) using daily data over the period 2010–2018. The empirical findings are consistent and imply in all cases that the two series are mean-reverting, i.e. they are not highly persistent and the effects of shocks disappear over time. This is true regardless of whether the errors are assumed to follow a white noise or autocorrelated process; this is confirmed by the rolling window estimation, and it holds for both subsamples, before and after the detected break. On the whole, it seems shocks do not have permanent effects on volatility in the Russian stock market. Introduction Financial market instabilities have become more frequent and acute in the era of globalisation (Bordo et al., 2001), and have raised concerns about the benefits of traditional portfolio diversification strategies. Those involving instruments based on the VIX volatility index (which is negatively correlated to equity returns) are thought to be particularly effective during periods of market turmoil for tail risk hedging (Whaley, 1993). The VIX is especially attractive to investors with high skewness preferences (Barberis and Huang, 2008). Unlike credit derivative instruments, the liquidity of VIX derivatives improves during periods of markets turmoil, when investors are in search of hedging instruments (Bahaji and Aberkane, 2016). The existing literature also shows the diversification benefits of VIX exposures in institutional investment portfolios (Szado, 2009). In particular, a VIX short future exposure in a benchmark portfolio triggers a positive expansion of the efficient frontier (Chen et al., 2011); moreover, the addition of VIX futures to pension fund equity portfolios can significantly improve their in-sample performance, whilst incorporating VIX instruments into long-only equity portfolios significantly enhances Value-at-Risk optimisation (Briere et al., 2010). Most of the studies mentioned above focus on the developed economies. By contrast, the present paper provides new evidence for the VIX in an emerging economy such as Russia. Moreover, it considers both the old and the new VIX constructed for the Russian stock market and analyses in depth the statistical properties of both (long-range dependence, non-linearities and breaks) in a fractional integration framework. Understanding the behaviour of the VIX is important because this index can be used as a predictor of stock returns and volatility, economic activity and financial instability. Further, it can be the basis of portfolio diversification strategies designed by domestic and international institutional investors. Specifically, the choice of the hedging effectiveness measure aimed at capturing the tail risk in the portfolio depends on the stochastic properties of the VIX. This is the motivation for the present study, which examines two different VIX measures (RTSVX and RVI) in a comparative framework in the case of Russia, a country for which very little evidence is available at present. The newly constructed RVI has replaced the originally created RTSVX in order to comply with the latest international financial industry standards and take into account feedback from market participants (see Section 2 for more details). The layout of the paper is as follows. 
Section 2 provides background information on the Russian VIX, Section 3 outlines the empirical methodology, Section 4 describes the data and the empirical findings, and Section 5 offers some concluding remarks. The VIX in the Russian Stock Market The idea of constructing a volatility index using option prices was first formulated at the time of the introduction of exchange-traded index options in 1973. In subsequent years the original methodology of Gastineau (1977) and Cox and Rubinstein (1985) was refined, eventually leading to the interpolation formula used for modern volatility indices,

VIX = 100 × sqrt{ [ T_1 σ_1^2 (T_2 − T_30)/(T_2 − T_1) + T_2 σ_2^2 (T_30 − T_1)/(T_2 − T_1) ] × T_365/T_30 },   (1)

where T_1 and T_2 are the times to expiration, expressed as a fraction of a year consisting of 365 days, for the nearby and far option series respectively; T_30 and T_365 stand for 30 and 365 days respectively, expressed as a fraction of a year; and σ_1^2 and σ_2^2 are the variances of the nearby and next option series respectively. There is only a limited number of studies on the Russian stock market, possibly because of the lack of long series of reliable data. As Mirkin and Lebedeva (2006) point out, Russian companies are more dependent on debt financing than equity financing since only about 6 percent of listed companies are traded on the largest Russian exchange; ownership in the equity market is highly concentrated; the Russian bond and equity markets are easily accessible to international investors; and the corporate bond market has proven to be highly profitable without any defaults. Russian financial markets are rather stable and integrated in terms of international capital flows (Peresetsky and Ivanter, 2000); the degree of financial liberalisation in Russia determines the strength of its international integration (Hayo and Kutan, 2005); since the Russian stock market is not cointegrated with the US one, investors should focus on the Russian VIX for predicting Russian stock market returns (Mariničevaitė & Ražauskaitė, 2015); in general, they have become more knowledgeable about the effects of the VIX on stock price indices for developed and emerging economies (Natarajan et al., 2014). 1 Methodology The concept of long memory was originally introduced by Granger (1980, 1981), Granger and Joyeux (1980) and Hosking (1981), and allows the differencing parameter required to make a series stationary I(0) to take a fractional value. Assuming that u_t is a covariance-stationary I(0) process (denoted as u_t ≈ I(0)) with a spectral density function that is positive and bounded at all frequencies, x_t is said to be integrated of order d (denoted as x_t ≈ I(d)) if it can be represented as

(1 − L)^d x_t = u_t,   t = 1, 2, … ,   (2)

with x_t = 0 for t ≤ 0, where L is the lag operator (L x_t = x_{t−1}) and d can be any real value. Then u_t is I(0), x_t is I(d), and d measures the persistence of the series. In such a case, one can use the following binomial expansion of the polynomial on the left-hand side of (2), valid for all real d:

(1 − L)^d = Σ_{j≥0} C(d, j) (−L)^j = 1 − d L + d(d − 1) L^2 / 2 − … ,

and thus, noting that L^j x_t = x_{t−j},

(1 − L)^d x_t = x_t − d x_{t−1} + d(d − 1) x_{t−2} / 2 − … .

The main advantage of this model, which became popular in the late 1990s and early 2000s (see Baillie, 1996; Gil-Alana and Robinson, 1997; Michelacci and Zaffaroni, 2000; Gil-Alana and Moreno, 2004; Abbritti et al., 2016; etc.), is that it is more general than standard models based on integer differentiation: it includes the stationary I(0) and nonstationary I(1) series as particular cases of interest when d = 0 and 1 respectively, but also nonstationary though mean-reverting processes if the differencing parameter is in the range [0.5, 1). 1 Other papers studying the Russian stock market and its volatility include Goriaev and Zabotkin (2006), Luukka et al. (2016) and Korhonen and Peresetsky (2016).
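As a minimal numerical illustration of the filter in (2), the sketch below generates the expansion coefficients recursively (π_0 = 1, π_j = π_{j−1}(j − 1 − d)/j) and applies them to a series; this only demonstrates the definition and is not the estimation procedure used in the paper.

```python
import numpy as np

def frac_diff(x, d):
    """Apply (1 - L)^d to a series via the truncated binomial expansion."""
    n = len(x)
    pi = np.empty(n)
    pi[0] = 1.0
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j      # expansion coefficients of (1 - L)^d
    out = np.empty(n)
    for t in range(n):
        out[t] = np.dot(pi[: t + 1], x[t::-1])   # sum_j pi_j * x_{t-j}
    return out

x = np.cumsum(np.random.default_rng(1).normal(size=500))     # an I(1) random walk
print(np.std(frac_diff(x, 1.0)), np.std(frac_diff(x, 0.4)))  # d = 1 recovers the i.i.d. shocks
```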
We estimate the fractional differencing parameter d, along with the rest of the parameters in the model, by using the Whittle function in the frequency domain (Dahlhaus, 1989; Robinson, 1994), under the assumption that the errors are uncorrelated and autocorrelated in turn. In particular, we adopt a parametric method that involves imposing a structure on the error term. Robinson's (1994) test is very convenient in this context since it is valid for any range of values of d and therefore does not require preliminary differencing; moreover, it allows the inclusion of deterministic terms such as an intercept and a time trend, and its limit distribution is standard normal. 2 Data and Empirical Results We analyse daily transaction-level data for both the old (RTSVX) and new (RVI) volatility indices obtained from the Moscow Exchange web database; the sample periods go from 7 December 2010 to 12 December 2014 and from 6 January 2014 to 9 February 2018 respectively. Appendix 1 provides some descriptive statistics. RTSVX has a slightly higher mean but is less volatile than RVI; further, it has a lower kurtosis coefficient, but a higher skewness one. 4a. The RTSVX index As a first step we estimate the following model:

y_t = α + β t + x_t,   (1 − L)^d x_t = u_t,   t = 1, 2, … ,   (3)

where y_t is the series of interest, in this case the original volatility index and the log-transformed data. 2 See Gil-Alana and Robinson (1997) for a description of the functional form of the version of the tests of Robinson (1994) used in this paper. Three specifications are considered, namely (i) without deterministic terms (i.e. α = β = 0 a priori in (3)); (ii) with an intercept (α is estimated and β = 0 a priori); and (iii) with an intercept and a linear time trend (as in equation (3)), assuming that the errors are uncorrelated (white noise) and autocorrelated (Bloomfield, 1973) in turn. Table 1 shows the estimated values of d with their 95% confidence intervals. The t-statistics imply that the time trend is not a significant regressor, so the selected specification is the one that includes an intercept only. Figure 1 reports the rolling window estimates of d. Under the assumption of autocorrelation, the estimates of d are initially around 0.8 and then decrease across the subsamples till the end of the sample; all of them are below 1, implying mean-reverting behaviour. Next, we test for breaks using the approach suggested by Bai and Perron (2003), which detects a break in the series; we then split the sample into two subsamples accordingly. The results for the two cases of uncorrelated and autocorrelated errors are presented, respectively, in Tables 2 and 3. The estimates of d are significantly below 1 in both subsamples, with both white noise and autocorrelated errors, and for both the original and the logged data. 4b. The RVI index The corresponding results for the RVI index are reported in Table 4. Conclusions This paper has applied a fractional integration framework to analyse the stochastic behaviour of two Russian stock market volatility indices, namely the originally created RTSVX and the new RVI that has replaced it (for both of which very limited evidence was previously available), using daily data over the period 2010-2018. The chosen approach is more general than those based on the I(0) v. I(1) dichotomy and provides useful information on the long-memory properties and degree of persistence of the series being analysed.
The empirical findings are consistent and imply in all cases that the two series are mean-reverting, i.e. their degree of persistence is limited and the effects of shocks disappear over time. This is consistent with the results reported in Cont and Fonseca (2002) and others on volatility in stock markets; it is true regardless of whether the errors are assumed to follow a white noise or an autocorrelated process, and it holds for both subsamples, before and after the detected break. The rolling window estimation reveals the presence of some degree of time variation, but this does not affect the general conclusion about the behaviour of the two series under examination. This type of volatility index can also be seen as a measure of market fear, which therefore does not seem to be permanently affected by shocks in the case of the Russian stock market. Moreover, given that the effects of shocks are not long-lived, there does not seem to be any need for strong policy measures to push the series back to their original trends. Finally, our findings represent useful information for investors aiming to design appropriate portfolio diversification strategies.
2,681
2020-01-01T00:00:00.000
[ "Economics" ]
Direct Arm Energy Control of the Modular Multilevel Matrix Converter The modular multilevel matrix converter is a type of converter used in medium-voltage ac-ac power conversion. Controlling the energy content of the converter arms is a prerequisite for proper converter operation, and at the same time a challenging control task. Several approaches that successfully deal with this challenge have been presented in the literature. However, almost all of them fall short when it comes to simplicity of implementation and ease of understanding by control engineers and researchers entering the field. This paper extends the direct arm energy control concept, already introduced for the class of modular multilevel converters, to the modular multilevel matrix converter. The presented control approach is characterized by an intuitive and simple implementation, the ability to control the arm energies to arbitrary values, and stable operation under all operating conditions, including grid imbalances. The validity of the control concept is demonstrated using an in-house-developed hardware-in-the-loop simulator of the modular multilevel matrix converter, with the control algorithms deployed to an industrial-grade controller. I. INTRODUCTION The modular multilevel matrix converter (M3C) was first introduced without arm inductors [1], similar to the standard modular multilevel converter (MMC) [2], which initially followed the same approach. The absence of arm inductors imposed limitations on the number of switching states due to the associated risks of inter-arm short circuits, so later publications almost exclusively employed arm inductors. This converter topology is used for medium-voltage ac-ac conversion tasks, typically for interfacing two three-phase (3PH) systems, or a 3PH and a single-phase (1PH) system, though it can be used to interface multiphase systems as well. It is applied in railway interties [3], where it connects a 3PH utility grid with a 1PH medium voltage (MV) railway grid, typically with a different frequency. The main benefits compared to standard solutions, e.g. with three-level NPC converters, are the possibility of eliminating a bulky transformer on the railway side, increased reliability, and elimination of the second-harmonic filter [4]. In general, the M3C can also be applied in MV motor drives, especially in low-speed drives with a constant torque characteristic [5], as well as with variable-speed synchronous machines in pumped-hydro storage power plants [6]. High reliability achieved through redundancy, high efficiency, and elimination of bulky filters and transformers come at the price of increased control complexity. One of the control challenges is to ensure that the energy content within each arm of the converter corresponds to its reference. In addition, it is necessary to distribute the energy content equally among the submodules (SM) of an arm, which is accomplished using different control techniques [2], [7], [8], [9]. Nevertheless, those control actions are completely decoupled from the arm energy control, and will not be discussed within this paper.
Reference [10] presents a generalized set of criteria for the arm energy balancing, applicable to different MMC-alike topologies, such as the standard MMC, delta and star STAT-COM, matrix MMC and Hexverter. Although the stability of the method has been confirmed by means of simulation, using the M3C as an example, this reference only mathematically formulates the criteria, and does not present a solution for any particular converter, neither does it provide guidelines for the implementation. In [11] authors use the approach from [10] to balance the energies inside a standard MMC converter, demonstrating that the application of this method requires a profound mathematical expertise in obtaining a solution. The first references successfully dealing with this topic [12], [13] propose a set of solutions for the M3C energy control, based on circulating currents and common-mode voltage injection, which might be used in different ways, depending on the operating mode. While it demonstrates the ability to create an arbitrary imbalance among the converter arms, implementation of this method is based on double-αβ0 transformations, which might not be an intuitive approach and requires a considerable amount of direct and inverse double-αβ0 transformations. Similar approach was used in [14], where the authors experimentally demonstrate the effectiveness of the balancing method. Apart from a significant amount of double-αβ0 transformations, validity of the method was not tested under unbalanced grid conditions. Improvement of the method presented in [13] was proposed in [15], where an additional transformation is performed on the internal αα, αβ, βα, ββ currents, in order to decouple specific frequency components. The same approach was used by the authors in [16] and [17], which use the M3C as an interface between the wind generator and the ac utility grid. While this approach reduces the delay of the filtering chain in each of the specific directions, the overall delay is determined by the slowest among them. As a result, no significant benefits are expected in terms of control dynamics, while the realization of the control method gets even more complicated. In addition, relationship between the controlled current components in diagonal directions and the final arm currents become even more complex, which makes it difficult to both understand and limit the influence of a certain control action on a particular arm current. Authors in [18] present a generalized approach for current control and energy balancing, applicable to any of the modular multilevel converter topologies. The approach is based on using circulating current components at both the grid and the load frequency, as well as common-mode voltages and circulating current components at arbitrary frequencies. The optimal solutions, in terms of minimal RMS of the induced circulating currents, were presented in the form of matrices, which yield circulating current and common-mode voltage references from the arm power references. Two matrices are presented, where each one of them is optimal for different operating ranges of the M3C. While the solution poses lower current stress on the converter for some operating points compared to [14], the implementation of the method is not simple, due to the fact that the resulting matrices are of high order, and the elements of the matrix are variables dependent upon the operating condition, which are calculated in real time. 
High number of different elements in the matrices that are either used as inputs or calculated in real-time increases the risk of an error during implementation of the method. In addition, a clear distinction between the application range of the two methods has not be presented, which could lead to a transient behaviour of the converter during the hard switchover between the two. Applicability of the method was not tested under unbalanced grid conditions. Although the proposal combines all degrees of freedom and yields an effective solution with the minimal current stress, its lack of simplicity of implementation might render it unattractive, particularly for engineers and researchers entering the field. Recent publications on this topic [19], [20], [21] address the energy control issue in the M3C by searching for an optimal combination of the circulating currents and a common-mode voltage based on the model-predictive control principles. Fundamentally, all these methods are based on double-αβ0 modeling of the converter, while the model-predictive control serves to find the most optimal solution. While all of them manage to find an optimal combination between the circulating currents and the commonmode voltage, these solutions are in the authors opinion quite complex for implementation. Besides multiple double-αβ0 and inverse double-αβ0 transformations, calculation of the optimal circulating currents involves the use of dynamic calculation of the optimal matrix coefficients. Even though the authors claim that these optimization problems can be resolved offline, they still require a thorough offline analysis for all operating points, and feeding the optimal coefficients for all operating points into the look-up tables. All these analyses become even more complex when unbalanced grid conditions, arbitrary energy references, and arbitrary common-mode voltage amplitudes come into play. Complexity of the existing solutions for the M3C energy control was recognized by the authors in [22], who proposed an energy balancing solution that simplifies the energy control problem in the M3C, compared to the existing solutions. Although effectively achieving the balancing task within the converter under symmetrical conditions, proposed solution is unable to deal with the energy control in cases when different energy levels are required in different arms (as in the case of a SM failure in one arm). In addition, the solution might not be able to ensure the decoupling of the inner energy control from the terminal currents during grid faults. Another issue with operation of the M3C arises when the frequencies of the two ac system it interconnects become similar or equal. In such a case, energy oscillations in the M3C arms become excessively high, resulting in a large ripple in the voltage of the arm capacitors. If not mitigated, excessively high voltage ripple would lead to overvoltages across SM capacitors, loss of the voltage generation capability, and consequently loss of control over the converter. Mitigation of such oscillations relies on introduction of a common-mode voltage and circulating currents, which together counteract the power components that provoke the oscillations [14], [15], [17], [23], [24], [25]. Regardless of the control method being applied, what all of them have in common is the fact that the arm current increases significantly due to the introduced circulating currents. It can go up to 50% above the rated arm current [5], [26]. 
Additionally, voltage oscillations in case when the supply and load frequencies are equal are still 2.5 times higher in case of the M3C compared to the back-to-back MMC solution, yielding a requirement for a higher installed energy in the M3C. All this facts contribute to the conclusion that the M3C is a converter suitable for interconnecting two ac systems of unequal frequencies, or an ac motor drive with a rated frequency far below the supply frequency. Therefore, the focus of the paper is on techniques for the arm energy control in the M3C, assuming that the frequencies of the interconnected systems are sufficiently different from each other. This paper presents an extension of the direct arm energy control methods, firstly derived for a standard MMC [27], [28], to the M3C. The methods presented within this paper are characterized by an intuitive and simple implementation, which might be of great value to the control engineers and researchers entering the field. In addition, it is able to independently control arm energies to arbitrary values, and is also valid under balanced and unbalanced grid conditions. Control methods presented in this papers are deployed to the industrial-grade controller, and verified on an in-housedeveloped hardware-in-the-loop (HIL) simulator of the M3C converter. The case of variable-speed drive is used as a test case. Section II presents the basic equations and operating principles behind the M3C. Derivation of the methods for direct arm energy control, originally developed for the standard MMC, are presented in Section III, whereas their extension to the M3C was presented in Section IV. Description of the realtime (RT) hardware-in-the-loop (HIL) simulator used for the control verification is given in Section V, whereas the results of the RT HIL simulations are provided in Section VI, while a brief discussion of the results is provided in Section VII. Finally, conclusions of the paper are outlined in Section VIII. II. MODELLING OF THE M3C Electrical schematic of an M3C is provided in Fig. 1. It interfaces a 3PH ac grid, characterized by its phase voltages v A , v B , and v C , with another 3PH grid/load, that can represent a 3PH ac machine. Without loss of generality, the terminals labelled as ABC will be further referred to as grid terminals, whereas the terminals labelled with 123 will be further referred to as load terminals. Each grid terminal of an M3C is connected with each load terminal via a dedicated arm. Arm is a cluster of FB SMs connected in series, together with an arm inductor, which permits arm current control. Under normal operating conditions, a cluster of SMs can be perceived as a controllable voltage source, which can generate an arbitrary multilevel voltage waveform. Therefore, for the sake of analysis of the terminal currents and voltages, we will assume that the clusters of SMs are acting as ideal voltage sources. Due to the symmetry among the arms, only a single arm will be considered during the following analysis. Applying a Kirchoff's law to the arm interfacing the terminals A and 1, yields: Since an arm interconnects a grid-side and load-side terminal, it will contain portion of both terminal currents. Additional arm current components at grid and load frequency might be injected to perform the arm energy control, labelled as i A1 and δi A1 , respectively. The arm voltage comprises terminal voltage components, as well as current control components, as shown in (3). 
Equation (4) shows that all the arm current components can be fully controlled by the arm voltage components. Consequently, terminal currents can be independently controlled, and decoupling between the two systems can be fully achieved. The total voltage across the SM capacitors inside an arm is a reflection of the total energy stored within these capacitors. The energy content is controlled by the arm power, which can be expressed as in (5). Contribution of the voltage components v A , v 1 , v , and v δ which are normally very small compared to the other voltage components in (3), is neglected. References [5] and [23] demonstrate that the M3C is a preferable solution for interconnecting low-speed, hightorque drives to the medium-voltage grid, or generally to interface two systems with different operating frequencies. Consequently, the two systems of unequal frequencies will be considered here. Arm power components can have non-zero average values over their periods only in case when the voltage and current component are of equal frequencies. To analyse the arm power components, the following definition of the quantities from (5) is adopted: The additional arm current components, as well as common-mode voltage components are considered to be either at the grid or load frequencies, despite other different possibilities. The reason for such a choice is the fact that these components can interact with already present system variables (voltages and currents), and thus contribute to the arm energy control. As a result, injection of voltage and current components at another frequencies is avoided. Combining (6)-(12) into (5), and highlighting the components with non-zero average value gives the expression for the average arm power: The first two terms in (13) represent the average power delivered to the arm from the grid and taken by the load, respectively. Another two power terms, P 1 and P 2 , are a consequence of interaction between the additional arm current components at the grid and load frequency with the respective terminal voltages. The last two terms, P 3 and P 4 , are a consequence of interaction between the common-mode voltage components and the terminal currents. Since only the first two terms in (13) normally exist in the average arm power, one might conclude that those two terms are sufficient to control the arm energy. Namely, by controlling either of the terminal current component, i.e.Î (ω g ) A1 orÎ (ω l ) A1 , one can control the arm power, and thus the arm energy content. However, due to different constraints, it is not possible to control the arm terminal currents independently which is the reason for the additional arm current and common-mode voltage injection. III. ARM ENERGY CONTROL To control energy content within each arm, a feedbackbased approach is used. Arm energies are calculated from the measured SM voltages v c as W = C SM N k=1 v 2 c,k /2, and further filtered to exclude all harmonic components at the frequencies 2ω g , 2ω l , (ω g + ω l ), (ω g − ω l ), retaining only the dc values. Arm energy references are calculated based on desired SM voltages, as W * = NC SM (V * c ) 2 /2, where C SM represents capacitance of a SM, whereas N denotes the number of SM within an arm. Nine PI controllers are used, to control the average energy content within each arm, giving the power references as their outputs. Referring to the first two terms in (13), energy content within an arm can be controlled either by controlling the grid, or load current. 
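A minimal sketch of the arm-energy bookkeeping just described, with the capacitance, submodule count and voltage set-point chosen as illustrative values (the filtering of the pulsating components at 2ω_g, 2ω_l and ω_g ± ω_l is omitted):

```python
import numpy as np

C_SM = 10e-3      # submodule capacitance in farads (assumed example value)
N = 8             # submodules per arm (assumed; matches the HIL arm units described later)
V_C_REF = 900.0   # desired submodule voltage in volts (assumed example value)

def arm_energy(v_c):
    """W = C_SM * sum(v_c,k^2) / 2, from the measured submodule voltages of one arm."""
    return 0.5 * C_SM * np.sum(np.square(v_c))

def arm_energy_reference(v_ref=V_C_REF, n=N, c_sm=C_SM):
    """W* = N * C_SM * (V_c*)^2 / 2."""
    return 0.5 * n * c_sm * v_ref ** 2

# The (filtered) energy error of each arm is fed to its PI controller,
# which outputs the arm power reference P_xy.
v_measured = 900.0 + np.random.default_rng(3).normal(0.0, 5.0, N)
print(arm_energy_reference() - arm_energy(v_measured))
```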
For the sake of simplicity, it will be assumed that, whenever the converter is in operation, the grid voltage is available, which does not necessarily hold for the load voltage. Therefore, the grid current can be used to maintain the energy content at the desired value. Nevertheless, the power requirements of the energy controllers are in general arbitrary, and so would be the grid current requirements of individual arms, which would result in asymmetrical grid currents, as well as the appearance of grid-frequency currents in the load. To prevent such a scenario, arms are grouped into bundles of three, where each bundle represents the arms connected to the same load terminal, as illustrated in Fig. 2.a.

FIGURE 2. a) Grid current control for appropriate power distribution among the bundles of arms connected to the same load terminals; b) generation of additional arm current components i xy at grid frequency, corresponding to the power requirements of individual arms P xy ; c), d) illustration of the need for modification of the additional arm currents i xy entering the same load/grid terminal; e) proper power distribution among the arm bundles connected to the same grid terminals through the additional arm currents at load frequency δi xy ; f) illustration of the need for modification of the additional load-frequency currents. Principles of the current modification are elaborated in Section IV.

The total power requirement of a bundle equals the sum of the power requirements of the energy controllers ( P xy ) and the average power delivered to the load through the respective node (P load,y ):

P y = P Ay + P By + P Cy + P load,y   (14)

The power reference thus obtained can be realized using a set of symmetrical grid currents and the positive sequence of the grid voltage. The resulting arm current components are calculated as in (15), where x refers to the grid terminals ABC, whereas y refers to the load terminals 123. This notation is used throughout the rest of the paper. The resulting current components sum up to zero at the load terminals, whereas their sum at the grid terminals corresponds to the total current drawn from the grid. Therefore, using the grid current components ensures that the power delivered to the three bundles corresponds to the total power references requested by the energy controllers within each bundle. However, nine arm energies have to be controlled, which calls for exploiting other degrees of freedom from (13). Due to the assumption that the grid voltage is constantly available during converter operation, using the P 1 power components from (13) seems to be the most reasonable choice. To create this power component within an arm, it is necessary to inject an additional arm current component at the grid frequency, as illustrated in Fig. 2.b. This current component should be in phase with the respective grid voltage, as this results in a minimal current amplitude for a given power reference. Referring to Fig. 2.b, the additional arm current references are generated in the way described by (16). Due to the arbitrary magnitude of these current references, as well as the asymmetrical system of grid voltages under unbalanced grid conditions, these currents do not generally sum up to zero, and would appear in the load current, interfering with the load current controller. For such a reason, a certain modification of the current references is required, to ensure that these currents sum up to zero at terminal 1, as illustrated in Fig. 2.c.
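As a rough illustration of (14)-(15), whose explicit expressions are not reproduced in the text above, the sketch below turns a bundle power requirement P_y into three symmetrical grid-frequency arm current references in phase with the (assumed balanced) grid voltages. The amplitude expression 2·P_y/(3·V̂_g) is an assumption consistent with equal power sharing among the three arms of a bundle, not the paper's equation (15).

```python
import numpy as np

def bundle_current_refs(P_y, v_grid_amp, omega_g, t,
                        phases=(0.0, -2 * np.pi / 3, 2 * np.pi / 3)):
    """Symmetrical grid-frequency current references for the three arms (A, B, C)
    of the bundle attached to load terminal y. Assumes balanced grid voltages of
    amplitude v_grid_amp; each arm then delivers P_y/3 on average."""
    amp = 2.0 * P_y / (3.0 * v_grid_amp)
    return {ph: amp * np.cos(omega_g * t + phases[i]) for i, ph in enumerate("ABC")}

# Example: a 30 kW bundle requirement with a 3.3 kV peak phase voltage at 50 Hz
t = np.linspace(0, 0.02, 5)
refs = bundle_current_refs(P_y=30e3, v_grid_amp=3300.0, omega_g=2 * np.pi * 50, t=t)
print({k: np.round(v, 2) for k, v in refs.items()})
```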
One should also recall that these current references are generated in-phase with its respective grid voltages, so as to minimize their magnitude. Therefore, the modification imposed on these currents should be such that, apart from the zerosum condition, the deviation from the original references is minimized, thus obtaining a set of currents that impose the minimal thermal stress on the converter arms. In addition to not being observed at the load terminals, these additional current components should neither be observed at the grid terminals. This condition is shown in Fig. 2.d, where the arms connected to the same grid terminals are grouped. Previously modified current references i * xy are further modified so that they sum-up to zero, and thus remain unobserved at both converter terminals. Current modification, VOLUME 11, 2023 as in the previous case, has as its objective a minimal deviation from the original references, as well as the zero-sum constraint. In the following section it will be demonstrated how these objectives can be ensured effectively in a simple manner. Due to the presented constraints, generated arm currents are not mutually independent. There are in total 6 nodes (ABC and 123), where these currents should sum-up to zero. Nevertheless, five out of six of these constraints are linearlyindependent. Therefore, only four out of nine arm current references generated this way are linearly-independent. To achieve additional energy control actions, power components P 2 , P 3 , and P 4 from (13) should be considered. Depending on the operating conditions of the converter different scenarios are possible. In the following analysis an example of an M3C feeding a synchronous machine from a 3PH ac grid is used, without loss of generality. A. SCENARIO I-USING THE LOAD VOLTAGES In case when the load voltages are available, e.g. a synchronous machine is operating, using the load voltages for additional energy control actions seems to be the best choice. The main reason for this is the fact that the high load voltages would require relatively low additional arm current component. In addition, injection of the common-mode voltages for the purpose of energy control, while the load voltages are present, would necessitate higher voltage/energy reserve in the arms. For those reasons, usage of the load voltages is beneficial, whenever possible. To perform additional energy control actions using the load voltages, arms connected to the same grid terminals are grouped into bundles, as illustrated in Fig. 2.e. Similarly as before, total power requirement of a bundle is calculated as: These power references are satisfied by introducing the arm current components at load frequency, as shown in (18). Due to the symmetry of the load voltages, thus generated currents do sum-up to zero at the grid terminals. Nevertheless, generated current references should also satisfy the zero-sum constraint at the load terminals. Therefore, a modification of the references should be performed, as illustrated in Fig. 2.f. Arms are regrouped into those connected to the same load terminals, and the modification to the references obtained by (18) is performed. Finally, the total arm current reference is obtained when the three current components are summed together with the load current reference. Thus generated reference is passed to the arm current controller, whose implementation is not covered in this paper. B. 
SCENARIO II-USING THE COMMON-MODE VOLTAGE Arm energy control has to be performed even in case when the converter is in idling mode, i.e. connected to the grid, but not generating any voltage across its load terminals. In such a case, another scenario should be applied instead of injecting the load-frequency arm current components. The only possibility, according to (13), is injection of the common-mode voltage v comm either at the grid, or at the load frequency. In the first case, common-mode voltage can interact with the existing grid currents, that are covering the converter losses, and thus provide the energy control functionality. In the latter case, additional load-frequency arm currents would have to be generated. While one can opt for any of the two possibilities, the authors of the paper opted for the first one, due to its simplicity. Under both balanced and unbalanced grid conditions, grid currents are equally distributed among the arms connected to the respective phase. Therefore, as in the case of energy control using the load voltages, a bundle of arms connected to the same grid terminal will be considered as a unit (c.f. Fig. 2.f). Therefore, to produce the required power reference of a bundle, common-mode voltage in-phase with the grid current of the respective terminal should be injected: (20) The amplitude of the common-mode voltageV comm is determined depending on the available voltage reserve, and it case when the load voltage is not generated, it can be as high as the load voltage amplitude. Three bundles of arms will have different power requirements, and it is necessary to align the common mode voltage with the current of the phase with the highest power requirements. Therefore, the total common-mode voltage injected is determined as: To summarize, in case when the load voltage is available, energy control is achieved by setting the arm current reference as in (19). Otherwise, the arm current reference is deprived of the term δi * xy , whereas the common-mode voltage, defined by (20) and (21), is introduced. Therefore, the switch-over between the two control scenarios is easily achieved in the control algorithm. IV. CURRENT MODIFICATION AND APPLICATION TO THE M3C To control the arm energies without influencing the grid or load currents, it is necessary to modify the additional arm current components so that they sum-up to zero at corresponding nodes. This was illustrated in Fig. 2.c, where additional arm currents i A1 , i B1 , and i C1 do not generally sum-up to zero, and their sum would appear in the load (terminal 1). Graphical representation of such a case is presented in Fig. 3.a, where phasors of the grid voltages, as well as the additional arm current components are shown. It is obvious that due to different power requests from energy controllers, references of the additional arm currents do not sum-up to zero, which imposes the need for their modification. Similar issue occurs in a standard MMC, and has already been analysed in [10], [11], [27], [28], and [29]. Reference [28] shows two possible approaches in modifying the original current references, illustrated in Fig. 3.b and Fig. 3.c. The first approach modifies the original set of references so that the active power references are preserved, while the zero-sum constraint is ensured. The second approach (c.f. Fig. 3.c) does not preserve the active power references, however it represents the solution that has the minimal deviation with respect to the original references, under the zero-sum constraint. 
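A minimal sketch of this second, minimal-deviation approach; its concrete form, subtracting the mean of the three references, is described with Fig. 4 below and amounts to a least-squares projection onto the zero-sum subspace.

```python
import numpy as np

def zero_sum_modify(refs):
    """Project three current references onto the zero-sum subspace by removing
    their mean; this is the closest zero-sum set in the least-squares sense."""
    refs = np.asarray(refs, dtype=float)
    return refs - refs.mean()

i_orig = np.array([12.0, -3.0, 5.0])       # power-driven references for one node [A]
i_mod = zero_sum_modify(i_orig)
print(i_mod, round(i_mod.sum(), 12))       # sums to zero; deviation from i_orig is minimal
```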
Consequently, this solution yields the modified current references with the lowest magnitude and the lowest deviation from the original ones. Fig. 3.d illustrates the function A_2 used as a measure of deviation; the solution in Fig. 3.c is obtained by minimizing A_2 under the zero-sum constraint. Although this solution does not fully preserve the power references, it is characterized by a simpler implementation than the first one, as well as by validity under both balanced and unbalanced grid conditions, as demonstrated in [27]. Fig. 4 shows the implementation of the current modification block. The new set of references is obtained when the average value of the three currents is subtracted from each of the initial references. In this way, the modified set of references is the best possible approximation of the initial set under the zero-sum constraint [28]. Therefore, this current modification is used in all the cases where the zero-sum constraint is present. The implementation of the arm energy control of the M3C is shown in Fig. 5. Arm energies are calculated from the measured voltages of the SMs within an arm, and energy pulsations at specific frequencies are filtered out. The measured value is compared with the energy reference of the respective arm, and the error is fed to a PI controller that outputs the power reference P_xy, as shown in Fig. 5.a. The next control action is the control of the energies of the bundles of arms connected to the same load terminals. It is based on (14) and (15), and its implementation is shown in Fig. 5.b. The output of this control action is a set of symmetrical arm current references that make up the grid current. The implementation of the internal energy control action, i.e. the one based on the interaction of the grid voltage components and the additional arm currents that are not observed at the converter terminals, is illustrated in Fig. 5.c. It can be seen that it contains only simple current-modification blocks and a regrouping of the references. The final control action depends on the operating mode of the converter. In this paper, the M3C is assumed to feed a 3PH synchronous machine, which can have three distinct operating modes: standby mode, generator mode, and motor mode. In either motor or generator mode, the voltage at the machine terminals is available and can be used for energy control, as discussed in the previous section. This mode of converter operation, in terms of energy control, is referred to as Scenario I, and its implementation is illustrated in Fig. 5.d. When the machine is in standby mode (not operating), or at very low speed and thus low voltage, a solution using the common-mode voltage is preferable. In this case, the common-mode voltage component is adjusted in phase and amplitude so that it interacts the most with the grid current of the bundle that has the highest power reference (refer to Fig. 2.e). This mode of operation, in terms of energy control, is referred to as Scenario II and is illustrated in Fig. 5.e. While different transitions between the two scenarios might be adopted, the authors of this paper opted for the simplest one: for operating modes where the load voltage was higher than half of its rated value, Scenario I was used for energy control; otherwise, Scenario II was used. As demonstrated later, the hard switch between the two did not cause any problems in the converter control. FIGURE 5.
Implementation of the arm energy control of an M3C: a) nine PI energy controllers for nine arms of the converter; b) implementation of the control action that manages the power distribution among the arm bundles connected to the same load terminals; c) generation of the additional arm current components i xy and their modification to remain internal to the converter; d) Scenario I of the additional energy control action-generation of the load-frequency additional arm current components δi xy , and their modification to remain unobserved at the converter terminals; e) Scenario II of the additional energy control action -generation of the common-mode voltage at grid frequency. As described by (19), in case of Scenario I the current references obtained from Fig. 5.b, c and d are summed-up and fed to the arm current controllers. In case of Scenario II, the common-mode voltage reference is created, whereas the current references δi * xy from Fig. 5.d are not used in the total current reference. V. REAL-TIME HIL SIMULATOR DESCRIPTION To facilitate the development of control concepts for the MMC, a real-time (RT) HIL simulator was developed within the laboratory, described in [30]. The control platform used in the MMC and RT HIL is based on ABB's AC 800PEC family of controllers, which aim at performing the top-level MMC control tasks, such as converter pre-charging, control of terminal voltages and currents, energy control, exchanging references and measurements with the SM, etc. Converter SMs inside the HIL simulator are realized in a hybrid manner. Namely, control part of the SMs are realized by the so-called Control cards, shown in Fig. 6, where the local SMs controller is implemented, based on TI TMS320F28069 digital signal processor (DSP). Control cards also host the communication interface between the SM controller and the top-level control. Power parts of the SMs, i.e. IGBTs, gate-drivers, DC capacitor, are modelled within a Plexim RT-Box 1, and there are in total eight SMs modelled within a single RT-Box. Control cards are interfaced to an RT-Box through the interface board, whereas they are communicating with the top-level controller (AC 800PEC) using the fibre-optic interface. Such a structure represents a single arm of the converter, and is referred to as Arm unit in Fig. 6. There are in total nine such units, each one modelling an arm. Due to the same communication interface and the same software running on the DSP of a control card, as it would in the real SM, it is achieved that there is no difference between the HIL-modelled arm and an arm in the real converter, from the perspective of the top-level controller. Grid and load side of the converter, as well as the interconnections between the arms are modelled using a separate RT-Box and interface board, labelled as Application unit in Fig. 6. Thereby, all the power stages of the converter are modelled in the RT HIL simulator, whereas the control hardware corresponds to the one found in the real converter. Consequently, top-level control methods, as well as the SM-level control (implemented on DSPs), can be safely developed and tested using such a system, without making difference between the real converter and the HIL system. Even though the HIL simulator is not the exact replica of the real converter, it can be considered as a valid platform for control verification. Namely, control hardware and software on both converter and SM level are identical in the HIL simulator and real converter. 
Modelling the power stages of the converter in the RT-Box captures all the effects relevant for the current control, the energy control, and the converter operation. Possible deviations with respect to the actual converter can be expected in the unmodelled dynamics of the power switches and in deviations of the SM capacitances and arm inductance. However, none of these effects can have a profound influence on the energy control. VI. REAL-TIME HIL RESULTS AND EVALUATION The presented control concept was tested on the real-time (RT) HIL simulator, modelling an M3C that interfaces a medium-voltage grid and a synchronous machine, as a typical use case of the M3C [6], [31]. Parameters of the grid, converter, and machine are provided in Table 1. FIGURE 8. Results obtained from the RT HIL simulator. The leftmost plot shows the performance of the energy control method during the speed reversal of the synchronous machine. Even though the energy control changes from Scenario I to Scenario II, and back to Scenario I, the arm voltages remain unaffected and follow their reference. The middle plot demonstrates the effectiveness of the control principle in the generator mode of operation. The rightmost plot shows that, despite a sudden de-loading followed by the machine slowing down, the effectiveness of the energy control is not compromised. The first test scenario is such that the machine is in standby mode, i.e. at zero speed. Even though the machine is not operating, the converter should be synchronized to the grid and control its arm energy content. The leftmost part of Fig. 7 shows that the converter maintains the arm voltages around the predefined value during this operating mode. In addition, at t = 1.12 s the voltage references of the SMs in two arms are modified from V*_c = 680 V to V*_c = 710 V and V*_c = 650 V, and set back to the original reference at t = 2 s. In both cases the voltage references are attained within t = 200 ms. The next scenario is shown in the middle plot of Fig. 7. It represents the machine speeding up to the rated speed, starting at t = 6.2 s, and the load torque rising to the rated value, starting at t = 6.7 s. It can be observed that the arm voltages remain constant throughout the whole transition, which verifies the good performance of the presented control method. The rightmost plot in Fig. 7 shows converter operation at the rated speed and rated load torque. As in the first test scenario, the voltage references of two arms were changed, and the energy control was able to ensure reference tracking. Fig. 8 demonstrates the converter's capability to control its energy content under different dynamic conditions, namely during the speed reversal (leftmost plot), at negative speed of the machine (middle plot), as well as during the machine de-loading and slowing down (rightmost plot). It is important to notice from Fig. 7 and Fig. 8 that the grid and load currents remain unaffected by the energy control actions, and that the transition between the different operating modes of the machine, and thus between the two energy control scenarios, remains seamless. Finally, to verify the performance of the presented control method under unbalanced grid conditions, a single-phase-to-ground fault in the grid is simulated during both the standby mode of operation (cf. Fig. 10) and the motor mode of operation of the machine (cf. Fig. 9). In both cases the arm voltages remained unaffected, thus demonstrating the capability of the two control scenarios to work properly under unbalanced grid conditions.
VII. DISCUSSION The presented results verify the effectiveness of the proposed energy control method under different imbalances, speed (frequency) reversal, unbalanced grid conditions, as well as under no-load operation. Compared to the methods based on double αβ0 transformations, the implementation of the method is far simpler, using only the real-time values of the grid and load quantities, without αβ0 and dq transformations. Compared to the solution proposed by [18], bulky matrices with real-time variable elements are omitted, whereas the transition between the low-frequency (low load voltage) and high-frequency operating modes can be performed seamlessly. Additionally, the proposed solution was verified under unbalanced grid conditions, even under combined unbalance and no-load conditions, in contrast to [18]. Comparing the solution against its rival in terms of simplicity [22], the solution in [22] can neither ensure an arbitrary energy control under normal conditions, nor ensure decoupling of the internal balancing currents from the terminal currents under grid faults. Therefore, although the solution in [22] seems to be the simplest one, it in fact does not perform well under all operating modes. FIGURE 10. Performance verification of the energy control under unbalanced grid conditions. The synchronous machine is in the standby mode, so the energy control is achieved using the common-mode voltages (Scenario II). A detailed comparison of the balancing performance of the methods mentioned above, including the direct arm control balancing method introduced here, can be found in [32]. The authors conducted a theoretical comparison as well as real-time HIL tests to establish the characteristics of each method. The evaluated aspects include the dynamic response, the degrees of freedom, the low-voltage ride-through performance, as well as the mathematical and implementation simplicity. One might argue that the proposed solution does not preserve all the power references yielded by the energy controllers, due to the current modification blocks. While this claim is true, the current modification block is optimized so as to minimize the circulating currents imposed on the converter arms, while still ensuring the energy balancing under all conditions. Finally, the simplicity of implementation, coupled with the fact that the current modification is optimized to reduce the current stress on the converter while ensuring an arbitrary energy control, makes this solution unique, simple to understand and implement, and robust across various operating modes. VIII. CONCLUSION This paper presents an extension of the direct arm energy control method, first derived for the standard MMC, to the M3C. The energy control method presented in this paper is derived using an intuitive approach, resulting in a simple implementation, which might be of great value to control engineers and researchers entering the field. Besides ensuring that the energy control actions are not observed at the converter terminals, the presented control method proves to be valid under different operating conditions of a synchronous machine, as well as during transients. It was demonstrated that the arm energies can be arbitrarily controlled, and that the control method is equally applicable under balanced and unbalanced grid conditions. Dr. Dujić received the First Prize Paper Award from the Electric Machines Committee of the IEEE Industrial Electronics Society, in 2007.
In 2014, he received the Isao Takahashi Power Electronics Award for outstanding achievement in power electronics, and in 2018, the EPE Outstanding Service Award from the European Power Electronics and Drives Association. He is an Associate Editor of the IEEE TRANSACTIONS ON POWER ELECTRONICS.
9,436
2023-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Homotopy Analysis Method to determine Magneto Hydrodynamics flow of compressible fluid in a channel with porous walls abstract: In this article the magnetohydrodynamics (MHD) boundary layer flow of a compressible fluid in a channel with porous walls is investigated. It is shown that the nonlinear Navier-Stokes equations can be reduced to an ordinary differential equation by using similarity transformations and boundary layer approximations. An analytical solution of the resulting nonlinear equation is carried out by the Homotopy Analysis Method (HAM). In addition to applying HAM to the obtained equation, the result of this method is compared with a numerical analysis, namely the Boundary Value Problem method (BVP), and good agreement is seen. The effects of the Reynolds number and the Hartman number are investigated. Introduction Magnetohydrodynamics is essential in plasma physics and astrophysics and studies the motion of electrically conducting media in the presence of a magnetic field. MHD effects are important in natural systems, including the Earth's core and solar flares, and in the engineering world, for example in the electromagnetic casting of metals and the confinement of plasmas [1]. Recent reactor designs in fusion engineering, which commonly involve the use of electrically conducting liquid metals, are of much interest [2]. In recent decades many attempts have been made to develop analytical methods for solving such nonlinear equations. One of them is the perturbation method [3], which depends strongly on a so-called small parameter defined according to the physics of the problem. Thus, it is worth developing new analytical techniques which are independent of defining a small parameter, such as the Homotopy Perturbation Method (HPM) [4,5,6,7] and the Variational Iteration Method (VIM) [8,9]. In fact, the perturbation method cannot provide a simple way to adjust and control the region and rate of convergence of a particular approximated series. Liao introduced the basic idea of homotopy in topology to propose a general analytical method for nonlinear problems, namely the Homotopy Analysis Method [10,11], which does not need any small parameter. This method has been successfully applied to solve many types of nonlinear problems [12,13,14]. In order to determine the velocity components, HAM is applied to solve the resulting nonlinear differential equation. Then the solution is compared with the Boundary Value Problem method. The ordinary nonlinear differential equation is derived from the governing differential equations by using a similarity transformation. Description of the problem The two-dimensional MHD flow of a compressible fluid in a porous channel with suction and injection is investigated. The geometry of the problem is shown in figures (1-a) and (1-b). The x-axis is taken along the centerline of the channel and the y-axis transverse to it. The flow is symmetric about both axes. The porous walls of the channel are at y = H/2 and y = −H/2. The fluid injection or suction takes place through the porous walls with velocity V_0/2. Here V_0 > 0 corresponds to suction and V_0 < 0 to injection. Let u and v be the velocity components along the x- and y-axes, respectively, and let B_0 be a uniform static magnetic field in the y-direction. The compressible electrically conducting fluid that flows through the channel in the axial direction will induce a magnetic field in the medium in the presence of the applied magnetic field.
The magnetic Reynolds number (Re_m = σ μ_m U L) represents the relative strength of the induced field. In the above relation, U and L are the characteristic velocity and length scale, and μ_m is the magnetic permeability. If the magnetic Reynolds number is small, the induced magnetic field can be neglected [17]. It can be assumed that the electric field is zero, as no external electric field is applied and the effect of polarization of the ionized fluid is negligible. The equations for the MHD boundary layer flow of a compressible fluid are: Assuming symmetry about the x-axis and no-slip conditions at y = H/2, we have: Equation (4) introduces the non-dimensional parameters used to rewrite Equation (2) in non-dimensional form, in which f(y*) is the assumed similarity function. Implementation of the Homotopy Analysis Method For HAM solutions, we choose the initial guess and the auxiliary linear operator in the following form: where c_i (i = 1, 2, 3, 4) are constants. Let p ∈ [0, 1] denote the embedding parameter and h the non-zero auxiliary parameter. We then construct the following equations: Zeroth-order deformation equations For p = 0 and p = 1 we have: When p increases from 0 to 1, F(y*; p) varies from f_0(y*) to f(y*). By Taylor's theorem and using equation (13), F(y*; p) can be expanded in a power series of p as follows: in which h is chosen in such a way that this series is convergent at p = 1; therefore, through equation (14), the solution is recovered at p = 1. Now we determine the convergence of the result, the differential equation, and the auxiliary function according to the solution expression. So let us assume H(y*) = 1 (3.14). We have found the solution with Maple's analytic solver. The first deformation is presented below: f_1(y*) = (2/35) h Re y*^7 + (1/10) h M^2 y*^5 + (−(3/280) h Re − (1/20) h M^2) y*^3. The solutions f(y*) were too long to be reproduced here; therefore, they are shown graphically. Convergence of the HAM solution As pointed out by Liao [11], the convergence region and rate of the solution series can be adjusted and controlled by means of the auxiliary parameter h. To show the influence of h on the convergence of the solution, we plot the so-called h-curve of f′′′(0), as shown in Figures 2(a-c). The solutions converge for values of h corresponding to the horizontal line segment in the curve. In order to investigate the range of admissible values of the auxiliary parameter h for various values of Re and M, the h-curves were derived from 9th-order approximations. Figures 2-4 show the obtained admissible values of the auxiliary parameter h. In our case study, it is easy to discover that h = −1.5 is a suitable value, which is used for values of 0.1 < M < 0.9 and −5 < Re < 5. Result and discussion In the present study the HAM method is applied to obtain an explicit analytic solution for a compressible fluid in a channel in the presence of a uniform magnetic field (Figure 1). First, a comparison between the applied method and the numerical method for different values of the active parameters is shown in Figures 2. The numerical solution is performed using the algebra package Maple 16.0. The package uses a fourth-order Runge-Kutta procedure for solving the nonlinear boundary value (B-V) problem [18,19]. The validity of HAM is shown in Table 1 and Table 2. In these tables, the %Error is defined as the relative deviation of the HAM solution from the numerical one. The results prove to be precise and accurate in solving a wide range of mathematical and engineering problems, especially fluid mechanics cases.
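The %Error formula itself is not reproduced in the extracted text; a sketch of a standard percent-error comparison between the HAM approximation and the numerical (BVP) solution, consistent with the validation tables described above, might look as follows (the sample values are illustrative, not taken from the paper):

```python
import numpy as np

def percent_error(f_ham, f_bvp):
    """Pointwise percent error of the HAM solution relative to the numerical
    (BVP) solution: 100 * |f_HAM - f_BVP| / |f_BVP|."""
    f_ham = np.asarray(f_ham, dtype=float)
    f_bvp = np.asarray(f_bvp, dtype=float)
    return 100.0 * np.abs(f_ham - f_bvp) / np.abs(f_bvp)

# Illustrative values of the similarity function at a few grid points y*
f_ham_vals = np.array([0.1242, 0.3651, 0.7013])
f_bvp_vals = np.array([0.1240, 0.3648, 0.7010])
print(percent_error(f_ham_vals, f_bvp_vals))  # percent error at each point
```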
This accuracy gives us high confidence in the validity of the solution and reveals excellent agreement at engineering accuracy. The investigation is completed by depicting the effects of some important parameters to evaluate how they influence the flow. In figures (3) to (6) the effects of the Hartman number and the Reynolds number on the velocity components f and f′ are investigated. From figures (3) and (5), it is observed that as the Reynolds number and the Hartman number increase, the similarity function f decreases. In figures (4) and (6), from the center point y* = 0 toward the suction side, f′ decreases as the Hartman number and the Reynolds number grow, but beyond that region it increases. Hence the profile of the velocity component in the x direction has a common point that occurs approximately at y* = 0.25. This point can therefore be interpreted as a critical point in the formation of the x-direction flow. Conclusion In this research, an analytic method for solving the two-dimensional magnetohydrodynamics (MHD) boundary layer flow of a compressible fluid has been presented. The governing differential equations were reduced to a nonlinear ordinary differential equation, which was solved using the Homotopy Analysis Method (HAM). HAM was then compared with the Boundary Value Problem (BVP) method as a numerical solution. The effects of different Reynolds numbers and Hartman numbers were investigated for the similarity functions f and f′ used to determine the velocity components. It was found from the results that, as the Hartman number and the Reynolds number changed, a common point appeared in the profile of the velocity component in the x direction. When the injection velocity increased, it was clear that the suction assisted the structural formation of the y-direction flow. This research also showed that HAM provides high accuracy in solving different problems in the engineering field.
2,088.8
2016-01-01T00:00:00.000
[ "Engineering", "Physics", "Environmental Science" ]
Association of Complement Receptor 2 Gene Polymorphisms with Susceptibility to Osteonecrosis of the Femoral Head in Systemic Lupus Erythematosus Osteonecrosis of the femoral head (ONFH) is a complex and multifactorial disease that is influenced by a number of genetic factors in addition to environmental factors. Some autoimmune disorders, including systemic lupus erythematosus (SLE), rheumatoid arthritis (RA), and inflammatory bowel disease (IBD), are associated with the development of ONFH. Complement receptor type 2 (CR2) is membrane glycoprotein which binds C3 degradation products generated during complement activation. CR2 has many important functions in normal immunity and is assumed to play a role in the development of autoimmune disease. We investigated whether CR2 gene polymorphisms are associated with risk of ONFH in SLE patients. Eight polymorphisms in the CR2 gene were genotyped using TaqMan™ assays in 150 SLE patients and 50 ONFH in SLE patients (SLE_ONFH). The association analysis of genotyped SNPs and haplotypes was performed with ONFH. It was found that three SNPs, rs3813946 in 5′-UTR (untranslated region), rs311306 in intron 1, and rs17615 in exon 10 (nonsynonymous SNP; G/A, Ser639Asn) of the CR2 gene, were associated with an increased risk of ONFH under recessive model (P values; 0.004~0.016). Haplotypes were also associated with an increased risk (OR; 3.73~) of ONFH in SLE patients. These findings may provide evidences that CR2 contributes to human ONFH susceptibility in Korean SLE patients. Introduction Osteonecrosis of the femoral head (ONFH) is a complex and multifactorial disease which can be affected by combined genetic factors with relatively small effect in addition to environmental factors [1]. A variety of conditions, such as use of corticosteroids, alcohol abuse, and rheumatic diseases were reported as risk factors for secondary ONFH. Among autoimmune diseases, systemic lupus erythematosus (SLE) has shown higher incidence of ONFH ranging from 5 to 30%, than that of general population and ONFH, in turn, results in significant morbidity [2]. Although corticosteroid use has been reported as a significant predictive factor for developing ONFH in patients with SLE [3], there are also reports of patients with SLE complicated by ONFH, who have not taken corticosteroid [4,5]. This implicates possible role of the disease progression itself or underlying genetics. Although some studies reported that immunologic factors including interleukins and tumor necrosis factors might develop ONFH [6,7], most genetic studies have focused on gene polymorphisms affecting the coagulation and fibrinolytic systems [8,9]. Moreover, few genetic studies were performed to reveal their roles in the development of SLE ONFH [10]. Human complement receptor 2 (CR2) is encoded by a single gene containing 20 exons, which is located at chromosome 1q32.2 [11] and is expressed on mature B or follicular dendritic cells (FDCs) [12]. CR2 was known to bind C3 degradation products generated during the process of complement activation [13] and some studies suggested that CR2 might play an important role in immunity [14]. Therefore, given the pleiotropic effects of complement receptor, we investigated whether polymorphisms of the CR2 gene are associated with the susceptibility of SLE ONFH. [15]. Genomic DNA was isolated from the peripheral blood of each participant using a FlexiGene DNA Kit (QIAGEN, Valencia, CA). 
The current study was approved by the Institutional Review Board, and all participants in this study provided their informed consent. 2.2. Genotyping. Eight single nucleotide polymorphism (SNP) sites in the CR2 gene were selected based on their locations, potential relevance to disease, and published data [11,16,17]. The genotypes were determined using a TaqMan fluorogenic 5′-nuclease assay with predesigned or custom TaqMan primer/probe sets (Applied Biosystems, Foster City, CA). For genotyping of the polymorphic sites, amplification primers and probes were designed for TaqMan assays (Applied Biosystems, Foster City, CA). The primer and probe sequences are indicated in Table 1. We designed both the PCR primers and the minor groove binder (MGB) TaqMan probes using Primer Express (Applied Biosystems). All reactions were performed following the manufacturer's protocol. Details regarding the PCR reaction and TaqMan assay have been described previously [9]. The fluorescence data files from each plate were collected and analyzed using automated allele-calling software (SDS 2.2, Applied Biosystems). Statistical Analyses. The threshold for the Hardy-Weinberg equilibrium (HWE) P value was set at >0.05. Allelic and genotype association tests in the case-control comparison were calculated using the χ² test or Fisher's exact test. Odds ratios (ORs) and corresponding 95% confidence intervals (CIs) for the case-control data were also calculated. Genotypes were given codes of 0, 1, and 2; 0, 1, and 1; and 0, 0, and 1 in the codominant, dominant, and recessive models, respectively. The strength of linkage disequilibrium (LD) among the pairs of SNPs was evaluated using Haploview 4.2 software (http://www.broad.mit.edu/mpg/haploview/). Haploview software was also used to calculate haplotype structures and their frequencies within the LD blocks. Haplotypes with frequencies < 5% were excluded from the subsequent analysis. Continuous variables were compared by Student's t-test or ANOVA. All analyses were two-tailed, and P values < 0.05 were considered statistically significant. Genetic Association of CR2 SNPs with SLE ONFH Susceptibility. To determine whether CR2 gene polymorphisms might contribute to the susceptibility to ONFH development in Korean SLE patients (SLE ONFH), the sample of 150 SLE and 50 SLE ONFH Korean patients was genotyped for eight SNPs spanning a 39 kb region of the CR2 gene, from 0.6 kb upstream to 2.8 kb downstream of the gene (Figure 1). We selected 8 informative SNPs that included 1 regulatory SNP (rs3813946 in the 5′-UTR; T/C), 1 exonic SNP (rs17615 in exon 10; G/A, Ser639Asn), and 6 haplotype-tagging intronic SNPs that tagged the two haplotype blocks (Figure 1(b)). The resulting SNP data, including location, amino acid substitution, genotype, MAF, and HWE of all analyzed polymorphisms, are presented in Table 2. Table 3 shows a comparison of genotype frequencies between the case and control groups. When the genotype distributions of the SLE (control) and SLE ONFH (case) groups were compared, the SNPs rs3813946 in the 5′-UTR (untranslated region), rs311306 in intron 1, and rs17615 in exon 10 (nonsynonymous SNP; G/A, Ser639Asn) of the CR2 gene, located in block 1, demonstrated evidence for association with the risk of ONFH under the recessive model (P values; 0.004~0.016). None of the block 2 SNPs showed evidence for association (Table 3).
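To make the recessive-model analysis above concrete, here is a minimal sketch of how genotype counts can be collapsed under the recessive coding (0, 0, 1) and tested with Fisher's exact test, together with an odds ratio and a Wald-type 95% CI. The genotype counts are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import fisher_exact

def recessive_table(case_counts, control_counts):
    """Collapse genotype counts (AA, Aa, aa) into a 2x2 table under the
    recessive model: aa (coded 1) versus AA + Aa (coded 0)."""
    case_aa, case_other = case_counts[2], case_counts[0] + case_counts[1]
    ctrl_aa, ctrl_other = control_counts[2], control_counts[0] + control_counts[1]
    return np.array([[case_aa, case_other], [ctrl_aa, ctrl_other]])

def odds_ratio_ci(table, z=1.96):
    """Odds ratio with a Wald-type 95% confidence interval."""
    a, b = table[0]
    c, d = table[1]
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = np.exp(np.log(or_) - z * se), np.exp(np.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical genotype counts (AA, Aa, aa) for 50 cases and 150 controls
cases = (20, 18, 12)
controls = (80, 60, 10)
table = recessive_table(cases, controls)
or_, ci = odds_ratio_ci(table)
_, p = fisher_exact(table)
print(f"OR={or_:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f}), P={p:.4f}")
```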
Association of CR2 SNP Haplotypes with SLE ONFH Susceptibility. Because LD is believed to be highly structured, with conserved blocks of sequence separated by hotspots of recombination, the function of a conserved haplotype may result from interaction between polymorphisms within a block. Therefore, SNP haplotypes were constructed on the basis of the genotypes of the SNPs residing in each LD block (Figure 1(b)). Four major haplotypes with frequencies > 0.05 were predicted in LD block 1, and the frequency of each haplotype was compared between SLE and SLE ONFH patients (Tables 4 and 5). Haplotype 3 (ht3: T-G-T-G-A) and haplotype 4 (ht4: C-C-T-A-A) were associated with an increased risk (OR; 3.73~) of ONFH in SLE patients under the recessive analysis model (Table 4). None of the haplotypes located in block 2 showed evidence for association (data not shown). These results suggest that polymorphisms located in the extracellular domain of the CR2 gene may be functionally involved in the increased susceptibility to ONFH in SLE patients. Discussion Although ONFH is a common complication that hampers the treatment of SLE, details of the pathogenesis are not well established. [Figure 1 caption: … of each SNP pair are shown. Two haplotype blocks were constructed based on the strength of LD among the SNP pairs. The first 5 SNPs formed the 24 kb block 1 and the next SNPs formed block 2 (see Table 5).] Because venous thrombosis and the resultant blood flow obstruction mediated by thrombophilia or hypofibrinolysis are generally assumed to lead to ONFH [18,19], most gene studies have focused on gene polymorphisms affecting the coagulation and fibrinolytic systems [8,9]. Recent studies, however, reported that immunologic factors might contribute to the development of ONFH [6,7], and few genetic studies have been performed to reveal their roles in the development of SLE ONFH [10]. Complement receptor type 2 (CR2) is a membrane glycoprotein that binds C3 degradation products generated during complement activation, specifically iC3b, C3dg, and C3d. It has many important functions in normal immunity, such as targeting antigen to follicular dendritic cells in secondary lymphoid organs and cooperating with the B cell receptor to activate B cells [13]. CR2 is also assumed to play a role in the development of autoimmune disease [16]. Therefore, given these pleiotropic effects of the complement receptor, we investigated whether polymorphisms of the CR2 gene were associated with the development of SLE ONFH. In this study, the SNPs rs3813946 in the 5′-UTR, rs311306 in intron 1, and rs17615 in exon 10 (nonsynonymous SNP; G/A, Ser639Asn) of the CR2 gene are associated with the susceptibility to SLE ONFH under the recessive model. Haplotype T-G-T-G-A (ht3) and haplotype C-C-T-A-A (ht4) (SNP order of haplotypes: rs3813946-rs311306-rs1567190-rs17615-rs17045328) were also associated with an increased risk (OR; 3.73~) of SLE ONFH (Table 4). However, when the Bonferroni correction for multiple testing was applied, none of the SNPs or haplotypes remained significant. Previously, it was reported that the minor C allele of rs3813946, located in the 5′-UTR of CR2, reduced the transcription of reporter genes in CR2-nonexpressing erythroleukemia cells [13] and CR2-expressing B cells [16]. Under basal conditions, primary B cells from individuals homozygous or heterozygous for the minor allele at rs3813946 demonstrated a trend toward reduced levels of CR2 RNA transcript [16]. Nonsynonymous SNP rs17615, which is located in exon 10 of the CR2 gene, might also have functional effects that could lead to disease.
Exon 10 is located directly in 5 of alternatively spliced exon 10a, which is found in a long CR2 isoform [20]. SNPs in coding region can change pre-mRNA splicing and message stability [21], and rs17615 allelic variant may regulate the relative level of the long and short isoform of CR2 [16]. Alternative splicing can be involved in the process of regulating normal physiological functions as well as pathologies. Genomewide alternative splicing studies estimate that greater than 95% of human multiexon genes express multiple splice isoforms. Interindividual variation in isoforms resulting from SNPs located in splicing regulatory motifs can occur in up to 21% of alternatively spliced genes [22], and the effects of these on splicing efficiency are assumed to contribute significantly to disease severity as well as susceptibility [23]. Alterations of CR2 expression have variable different effects on manifestations of disease in animal models of autoimmunity [24]. In addition, the substitution of asparagine for serine (rs17615; Ser639Asn), which is conserved in mice, rats, and sheep, may be important in receptor function. Therefore, there is a possibility that the dysregulation expression of CR2 is associated with the occurrence of the SLE ONFH. Although the current study showed positive relationship between CR2 polymorphisms and SLE ONFH, it is also limited. First, we had limited basic and clinical data of study samples. Important information such as family history of ONFH, onset of diseases, and medication history was not available in this study. Second, our study sample size is not enough for analysis of the effect of CR2 gene in ONFH with SLE. Although ONFH is one of the most common diseases around the hip joint in Korea, the incidence is relatively low in most countries. According to medical claims data from Korean National Health Insurance Corporation, the estimated average number of annual prevalent cases was 28.9 per 100,000. Moreover,
2,616.6
2016-06-30T00:00:00.000
[ "Biology", "Medicine" ]
Prediction of Dynamical time Series Using Kernel Based Regression and Smooth Splines Prediction of dynamical time series with additive noise using support vector machines or kernel based regression has been proved to be consistent for certain classes of discrete dynamical systems. Consistency implies that these methods are effective at computing the expected value of a point at a future time given the present coordinates. However, the present coordinates themselves are noisy, and therefore, these methods are not necessarily effective at removing noise. In this article, we consider denoising and prediction as separate problems for flows, as opposed to discrete time dynamical systems, and show that the use of smooth splines is more effective at removing noise. Combination of smooth splines and kernel based regression yields predictors that are more accurate on benchmarks typically by a factor of 2 or more. We prove that kernel based regression in combination with smooth splines converges to the exact predictor for time series extracted from any compact invariant set of any sufficiently smooth flow. As a consequence of convergence, one can find examples where the combination of kernel based regression with smooth splines is superior by even a factor of $100$. The predictors that we compute operate on delay coordinate data and not the full state vector, which is typically not observable. Introduction The problem of time series prediction is to use knowledge of a signal x(t) for 0 ≤ t ≤ T and infer its value at a future time t = T + t f , where t f is positive and fixed.A time series is not predictable if it is entirely white noise.Any prediction scheme has to make some assumption about how the time series is generated.A common assumption is that the observation x(t) is a projection of the state of a dynamical system with noise superposed [8].Since the state of the dynamical system can be of dimension much higher than 1, delay coordinates are used to reconstruct the state.Thus, the state at time t may be captured as (x(t), x(t − τ ), . . ., x(t − (D − 1)τ )) (1.1) where τ is the delay parameter and D is the embedding dimension.Delay coordinates are (generically) effective in capturing the state correctly provided D ≥ 2d + 1, where d is the dimension of the underlying dynamics [21]. Farmer and Sidorowich [8] used a linear framework to compute predictors applicable to delay coordinates.It was soon realized that the nonlinear and more general framework of support vector machines would yield better predictors [15,18,19].Detailed computations demonstrating the advantages of kernel based predictors were given by Müller et al [19] and are also discussed in the textbook of Schölkopf and Smola [22].Kernel methods still appear to be the best, or among the best, for the prediction of stationary time series [16,20]. A central question in the study of noisy dynamical time series is how well that noise can be removed to recover the underlying dynamics.Lalley, and later Nobel, [12,14,13] have examined hyperbolic maps of the form x n+1 = F (x n ), with F : R d → R d .It is assumed that observations are of the form y n = x n + n , where n is iid noise.They proved that it is impossible to recover x n from y n , even if the available data y n is for n = 0, −1, −2, . . 
.and infinitely long, if the noise is normally distributed.However, if the noise satisfies | n | < ∆ for a suitably small ∆, the underlying signal x n can be recovered.The recovery algorithm does not assume any knowledge of F .The phenomenon of unrecoverability is related to homoclinic points.If the noise does not have compact support, with some nonzero probability, it is impossible to distinguish between homoclinic points. Lalley [14] suggested that the case of flows could be different from the case of maps.In discrete dynamical systems, there is no notion of smoothness across iteration.In the case of flows, the underlying signal will depend smoothly on time but the noise, which is assumed to be iid at different points in time, will not.Lalley's algorithm for denoising relies on dynamics and, in particular, on recurrences.In the case of flows, we rely solely on smoothness of the underlying signal for denoising.As predicted by Lalley, the case of flows is different.Denoising based on smoothness of the underlying signal alone can handle normally distributed noise or other noise models.Thus, our algorithms are split into two parts: first the use of smooth splines to denoise, and second the use of kernel based regression to compute the predictor.Only the second part relies on recurrences. Prediction of discrete dynamics, within the framework of Lalley [12], has been considered by Steinwart and Anghel [24] (also see [3]).Suppose x n = F n (x 0 ) and xn = x n + n is the noisy state vector.The risk of a function f is defined as where ν is the distribution of the noise and µ is a probability measure invariant under F and with compact support.Thus, the risk is a measure of how well the noisy future state vector can be predicted given the noisy current state vector.It is proved that kernel based regression is consistent with respect to this notion of risk for a class of rapidly mixing dynamical systems.Although the notion of risk does not require denoising, consistency of empirical risk minimization is proved for additive noise n of compact support as in [12].In the case of empirical risk minimization, compactness of added noise is not a requirement imposed by the underlying dynamics but is assumed to make it easier to apply universality theorems.Our results differ in the following ways.We consider flows and not discrete time maps.In addition, we work with delay coordinate embedding [21] and do not require the entire state vector to be observable.Finally, we prove convergence to the exact predictor, which goes beyond consistency.The convergence theorem we prove is not uniform over any class of dynamical systems.However, we do not assume any type of decay in correlations or rapid mixing.Non-uniformity in convergence is an inevitable consequence of proving a theorem that is applicable to any compact invariant set of a generic finite dimensional dynamical system [1,9,25].This point is further discussed in section 2, which presents the main algorithm as well as a statement of the convergence theorem.Section 3 presents a proof of the convergence theorem. 
In section 4, we present numerical evidence of the effectiveness of combining spline smoothing and kernel based regression.The algorithm of section 2 is compared to computations reported in [19] and the spline smoothing step is found to improve accuracy of the predictor considerably.The numerical examples bring up two points that go beyond either consistency or convergence.First, we explain heuristically why it is not a good idea to iterate 1-step predictor k-times to predict the state k steps ahead.Rather, it is a much better idea to learn the k-step predictor directly.Second, we point out that no currently known predictor splits the distance vector between stable and unstable directions, a step which was argued to be essential for an optimal predictor by Viswanath et al [28].The heuristic explanation for why iterating a 1-step predictor k times is not a good idea relies on the same principle. The concluding discussion in section 5 points out connections to related lines of current research in parameter inference [16,17] and optimal consistency estimates for stationary data [11]. Prediction algorithm and statement of convergence theorem , define a flow that may be limited to an open subset of R d with compact closure.Let F t (U 0 ) be the time-t map with initial data U 0 .It is assumed that U (t; U 0 ), t ∈ R, is a trajectory of the flow whose initial point Let μ be a compactly supported invariant probability measure of the flow-map F t for t > 0 and let X be its support.It is assumed that the initial point ω is drawn from the measure μ.For ω ∈ X, the trajectory U (t; ω) exists for all t ∈ R and is unique.In addition, the flow is assumed to be ergodic with respect to the measure μ. As a consequence of the C r embedding, there is a measure µ compactly supported in R D that corresponds to μ.The measure µ is ergodic and invariant under the flow lifted via the embedding.Denote the compact support of µ by X.For every point ω in X, there corresponds a unique point ω in X and vice versa.Because the prediction algorithm is based on delay coordinates and not the state vector, it is more convenient to work in the embedding space R D and in terms of ω and µ.Therefore, we will rely on the bijective correspondence between X and X and use the notation u(t; τ ; ω) instead of u(t; τ ; ω) and u(t; ω) instead of u(t; ω).With these conventions, u(t; τ ; ω) can be thought of as the path in R D with u(0; τ ; ω) = ω.Similarly, u(t; ω) can be thought of as a real-valued signal with u(0; ω) = ω 1 , where ω 1 is the first component of ω ∈ R D .In later arguments, the assumption that ω is µ-distributed will be significant, and so will be the ergodicity of the flow with respect to µ. Given the signal u(t; ω), it is assumed that the recorded observations are u η (jh; ω) = u(jh; ω) + j , where j is iid noise.Following Eggermont and LaRiccia [5,6], we assume that E j = 0 and E | j | κ < ∞ for some κ > 3. To avoid inessential technicalities it is assumed that τ /h ∈ Z + so that the delay is an integral multiple of the time step h.In particular, we set τ = nh.Similarly, we assume t f = n f τ , n f ∈ Z + , where t f is the look-ahead into the future.The noisy delay coordinates u η (jh; τ ; ω) are assumed to be available for j = 0, . . ., (N + n f ) n, which implies that the observation interval of The exact predictor F : R D → R is a C r function such that F (u(t; τ ; ω)) = u(t + t f ; ω) for ω ∈ X. 
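To make the delay-coordinate set-up concrete, a minimal sketch of forming the vectors (x(t), x(t − τ), ..., x(t − (D − 1)τ)) of (1.1) from a scalar series sampled with step h (so that τ = n h) is given below. The variable names are illustrative; the paper's u(t; τ; ω) corresponds to rows of the returned array, and the choice D = 6 with a delay of 6 samples is in the spirit of the τ = 6, D = 6 setting used later for the Mackey-Glass benchmark, not a prescription.

```python
import numpy as np

def delay_embed(x, D, n):
    """Delay-coordinate vectors (x[j], x[j-n], ..., x[j-(D-1)n]) for a scalar
    series x, with delay tau = n*h and embedding dimension D. Row j of the
    result corresponds to time index j + (D-1)*n of the original series."""
    x = np.asarray(x, dtype=float)
    rows = len(x) - (D - 1) * n
    return np.column_stack([x[(D - 1 - k) * n : (D - 1 - k) * n + rows]
                            for k in range(D)])

# Example with a placeholder scalar signal
x = np.sin(0.05 * np.arange(200))
V = delay_embed(x, D=6, n=6)
print(V.shape)  # (170, 6) delay vectors
```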
Lemma 3 proves uniqueness and existence of the exact predictor F .The exact predictor F corresponds to a fixed t f > 0, but that dependence is not shown in the notation.The problem as considered by Müller et al [19] is to recover the exact predictor F from the noisy observations u η (jh; ω).Let |•| denote Vapnik's -loss function.The algorithm of Müller et al computes f m such that the functional is minimized for f = f m in the reproducing kernel Hilbert space H Kγ corresponding to the kernel K γ .The kernel K γ is assumed to be given by K . The kernel bandwidth parameter γ and the Lagrange multiplier Λ are both determined using crossvalidation.This method approximates the exact predictor the approximation is iterated n f times.We will compare our predictor against that of Müller et al using some of the same examples and the same framework as they do in section 4. In our algorithm, the first step is to apply spline smoothing.In particular, we apply cubic spline smoothing [4] to compute a function u s (t; ω), t ∈ [−(D − 1)τ, N τ + t f ] such that the functional The parameter λ is determined using five-fold cross-validation.The minimizer u s (t; ω) depends upon the noise-free signal u(t; ω) as well as the instantiation of the iid noise in u η (jh; ω) for −(D − 1)n ≤ j ≤ (N + n f )n.However, the dependence on the iid noise is not shown in the notation. The second step of our algorithm is similar to the method of Müller et al.The predictor f 1 is computed as Both the parameters γ and Λ are determined using five-fold cross-validation.Here n f and therefore t f are fixed because we seek to approximate the exact predictor with lookahead fixed at t f .As explained in section 4, it is significant that the predictor directly optimizes with a lookahead of t f .Iterating a τ -step predictor n f times gives worse predictions. The second step (2.3) differs from the algorithm of Müller et al in using the spline smoothed signal u s (t; ω) in place of the noisy signal u η (t; ω).Our algorithm relies mainly on spline smoothing to eliminate noise.Yet another difference is that we use the least squares loss function in place of the -loss function.This difference is a consequence of relying on spline smoothing to eliminate noise.As explained by Christmann and Steinwart [3], the -loss function, Huber's loss, and the L 1 loss function are used to handle outliers.However, spline smoothing eliminates outliers, and we choose the L 2 loss function because of its algorithmic advantages. We now turn to a discussion of the convergence of the predictor f 1 to the exact predictor F .The first step is to assess the accuracy of spline smoothing.We quote the following lemma, which is a convenient restatement of a result of Eggermont and LaRiccia [5,6] (see pages 132 and 133 of [6]).In the lemma, , where h = τ /n and where j are iid random variables.It is further assumed that be the spline that minimizes the functional . Let p = P (n, N, ∆, ω) be the probability that where the ∞-norm is over the interval Some remarks about the connection of this lemma to the algorithm given by (2.2) and (2.3) follow.First, the lemma assumes a fixed choice of λ (the relevant theorem in [5,6] in fact allows λ to lie in an interval).In our algorithm, λ is determined using cross-validation because of its practical effectiveness [29]. 
Second, the probability P(n, N, ∆, ω) (which may be interpreted as the probability that spline smoothing fails to denoise effectively) depends on ω and therefore on the particular trajectory.If P(n, N, ∆, ω) depends on ω only though a bound on the m-th derivative of u(t; ω), t ∈ [−(D − 1)n, nN ], the bound would be uniform for all trajectories on the compact invariant set X.The achievability part of Stone's optimality result [26] gives such a bound but the algorithm in that proof does not appear practical.Proving a similar result for smooth splines based on the existing literature does not appear entirely straightforward.In the L 2 norm, some uniform bounds have been proved for smooth splines by Györfi et al [10].A bound on the L 2 norm can be combined with a bound on the the m-th derivative using a Sobolev inequality to obtain an ∞-norm bound.Although the rate of convergence would be slightly sub-optimal, it would suffice for our purposes.However, the result of Györfi et al is for expectations and not for convergence in probability, and an argument using Chebyshev's inequality does not give strong bounds. The convergence analysis of the second half of the algorithm also alters the algorithm slightly.In particular, the use of cross-validation to choose parameters is not a part of the analysis.To state the convergence theorem, we first fix > 0. By the universality theorem of Steinwart [23], we may choose F ∈ H Kγ such that ||F − F || ∞ < in a compact domain that has a non-empty interior and contains the invariant set X.The convergence theorem also makes the technical assumption 2 / ||F || 2 Kγ < 1, which may always be satisfied by taking small enough. The choice of the kernel-width parameter γ is important in practice.In the convergence proof, the choice of γ is not explicitly considered.However, γ still plays a role because ||F || Kγ depends upon γ. The parameter Λ in (2.3) is fixed as Λ = 2 / ||F || 2 Kγ for the proof.Next we pick δ = 1/2 and ∈ Z + such that the covering of the invariant set X using boxes of dimension 2 − ensures that the variation of F (as well as that of the exact predictor F and f 3 , which is defined later) within each box is bounded by δ/4. Suppose A 1 , . . ., A L are boxes of dimension 2 − that cover X in the manner hinted above.We next choose T * such that the measure of the trajectories (with respect to the ergodic measure µ) that sample each one of the boxes A j adequately (in a sense that will be explained) is greater than 1 − if the time interval of the trajectory exceeds T * . The parameter ∆ is a bound on the infinite norm accuracy of the smooth spline as in Lemma 1. Choose ∆ > 0 small enough that where B 1 is a constant specified later.The main purpose of increasing n is to make spline smoothing accurate.However, the following condition requiring n to be large enough is assumed in the proof: Within this set-up, we have the following convergence theorem. Nonuniform bounds implying a form of weak consistency are considered by Steinwart, Hush, and Scovel [25].However, the algorithm of (2.2) and (2.3) does not fit into the framework of [25].The application of spline smoothing to produce u s (t; ω) means that u s (t; ω) may not be stationary, and our method of analysis does not rely on verifying a weak law of large numbers as in [25].The analysis summarized above and given in detail in the following section relies on ∞-norm bounds. 
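Before turning to the proof, a minimal numerical sketch of the two-step algorithm of (2.2)-(2.3) may be helpful: first smooth the noisy scalar series with a cubic smoothing spline, then fit a Gaussian-kernel ridge regression from delay vectors of the smoothed signal to its value t_f ahead. The sketch below uses scipy's UnivariateSpline and scikit-learn's KernelRidge as stand-ins for the penalized smoothing spline and the RKHS least-squares problem of (2.3); all parameter values are placeholders rather than the cross-validated choices described above.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.kernel_ridge import KernelRidge

def fit_predictor(u_noisy, h, n, D, n_f, s_factor=0.01, lam=1e-4, gamma=0.1):
    """Two-step predictor: (1) smooth the noisy samples with a cubic smoothing
    spline, (2) learn a map from delay vectors of the smoothed signal to its
    value t_f = n_f * n * h ahead with Gaussian-kernel ridge regression."""
    t = h * np.arange(len(u_noisy))
    # s is a rough target for the residual sum of squares (about N * noise variance)
    spline = UnivariateSpline(t, u_noisy, k=3, s=s_factor * len(u_noisy))
    u_s = spline(t)                                   # step 1: denoised signal
    lead = n_f * n                                    # look-ahead in samples
    rows = len(u_s) - (D - 1) * n - lead
    X = np.column_stack([u_s[(D - 1 - k) * n : (D - 1 - k) * n + rows]
                         for k in range(D)])          # delay vectors
    y = u_s[(D - 1) * n + lead : (D - 1) * n + lead + rows]
    model = KernelRidge(alpha=lam, kernel="rbf", gamma=gamma).fit(X, y)  # step 2
    return model, u_s

# Toy usage on a noisy sine wave standing in for a dynamical time series
h = 0.1
u = np.sin(0.7 * h * np.arange(1200)) + 0.1 * np.random.randn(1200)
model, u_s = fit_predictor(u, h=h, n=6, D=6, n_f=1)
```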
Proof of convergence We begin the proof with a more complete account of how the embedding theorem is applied.is a C r diffeomorphism between Ṽ and its image in R D .This assumption is generically true [21].This map is called the delay embedding.Denote the image of Ṽ under the delay embedding by V . The invariant measures μ and µ as well as X, X, ω, ω, u(t; ω), and u(t; τ ; ω) are as defined earlier.It is assumed that X ⊂ Ṽ , which implies X ⊂ V . by U 0,τ so that U 0,τ ∈ V .There exists a unique and well-defined C r function F : V → R, called the exact predictor, such that for all U 0,τ ∈ V .In particular, F (u(t; τ ; ω)) = u(t + t f ; ω) for all t ∈ R and all ω ∈ X. Proof.To map U 0,τ ∈ V to πF t f (U 0 ), first invert the delay map to obtain the point U 0 in Ṽ , advance that point by t f by applying F t f , and finally project using π.Each of the three maps in this composition is C r or better.The predictor must be unique because F t f is uniquely determined by the flow. Remark.The embedding theory of Sauer et al [21] may be applied to the compact invariant set X without enclosing it in the open set Ṽ .Indeed, if the box counting dimension of X is d , the embedding dimension need only satisfy D ∈ Z + and D > 2d .That can be advantageous because we may have d much smaller than d.However, there are two difficulties if X is a fractal set.First, tangent spaces cannot be defined and we cannot assert the delay map to be a diffeomorphism although it will be one-one generically.Second, we will need to extend F to the closure of an open neighborhood of X in R D to apply the universality theorem, and such an extension cannot be made from X if X is a fractal set.Both these difficulties go away if we take Ṽ to be a submanifold that contains X.If d is the dimension of Ṽ , we would only require D > 2d .For simplicity, we have assumed Ṽ to be an open set. The following convexity lemma is an elementary result of convex analysis [7].It is stated and proved for completeness.Lemma 4. Let L 1 (f ) and L 2 (f ) be convex and continuous in f , where f ∈ H and H is a Hilbert space.If w ∈ ∇L i (f ), the subgradient at f , assume that for ||f || ≤ r, and assume that ||f 1 || < r and ||f 2 || < r.Then, Proof.Because f 1 minimizes L 1 (f ), we have 0 ∈ ∇L 1 (f 1 ).Thus, By adding the two inequalities, we have proving the lemma.This last step relies on is the noise-free signal, our arguments are phrased under the assumption that |u(t; ω) − u s (t; ω)| ≤ ∆.This assumption is realized with probability 1 − P(n, N, ∆, ω), which tends to 1 as n increases (by Lemma 1).For convenience, we denote P(n, N, ∆, ω) by p.The probability that u η (t; ω) is successfully denoised by smooth splines so that |u(t; ω) − u s (t; ω)| ≤ ∆ is then 1 − p. In general, a C r function defined on an embedded submanifold can be extended to an open neighborhood of the submanifold using a partition of unity.Because V ⊂ R D is an embedded submanifold, X ⊂ V , and the exact predictor F is defined on V , it follows that there exists M > 0 such that F can be extended to Y , where We will always assume ∆ < M so that the spline-smoothed signal maps to Y under delay embedding with probability greater than 1 − p.Without loss of generality, we assume M ≤ 1.The convergence proof will assess the approximation to F with respect to the measure µ.Therefore, the manner in which the extension is carried out is not highly relevant.The sole purpose of the extension is to facilitate an application of the universality theorem for Gaussian kernels. 
Let Thus, B is a bound on the size of the embedded invariant set with ample allowance for error in spline smoothing.Let u s (t; ω) denote the spline-smoothed signal and u(t; ω) the noise-free signal with ω ∈ X. Define where t f = n f τ , n f ∈ Z + , and K is any smooth and positive kernel defined over Y × Y .The kernel K will be specialized to the Gaussian kernel K γ when applying the universality theorem.Define using the noise-free signal u(t; ω).Let T = N τ and define For Λ > 0, all three functionals are strictly convex and have a unique minimizer.The unique minimizers of W 1 , W 2 , and W 3 are denoted by f 1 , f 2 , and f 3 , respectively.The functional W 1 is the same as in (2.3), the second step of the algorithm.Thus, f 1 is the computed approximation to the exact predictor F . The following lemma bounds the minimizers of Lemma 5.The minimizer and the stated bound for ||f 1 || K follows.The bounds for f 2 and f 3 are proved similarly. Here B 1 depends only on B and the kernel K.The kernel K is assumed to be C 2 . Proof.First, we note that ||f || ∞ ≤ c 0 ||f || K and ||∂f || ∞ ≤ c 1 ||f || K , where ∂ is the directional derivative of f in any direction.By a result of Zhou (part (c) of Theorem 1 of [30]), we may take , where D is the embedding dimension and the ∞-norm is over x, y ∈ Y .If we define B 1 using it follows that both ||f || ∞ and ||∂f || ∞ (where ∂ is a directional derivative in any direction) are bounded above by B 1 /Λ 1/2 .We may write The proof is completed by utilizing these bounds in (3.3) and defining B 1 as Proof.Follows from Lemmas 5, 6, and 4. Lemma 4 is applied with Λ , and λ = 2Λ.The choice of r is justified by Lemma 5 and the choice of δ is justified by Lemma 6.To justify the choice of λ, note that W 1 (f ) and W 2 (f ) can both be written as K .Thus, if w ∈ ∇W i (f ) (the subgradient of W i is unique and may be obtained explicitly), we must have Λ .Proof.We will argue as in Lemma 6 and assume that ||f || ∞ , and ||∂f || ∞ are bounded by we may apply the mean value theorem to the integral and argue as in Lemma 6 to upper bound the difference by (B 1 ) 2 h/Λ.The proof is completed by summing the differences from k = 0 to k = N n − 1 and dividing by N n. Proof.Follows from Lemmas 5, 8, and 4. Lemma 4 is applied with Λ , and λ = 2Λ.The choices of r, δ, and Λ are justified using Lemmas 5 and 8 and an additional argument as in the proof of Lemma 7. Choose > 0. At this point, we specialize K to a kernel for which the universality theorem of Steinwart applies.For example, K = K γ .We may then find F ∈ H K such that ||F − F || ∞ ≤ , where the ∞-norm is over Y .In fact, we will need the difference |F (x) − F (x)| to be bounded by only for x ∈ X.The larger compact space Y is needed to apply the universality theorem and for other RKHS arguments. In addition, Proof.We have is the minimizer, and This last inequality uses ´(F (u(t; τ The proof of the first part of the lemma is completed by combining the inequalities.To prove the second part, we argue similarly after noting Consider half-open boxes in R D of the form with ∈ Z + and j i ∈ Z.The whole of R D is a disjoint union of such boxes.Because X is compact, we can assume that X ⊂ ∪ L j=1 A j , where the union is disjoint, each A j is a half-open box of the form above, and A j ∩ X = φ for 1 ≤ j ≤ L. 
We will pick ℓ to be so large that the diameter of each box is suitably small. Here δ > 0 is determined later, and Lemma 10 tells us that ||f_3||_K ≤ √2 ||F̃||_K; therefore (by part (c) of Theorem 1 of [30]) the sup-norm and the directional derivatives of f_3 are bounded. As a consequence of our choice of ℓ, x, y ∈ A_j implies that |f_3(x) − f_3(y)| < δ/4 (3.4), bounding the variation of f_3 within a single cell A_j. Because the exact predictor F is C^r, r ≥ 2, and X is compact, we may also assert that |F(x) − F(y)| < δ/4 for x, y ∈ A_j, by taking ℓ larger if necessary.

The next lemma is about taking a trajectory that is long enough that each of the sets A_j is sampled accurately. By assumption X is the support of µ. However, we may still have µ(A_j) = 0 for some j. In the following lemma and later, it is assumed that all A_j with µ(A_j) = 0 are eliminated from the list of boxes covering X.

Lemma 11. Let χ_{A_j} denote the characteristic function of the set A_j. There exist T* > 0 and a Borel measurable set S_{ε,T*} ⊂ X such that ω ∈ S_{ε,T*} implies that the time average of χ_{A_j} along the trajectory is close to µ(A_j), for all T ≥ T* and j = 1, ..., L, and with µ(S_{ε,T*}) > 1 − ε.

Proof. To begin with, consider the set A_1. By the ergodic theorem, the time average of χ_{A_1} converges to µ(A_1) for ω ∈ S ⊂ X with µ(S) = 1. Let A_s denote the set of ω for which the time average deviates for some T ≥ s. The sets A_s shrink with increasing s, and the measure of ∩_{s=1}^{∞} A_s under µ is zero. Therefore, there exists s_1 beyond which the deviant set has measure as small as needed. We can find s_2, ..., s_L similarly by considering the sets A_2, ..., A_L. The lemma then holds with T* = max(s_1, ..., s_L).

Lemma 12. Suppose that ω ∈ S_{ε,T*}, T ≥ T*, and Λ = ε²/||F̃||_K² ≤ 1. Suppose that f_3 minimizes W_3(f), which is defined using u(t; ω), T, and Λ. Then the conclusion of the lemma bounds the total µ-measure of the cells indexed by J, on which f_3 deviates from F.

For ω ∈ S_{ε,T*}, we have a chain of inequalities in which the first inequality holds because the A_j are disjoint, the second inequality holds because |f_3(y) − F(y)| > δ/2 follows from (3.6) for y = u(t; τ; ω) ∈ A_j with j ∈ J, and the final inequality is a consequence of Lemma 11 and ω ∈ S_{ε,T*}.

In the numerical experiments, the delay parameter and embedding dimension used for delay coordinates were τ = 6 and D = 6, as in [19]. The size of the training set was N = 1000. For cross-validation, the γ/2D parameter was varied over {0.1, 1.5, 10.0, 50.0, 100.0}, and the Λ parameter was varied over 10^{−8.5}, 10^{−8}, ..., 10^{−0.5} for least squares with or without spline smoothing, but over 10^{−10}, 10^{−6}, 10^{−2}, 10^{2} for the more expensive support vector regression. For support vector regression, the ε parameter was varied over {0.01, 0.05, 0.25}. The phenomenon we will demonstrate is far more pronounced than the slight gains obtained using more extensive cross-validation. For support vector regression, we were able to reproduce the relevant results reported in [19]. For the Mackey-Glass plots in Figures 4.1, 4.2, and 4.3, each point is an average over 480 independent datasets in the case of least squares with or without spline smoothing and over 48 datasets in the case of support vector regression. In all cases, using half as many datasets does not change the picture.

A t_f = n_f τ predictor can be obtained by iterating a τ-step predictor n_f times, and this strategy is sometimes used to save cost [19]. This is not a good idea, as explained in [28] and as shown in Figure 4.2. An optimal predictor would need to roughly split the distance to the nearest training sample such that the component of the distance along unstable directions is small, with the component along stable directions allowed to be much larger. The balance between the two components depends upon t_f, and therefore iterating a one-step predictor is not a good strategy.
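A rough numerical illustration of the iterated-versus-direct comparison can be set up as follows. The Lorenz signal, the kernel ridge regressors, and the hyperparameter values below are stand-ins for the cross-validated least-squares predictors used in the reported experiments, so the numbers produced are only indicative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.kernel_ridge import KernelRidge

def lorenz(t, s):
    x, y, z = s
    return [10.0 * (y - x), 28.0 * x - y - x * z, -8.0 * z / 3.0 + x * y]

h, tau, D, n_f = 0.01, 1, 6, 25              # sampling step, delay, dimension, lookahead
t_grid = np.arange(0.0, 80.0, h)
u = solve_ivp(lorenz, (0.0, 80.0), [1.0, 1.0, 1.0], t_eval=t_grid, rtol=1e-8).y[0]

def embed(u, D, tau, n_f):
    """Delay vectors (u_t, u_{t-tau}, ..., u_{t-(D-1)tau}) and targets u_{t+n_f}."""
    start, stop = (D - 1) * tau, len(u) - n_f
    X = np.stack([u[t - np.arange(D) * tau] for t in range(start, stop)])
    return X, u[start + n_f: stop + n_f]

u_train, u_test = u[:4000], u[4000:]
X_nf, y_nf = embed(u_train, D, tau, n_f)     # direct n_f-step training pairs
X_1, y_1 = embed(u_train, D, tau, 1)         # one-step training pairs
direct = KernelRidge(kernel="rbf", gamma=1e-3, alpha=1e-6).fit(X_nf[::4], y_nf[::4])
one_step = KernelRidge(kernel="rbf", gamma=1e-3, alpha=1e-6).fit(X_1[::4], y_1[::4])

def iterate(model, x, n_f):
    """Apply a one-step predictor n_f times by shifting the delay vector (tau = 1)."""
    x = x.copy()
    for _ in range(n_f):
        x = np.concatenate(([model.predict(x[None, :])[0]], x[:-1]))
    return x[0]

X_te, y_te = embed(u_test, D, tau, n_f)
X_te, y_te = X_te[::20], y_te[::20]
rmse_direct = np.sqrt(np.mean((direct.predict(X_te) - y_te) ** 2))
rmse_iter = np.sqrt(np.mean([(iterate(one_step, x, n_f) - y) ** 2
                             for x, y in zip(X_te, y_te)]))
print(rmse_direct, rmse_iter)   # compare the two strategies discussed in the text
```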
In Figure 4.1, we see that spline smoothing becomes more and more advantageous as noise increases. The situation in Figure 4.3 is a little different. When t_f is small, spline smoothing does help more for the noisier SNR of 0.4 compared to 0.2. However, for larger t_f, even though spline smoothing helps, it does not help more when the noise is higher. This could be because as t_f increases capturing the correct geometry of the predictor becomes more and more difficult, and this difficulty may be constraining the accuracy of the predictor. The Mackey-Glass example is a delay-differential equation and does not come under the purview of our convergence theorem. The Lorenz example, ẋ = 10(y − x), ẏ = 28x − y − xz, ż = −8z/3 + xy, is a dynamical system with a compact invariant set and comes under the purview of the convergence theorem. The Lorenz signal has a standard deviation of 7.9.

Discussion

For the prediction of dynamical time series, we have shown that flows are quite different from maps. In the case of flows, the time series can be denoised by relying solely on the smoothness of the underlying flow. The predictor can be derived by applying kernel-based regression to the denoised signal. The resulting predictor converges to the exact predictor under conditions described by Theorem 2.

As far as dynamical time series are concerned, the parameter estimation problem [16,17] is complementary to prediction. Much of the existing theory is for maps and with the assumption of rapid mixing. For flows, smooth splines or a similar technique may prove an effective method to denoise in the context of parameter estimation as well.

The convergence theorem given here does not give rates and is not uniform. Obtaining rates with uniformity over a class of flows will probably require rapid mixing assumptions as in the case of maps [11,24]. Rapid mixing results for flows may be found in [2] for example.

With respect to rates and uniformity, there are two more issues that would need to be considered. First, convergence of smooth splines in the ∞-norm must be proved with explicit bounds that depend only on the norm of the m-th derivative. A more significant point is that rates of convergence for a given lookahead t_f may not be the best direction. As pointed out in [28], the question of how large t_f can be given a signal of length T appears to have implications for the prediction algorithm and not just to its analysis. There is no evidence that existing algorithms, including the one in this paper, are capable of predicting as far into the future as an optimal algorithm should.

The smooth spline idea is primarily local and so are the optimality results of Stone [26]. Stone's algorithm for achievability is to find a local scale and to fit a polynomial using linear least squares within that local region. It is perhaps worth noting that the same idea has a dynamical analog. In its dynamical version [27], the noisy dynamical time series is embedded within Euclidean space using delay coordinates. The embedding will be necessarily noisy. However, the embedded manifold can be smoothed locally using linear techniques.

Figure 4.1: Root mean square errors in the prediction of the Mackey-Glass signal with t_f = 1 as a function of the signal to noise ratio. The superiority of the method using smooth splines is evident.
Figure 4.1 demonstrates that (2.1) produces predictors that are corrupted by errors in the inputs or delay coordinates. The method with spline smoothing is more accurate and deteriorates less with increasing SNR.

Figure 4.2: Comparison of the 1-step least squares predictor (without spline smoothing) iterated t_f times with the t_f-step predictor (without spline smoothing). The latter is seen to be superior.

Figure 4.3: The plot on the left uses SNR of 0.2 and the plot on the right uses 0.4. The method using smooth splines does better in all instances.

Figure 4.4: The advantage of spline smoothing for Lorenz is much less on the right with h = 0.1 than on the left with h = 0.01.

For the Lorenz plots of Figure 4.4, each point is an average over 160 datasets, each with N = 1000. The picture did not change even with many fewer datasets. Figure 4.4 compares h = 0.01 and h = 0.1 for Lorenz. In both cases, the embedding dimension is d = 10, the delay parameter is τ = 1, and the lookahead is t_f = h. It may be seen that spline smoothing is less effective when h = 0.1 as compared to h = 0.01. A typical Lorenz oscillation has a period of about 0.75, and when h = 0.1 the resolution is too low, causing too much discretization error. Smooth splines are less effective in reconstructing the noise-free signal if the grid on the time axis does not have sufficient resolution. The left half of Figure 4.4 shows an example where prediction using spline smoothing improves accuracy by a factor of 100 with h = 0.01.
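The resolution effect just described can be explored with a small sketch. Below, scipy's UnivariateSpline is used as a stand-in for the smooth splines of the text; the synthetic signal, noise level, spline order, and smoothing-factor heuristic are all illustrative assumptions rather than details of the paper's implementation.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)

def spline_denoise_rmse(h, T=20.0, noise_frac=0.4):
    """Sample a smooth stand-in signal on a grid of spacing h, add noise,
    denoise with a smoothing spline, and return the RMS reconstruction error."""
    t = np.arange(0.0, T, h)
    u_true = np.sin(t) + 0.5 * np.sin(2.3 * t)   # stand-in for a flow signal
    sigma = noise_frac * np.std(u_true)
    u_noisy = u_true + sigma * rng.normal(size=t.size)
    # Smoothing factor heuristic: residual sum of squares ~ n * sigma^2;
    # in practice the factor would be chosen by cross-validation.
    spl = UnivariateSpline(t, u_noisy, k=5, s=len(t) * sigma ** 2)
    return np.sqrt(np.mean((spl(t) - u_true) ** 2))

print("h = 0.01:", spline_denoise_rmse(0.01))
print("h = 0.1 :", spline_denoise_rmse(0.1))   # coarser grid typically reconstructs less accurately
```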
8,395.8
2015-10-31T00:00:00.000
[ "Computer Science", "Mathematics" ]
Development of a DNA-based real-time PCR assay for the quantification of Colletotrichum camelliae growth in tea (Camellia sinensis) Background Tea, which is produced from new shoots of existing tea plants (Camellia sinensis), is one of the most popular, non-alcoholic, healthy beverages worldwide. Colletotrichum camelliae is one of the dominant fungal pathogens of tea. The interaction of C. camelliae with tea could be a useful pathosystem to elucidate various aspects of woody, medicinal plant-fungal interactions. Currently, many studies characterizing resistance or virulence and aggressiveness use lesion size at the infection sites on the leaves to quantify the growth of the pathogen. However, this method does not offer the sensitivity needed for the robust quantification of small changes in aggressiveness or the accurate quantification of pathogen growth at the early stages of infection. Results A quantitative real-time polymerase chain reaction (qRT-PCR) assay was developed for the quantification of C. camelliae growth on tea plant. This method was based on the comparison of fungal DNA in relation to plant biomass. This assay was used to investigate the phenotypes of tea plant cultivars in response to C. camelliae infection. Two cultivars, Zhongcha 108 (ZC108) and Longjing 43 (LJ43), were tested with this method. ZC108 was previously reported as an anthracnose-resistant cultivar against C. camelliae, while LJ43 was susceptible. The traditional lesion measurement method showed that both cultivars were susceptible to a virulent strain of C. camelliae, while the qRT-PCR approach indicated that very little fungal growth occurred in the anthracnose-resistant cultivar ZC108. The observed results in this study were consistent with previously published research. In addition, the DNA-based real-time PCR method was applied for analysis of pathogenic differences in general C. camelliae isolates and among several Colletotrichum spp that infect tea. Conclusions This study showed that the DNA-based qRT-PCR technique is rapid, highly sensitive and easily applicable for routine experiments and could be used in screening for resistant tea plant cultivars or to identify differences in pathogen aggressiveness within and among Colletotrichum species. Introduction Tea (Camellia sinensis) is one of the most economically important crops in the world. Tea is widely grown in Asian, African and South American countries to produce non-alcoholic healthy beverages worldwide. Tea has also been used medicinally in China over a long period of time [1]. Investigations into the medicinal uses of tea have been intense, and now, many medicinal products have been developed [1][2][3]. Tea is a perennial, evergreen woody plant, and some plants have lived for more than a thousand years. During its long life, tea faces many biotic and abiotic stresses, including pathogens, insects, low and high temperatures, and heavy metals [4][5][6][7][8][9][10][11][12]. Therefore, tea can be used as a perennial, medicinal species to characterize how defence responses are activated or suppressed. Colletotrichum camelliae and C. fructicola were the species most often isolated and were proposed as dominant pathogens of tea [20][21][22][23]. Colletotrichum camelliae damages tea leaves and causes several tea diseases known as tea leaf blight, tea brown blight, or tea anthracnose [10,[21][22][23][24]. The fungus C. fructicola is both a pathogen and an endophyte of several plant species, while C. camelliae was only isolated as a pathogen of Ca. 
sinensis in previous studies [22,23]. The interaction of tea and C. camelliae would be a useful pathosystem to elucidate various aspects of woody medicinal plant-fungi interactions. In addition, C. siamense was originally isolated from coffee (Coffea arabica) berries of Thailand and also caused tea anthracnose in China [22], while C. fioriniae had been isolated from various hosts, including Camellia spp. grown in Yunnan, Fujian, Sichuan, and Zhejiang Province of China [22]. The easiest way to evaluate plant resistance or susceptibility is to score the severity of visual disease symptoms during the plant-fungus interaction. Visible lesions often occurs late in this host-pathogen interaction, making it difficult to correctly measure lesion size at early growth stages. In addition, when lesion sizes show minor differences during tea cultivar interactions with diverse pathogen isolates, the lesion measurement method may not be adequate. An alternative approach for quantifying fungal growth is available through the use of a highly sensitive DNA-based methods such as quantitative realtime PCR (qRT-PCR). Those methods include two types of procedures. In one procedure, qRT-PCR was based on only the DNA of the fungi to measure the growth of the pathogens [25,26]. However, that method does not consider the normalization of pathogenic DNA in relation to plant DNA biomass. Therefore, a second method that considers both plant and pathogen interaction has been developed for the precise measurement of pathogen growth in various host plants [27][28][29][30][31]. In the case of the tea plant-C. camelliae interaction, the visual lesion measurement assay has remained in use thus far despite certain disadvantages as indicated above. Currently, genes for actin (ACT) and β-tubulin (TUB) as well as the internal transcribed spacer (ITS) region of ribosomal RNA, and ribosomal rDNA are often used as standards for the quantification of fungal biomass [25,27,30]. In addition, the genes encoding fungal cutinase (CutA), plasma membrane ATPase (PMA) and GDSLlike lipase (GLL) have been used for qRT-PCR in certain plant-fungi interaction systems [25,29,31]. Some of those are involved in basic cellular processes, primary metabolism or cell structure maintenance [32]. For C. higginsianum and C. gloeosporioides, the ACT and ITS, respectively, have been widely used to quantify and detect fungal growth in host plants [27,33]. To date, no such information has been reported on tea plant pathogens like C. camelliae, C. fructicola, C. siamense and C. fioriniae. This study reports on the optimization of a qRT-PCRbased analysis for C. camelliae quantification during its interaction with tea plant. This DNA-based methodology could be applied in two ways in the future: (i) to compare tea germplasm responses to C. camelliae and to detect resistant or susceptible tea plant cultivars and (ii) to quantify pathogenic differences in general C. camelliae assays and among isolates of several Colletotrichum spp. DNA-based analysis of real-time PCR primers for quantification of C. camelliae and tea To quantify C. camelliae or tea DNA, qRT-PCR primers were designed to efficiently amplify the target sequences, respectively ( Table 1). The targets were Cs18SrDNA1 in tea and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) in C. camelliae. Both targets are conserved and have been used as reference genes for plants or fungi [21][22][23]34]. The ITS region of ribosomal RNA from Colletotrichum spp. 
was also included as it has often been used in development of PCR primers for the detection of various fungi [26]. The primer pair S37/S38 was designed based on the Cs18SrDNA1 sequence and was expected to amplify a 167-bp DNA fragment from tea [34]. The primer pairs S572/S573 and S576/S577 were designed based on the GAPDH and ITS sequences of C. camelliae, respectively (Table 1) [21,22]. These primer pairs were expected to amplify an 82-bp and a 62-bp DNA fragment of C. camelliae, respectively. The initial experiment was run with 27 ng of DNA for each technical replicate. The specificity of the primers for all the proposed genes was tested by conventional PCR on 2.0% agarose gels (Additional file 2: Figure S1). Specific amplification of the GAPDH PCR product could be observed only in the samples of C. camelliae isolates, which included isolates CCA, CCB, LS_19, ZJ1A5, ZJ1A8 and HB1A4 (Additional file 2: Figure S1a). For the other fungal samples, DNA products were not observed (Additional file 2: Figure S1a). This indicated that the GAPDH primer specifically quantified C. camelliae. The amplification of the ITS PCR product was observed for C. camelliae, C. fructicola, C. siamense and C. fioriniae (Additional file 2: Figure S1b). However, the ITS primers designed in this study did not amplify the DNA from P. camelliae-sinensis, Neopestalotiopsis sp. or M. oryzae. These results indicated that the ITS primer could quantify the Colletotrichum spp. including C. camelliae, C. fructicola, C. siamense and C. fioriniae. Next, the C. camelliae isolate CCA was selected and further tested for its growth on tea plants as measured by qPCR using the GAPDH and ITS primers. To assess the specificity of the amplification of the target DNA regions of the pathogen and plant biomass, DNAs were further extracted from the 2-year-old healthy tea plants of the cultivar Longjing 43 (LJ43), C. camelliae CCA, and from C. camelliae CCA-infected leaves of LJ43. For all the genes, a single band of the expected size was obtained, indicating that no primer dimers or nonspecific amplified products had been generated (Fig. 1a). To confirm the specificity of the primers, the melting curves for all tested sequences were established by qRT-PCR. The melt curves observed for all the sequences showed single well-defined sharp peaks. The single peak in the melt curves demonstrated the specificity of the annealing temperature and showed that the evaluated primers successfully amplified the desired amplicons ( Fig. 1b-d). The melting temperature (Tm) values for the qRT-PCR products from the different genes ranged from 81.0 to 83.7 °C (Table 1). Those products showed good agreement with the published lengths and the guanine-cytosine (GC) percentages of the amplicons [22,32]. In addition, the primer efficiency was tested using a tenfold dilution series of pure C. camelliae DNA, tea plant DNA and C. camelliae-infected tea plant DNA. For all the DNA samples, the primers yielded a linear amplification over the range of template concentrations with a correlation coefficient of R 2 > 0.99 ( Fig. 1e-g). At the same DNA concentration, the cycle quantification (Cq) value of ITS was approximately 5.5-7.5 cycles greater than that of GAPDH, indicating that the amplification of the ITS product was approximately 50-200 times more efficient than that of GAPDH. This suggested that the ITS primer was more sensitive than the primers developed based on the GAPDH gene and could be used to detect very low DNA concentrations. 
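The primer-efficiency check described above (a tenfold dilution series with R² > 0.99) can be reproduced from Cq values with the usual standard-curve relation E = 10^(−1/slope) − 1. The short sketch below assumes this standard formula; the Cq values are invented placeholders, not measurements from this study.

```python
import numpy as np

# Hypothetical Cq values for a tenfold dilution series of template DNA
# (placeholders, not data from the study).
log10_dna = np.log10([30.0, 3.0, 0.3, 0.03, 0.003])   # ng of input DNA
cq = np.array([18.1, 21.5, 24.9, 28.2, 31.6])

slope, intercept = np.polyfit(log10_dna, cq, 1)        # standard curve fit
r = np.corrcoef(log10_dna, cq)[0, 1]
efficiency = 10 ** (-1.0 / slope) - 1.0                # ~1.0 means perfect doubling per cycle

print(f"slope={slope:.2f}, R^2={r**2:.3f}, efficiency={100 * efficiency:.1f}%")
```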
Overall, all the primer pairs were suitable for the quantification of the target genes even when using low input quantities of DNA. DNA-based real-time PCR for the quantification of C. camelliae growth after infection on tea The tea cultivar Longjing 43 (LJ43) was observed to be susceptible to C. camelliae in the tea garden ( Fig. 2a) [10]. In this study, C. camelliae CCA growth was first quantified on the tea cultivar LJ43. Droplet inoculation of mechanically wounded tea leaves with C. camelliae isolate CCA spores was performed, and leaf samples were collected at 4 DPI (days post infection). To detect possible contamination by endophytic fungus from the tea, mechanically wounded leaves were incubated with water (ddH 2 O) as a control. As shown in Fig. 2b, lesions developed on leaves inoculated with CCA spores and their diameters were large at 4 DPI, while the un-inoculated control leaves did not show any lesion development, confirming that LJ43 was susceptible to C. camelliae CCA, as previously reported [10]. The same results were observed by analysing the pathogen growth with qRT-PCR (Fig. 2c, d). The amplification ratio of GAPDH/Cs18SrDNA1 or ITS/Cs18SrDNA1 was significantly higher in C. camelliae-infected tea leaves than in un-inoculated control tea leaves. These results indicated that the DNA-based qPCR method could be used to quantify C. camelliae CCA growth in tea. The quantification of fungal growth over time is often used to demonstrate differences in defence mechanisms at certain stages of the host-parasite interaction. To show that qRT-PCR-based quantification of C. camelliae is also suitable for temporal studies, the samples of C. camelliae CCA-infected LJ43 plants were collected at different times. Disease symptoms were observed and measured over time (Fig. 3a). The lesions grew, and the lesion diameter increased from 0.1 cm at 1 day to 0.8 cm at 6 DPI (Fig. 3b). This indicated that lesion size increased with time. The DNA-based qPCR results indicated that the fungal DNA increased in 3 days and reached a high level at 3 days compared to 1 day (Fig. 3c, d). It then dramatically increased from 3 to 4 days (more than twofold) and then slightly increased from 4 to 6 days (Fig. 3c, d). The qPCR data showed two stages of fungal growth and a clear disease development curve during the host-pathogen interaction (Fig. 3c, d), which was not observed with the lesion measurement method (Fig. 3b). These results indicated that the DNA-based qPCR assay can be used to examine increases in fungal biomass over time. DNA-based real-time PCR applied for analysis of C. camelliae growth on different tea cultivars The responses of different tea cultivars to C. camelliae CCA were next evaluated. Two tea plant cultivars, LJ43 and Zhongcha 108 (ZC108), were used. Previous research reported that ZC108 was resistant to C. camelliae [10]. In this study, leaves of both LJ43 and ZC108 were droplet-inoculated with the C. camelliae isolate CCA. As shown in Fig. 4a, b, the lesion sizes increased with time on both CCA-inoculated LJ43 and ZC108 tea leaves. Here, the lesions extended to similar large sizes, and there were no major differences in lesion size between LJ43 and ZC108 at 2, 4 and 6 days after C. camelliae CCA inoculation. Based on the traditional lesion measurement method, both LJ43 and ZC108 were susceptible. This was different from previous reports on the anthracnose lesion sizes of LJ43 and ZC108 [10], perhaps because different C. 
camelliae strains were used in this study and a different wounding treatment was performed before inoculation. However, when fungal biomass was measured, much more fungal growth occurred on LJ43 than on ZC108, as significantly more CCA DNA was detected in LJ43 by qPCR (Fig. 4c, d). This result indicates that the tea cultivar ZC108 was more resistant than LJ43 to C. camelliae CCA. Similar results were observed in previous studies [10,37]. This also suggests that the growth of C. camelliae CCA was somehow restricted by ZC108, so it might have produced certain defence responses towards the fungal infection. Here, the qPCR results were different from the traditional lesion measurement method (Fig. 4b-d). This indicated that qRT-PCR analysis was more sensitive than the traditional lesion measurement method in the quantification of C. camelliae growth during its interaction with tea plant.

Fig. 2 qRT-PCR methodology to quantify C. camelliae growth rate following infection of tea plant leaves. a Photo of tea plant cultivar LJ43 infested with C. camelliae in a tea garden. The close-up frame shows a single leaf with typical anthracnose symptoms. b Phenotypes of LJ43 leaves in response to C. camelliae CCA (Cc, top row) and ddH2O control (CK, bottom row). The leaves were wounded with a razor blade and immediately inoculated with 5 µL C. camelliae CCA spores (1 * 10^6 spores mL−1). For the control, ddH2O alone was used. c, d qPCR-based biomass of C. camelliae CCA growth on tea plant LJ43. c The ratio of the primer pairs GAPDH/Cs18SrDNA1 was used to determine the fungal biomass. d The ratio of the primer pairs ITS/Cs18SrDNA1 was used to determine the fungal biomass. *P < 0.05 by the LSD test

DNA-based real-time PCR applied for analysis of tea LJ43 interaction with different C. camelliae isolates

Colletotrichum camelliae is one of the dominant pathogens of tea in several provinces of China [21][22][23][24]. Previous reports have identified diverse C. camelliae isolates from various tea gardens in China [21][22][23]. Different fungal isolates can display distinct levels of aggressiveness towards their host plant. The C. camelliae isolates CCA, CCB, LS_19, ZJ1A5, ZJ1A8 and HB1A4 (Additional file 1: Table S1) were used to test differences in aggressiveness on tea cultivar LJ43 by qPCR assay. As shown in Fig. 5a, the leaf anthracnose symptoms produced by the different isolates of C. camelliae were very dissimilar. At 6 DPI, the lesion sizes of C. camelliae isolates CCA, LS_19 and ZJ1A5 on the infected tea plants were larger than those of CCB, while the largest lesion sizes were caused by ZJ1A8 and HB1A4 (Fig. 5a). Fungal growth biomass, as measured by qRT-PCR, revealed greater fungal growth in CCA-, LS_19- and ZJ1A5-infected plants than in CCB-infected plants, while the most fungal growth occurred for ZJ1A8 and HB1A4. The CCB-infected tea plants had very little fungal growth (Fig. 5b, c). These results indicated that ZJ1A8, HB1A4, CCA, LS_19 and ZJ1A5 were aggressive isolates, while CCB was not aggressive on the tea cultivar LJ43. This also was consistent with recent reports that C. camelliae LS_19 was a pathogenic isolate [23]. Here, C. camelliae isolates ZJ1A8 and HB1A4 were more aggressive than CCA, LS_19 and ZJ1A5. The qPCR method revealed the difference in aggressiveness among C. camelliae isolates on tea cultivar LJ43.

DNA-based real-time PCR applied for the analysis of interactions between tea LJ43 and Colletotrichum spp.
To compare the differences in the pathogenicity of Colletotrichum spp. on tea LJ43, C. fructicola SX_6, C. siamense E-8-1 and C. fioriniae ZJ1A2 were also examined. The ITS primer was used to test their growth differences. As shown in Fig. 6a, b, the leaf anthracnose symptoms and fungal growth differed among the three species. Colletotrichum siamense E-8-1-infected tea plants produced larger lesion sizes and more fungal biomass than those inoculated with C. fructicola SX_6 and C. fioriniae ZJ1A2. This demonstrated that the qPCR method can also be used to compare and quantify the difference in aggressiveness of Colletotrichum spp. on tea plants.

Discussion

Tea is a valuable crop because of its use in the beverage industry, and the use of tea as a medical crop has been demonstrated in many studies [1][2][3]. Colletotrichum camelliae is one of the dominant fungal pathogens of tea [22,23]. The C. camelliae-tea interaction is a valuable pathosystem for determining how fungal pathogenicity is established and how tea plant defence responses are activated. These types of studies have created a critical need for new and accurate assays that measure small differences in C. camelliae growth in planta.

Fig. 3 The growth of C. camelliae CCA on tea plant was determined by the classical visual lesion measurement method over a time course of 6 days. c, d qRT-PCR-based biomass of C. camelliae CCA growth over 6 days. c The ratio of the primer pairs GAPDH/Cs18SrDNA1 was used to determine the fungal biomass. d The ratio of the primer pairs ITS/Cs18SrDNA1 was used to determine the fungal biomass. The letters represent significant differences at different times by LSD test (P < 0.05)

Visual quantification of lesion size can be used to distinguish differences in pathogen growth on very susceptible plants or when pathogens show large differences in aggressiveness. This method works less well for small differences in aggressiveness or when hosts display small differences in resistance [28,30]. Recently, the qPCR methodology was shown to be a highly sensitive, reliable, simple and accurate way to quantify pathogen growth in other host-pathogen systems [28][29][30]. The qPCR procedures offer the added advantage of quantifying pathogen growth even at the early stages of the host-parasite interactions [28,30]. In this study, a precise procedure was developed for qPCR quantification of C. camelliae growth on tea. This method included assays for both sides of the host-parasite interaction, one pathogen gene and one tea gene, which enables accurate normalization of the ratio between pathogen biomass and plant biomass (Figs. 2c, d; 3c, d; 4c, d; 5b, c; 6b). In these experiments, detached leaves were used, so plant tissue did not increase after inoculation. However, plant DNA might have been reduced as the host was damaged by certain pathogens at the infection sites. Thus, the amplification of the pathogen DNA sequence compared to that of the gene in the host plant gave a precise quantification of disease development and severity during the plant-pathogen interactions. In addition, the genomic DNA extraction and qPCR performance in this study were simple and rapid, and the common reagents used in the experiments can be easily obtained in a laboratory.

A reliable PCR assay depends on having highly sensitive PCR primers that amplify the target DNA sequences. In the present study, the primers for two fungal targets, ITS and GAPDH sequences, were developed. The GAPDH primer was specific for amplifying C. camelliae, including all of its different isolates. The ITS primers designed here could detect both C. camelliae and its close relatives in the genus Colletotrichum.
The ITS primer was better for quantification of C. camelliae growth during even the early stages of infection, when little pathogen growth had occurred. These primers did not detect other fungal pathogens, such as P. camelliae-sinensis, Neopestalotiopsis sp., or M. oryzae, which makes them useful for the quantification of C. camelliae growth on tea. The Cs18SrDNA1 was used as the target DNA sequence to monitor relative fungal growth in tea. The qPCR primers for amplification of tea Cs18SrDNA1 in this study were highly sensitive. The Cs18SrDNA1 has also been used as a reference gene to detect gene expression in tea [32]. Therefore, the primer for Cs18SrDNA1 can be used in studies of tea-C. camelliae interactions, not only to detect the expression of different genes, but also to help quantify relative pathogen growth.

Fig. 4 The growth of C. camelliae CCA on LJ43 and ZC108 was determined by visual lesion measurement. c, d qRT-PCR-based biomass of the C. camelliae CCA growth on tea plants LJ43 and ZC108, respectively. c The ratio of the primer pairs GAPDH/Cs18SrDNA1 was used to determine the fungal biomass. d The ratio of the primer pairs ITS/Cs18SrDNA1 was used to determine the fungal biomass. The letters a and b represent significant differences between different samples according to LSD test (P < 0.05)

It is estimated that more than 3000 tea accessions have been collected and conserved in the China National Germplasm Tea Repository [38]. Those genetic resources could provide diverse parental material for tea disease resistance breeding. It would be useful to test their responses to pathogens such as C. camelliae. Based on the qPCR method, C. camelliae growth differences were compared on the two national tea cultivars LJ43 and ZC108 in the current study. A previous infection assay revealed different responses to C. camelliae infection between ZC108 and its parent cultivar LJ43 by comparing the differences in lesion size [10]. While LJ43 was susceptible to C. camelliae-1 and C. camelliae-2, ZC108 was reported to be resistant to both [10]. Although the lesion measurement data in this study indicated that both cultivars were highly susceptible to C. camelliae CCA, the qPCR results indicated that ZC108 was more resistant than LJ43. In the future, this qPCR assay could be used to screen the responses of other tea accessions towards C. camelliae.

Many studies have identified Colletotrichum spp. as the causal agents of several tea diseases [21][22][23][24][39]. Previous research has shown that remarkable species diversity exists within the genus Colletotrichum infecting tea [21][22][23]. For example, different isolates of C. camelliae were collected from several provinces in China. In certain isolates, the conidiophores and setae were directly produced from the hyphae or on a cushion of roundish hyaline cells, while other isolates only produced aerial mycelium [22]. The genetic differentiation among C. camelliae isolates or Colletotrichum spp. with different morphology and aggressiveness phenotypes should be further clarified.

Fig. 5 b The ratio of the primer pairs GAPDH/Cs18SrDNA1 was used to determine fungal biomass. c The ratio of the primer pairs ITS/Cs18SrDNA1 was used to determine the fungal biomass. The letters a, b and c represent significant differences between the quantification of different isolates of C. camelliae according to LSD test (P < 0.05)
The qPCR methodology developed in this study performed well in detecting differences in aggressiveness between C. camelliae isolates. Furthermore, this qPCR assay also could detect differences in aggressiveness among C. camelliae and its related species, such as C. fructicola, C. siamense and C. fioriniae. Conclusions A procedure for the quantification of C. camelliae using qRT-PCR was studied as a new method for assessing the growth of this fungus on tea. This study indicated that the DNA-based qPCR assay was more sensitive and accurate than the traditional lesion measurement method. The qRT-PCR assay for assessing C. camelliae growth in tea plants was highly precise, sensitive and easily applicable, and this method could be used for identifying resistant tea plant cultivars or in screening for differences in pathogen aggressiveness. Plant materials and growth conditions Tea C. sinensis cultivars Longjing 43 (LJ43) and Zhongcha 108 (ZC108) were used for all of the assays in this study. LJ43 was the parental cultivar of ZC108. Cuttings of LJ43 were irradiated by Co 60 γ-ray in 1986, and the mutant lines were further propagated for single-plant selection [37]. After a 24-year breeding procedure, one new line was selected and named ZC108. ZC108 was then registered as a new cultivar in China in 2010 [10,37]. Two-year-old plants of LJ43 and ZC108 were grown in a disease-free climate chamber under 12 h light/12 h dark conditions at 25 ± 2 °C and 60% relative humidity before inoculation. For fungal inoculations, 50 mature leaves of 2-year-old LJ43 or ZC108 were randomly collected from more than 20 tea plants. Pathogen infection and fungal growth assay Colletotrichum camelliae isolates CCA and CCB were both originally isolated from a diseased tea garden in Fancun, Hangzhou. Disease samples were collected from leaves showing visible anthracnose symptoms. The isolates were obtained by the single-spore isolation technique. The surface of leaves with anthracnose symptoms were first scratched with a small microbe-free blade and placed in sterilized water. After shaking for 20 min, the suspension was subjected to a tenfold dilution, and each dilution was distributed onto the surface of potato dextrose agar (PDA) culture medium (9.0 cm diameter Petri plates), followed by incubation in a climate chamber (22 ± 2 °C, 12 h light/12 h dark). Single germinated conidia were transferred to new PDA plates, and the incubation continued to generate the pure isolates. All spores were cultivated, collected, washed, and frozen at -80 °C in 0.8% NaCl at a concentration of 10 8 spores mL −1 . For inoculation of tea plants, the spores were diluted in ddH 2 O. For droplet inoculations, 5 or 10 µL of 1 * 10 6 spores mL −1 was applied to single detached tea leaves (2-year-old healthy tea plant cultivar LJ43 or ZC108) as previously described [40]. All of the inoculation experiments were designed as follows. For all treatments, leaves were wounded with a razor blade immediately before inoculation. Three replicates were carried out for each treatment. In each repetition, three to five mature leaves were used and each mature tea leaf usually received six to eight droplets of spores. For the control, ddH 2 O alone was used after wounding. All the leaves were completely randomly distributed during incubation. Each experiment was independently repeated at least three times. During fungal inoculation, all detached leaves were consistently kept under sealed plastic hoods (Tianxing, Ningbo, China) at high humidity (> 80%). 
Inoculation was carried out on a bench at room temperature (around 25 °C); otherwise, the plants were placed into a specific climate chamber for fungal incubation under a strict light (12 h light/12 h dark) and temperature regime (25 ± 2 °C). After inoculation, the lesion size was visually measured and photographs were taken at different times (i.e., 1, 2, 3, 4, 5 and 6 days post infection). And then, leaves of similar size (which contained two or three lesions) were harvested and frozen at -80 °C for the DNA assays. DNA extraction The DNA samples frozen in liquid nitrogen were homogenized using a TissueLyser (Qiagen, Hilden, Germany) for 2 × 30 s at 30 strokes/s A total of 400 µL of DNA extraction buffer (200 mM Tris-HCl, pH 7.5; 250 mM NaCl; 25 mM EDTA, and 0.5% SDS) was added to each of the homogenized samples, which were shaken again in the Tissue Lyser for 10 s at 30 strokes/s. The DNA extractions were the performed as previously described [40]. For each DNA sample, at least three technical replicates were performed. Quantitative real-time PCR For qPCR analysis, approximately 30 ng of DNA was mixed with 0.4 mM gene-specific primers (GAPDH and ITS for the pathogen, while Cs18SrDNA1 was used for tea) ( Table 1) and SYBR Green Supermix (Takara, Dalian, China) in a total volume of 25 µL. The reaction mixture contained 12.5 µL of SYBR Green, 1.5 µL of the forward and reverse primers, 9 µL of ddH 2 O and 2 µL of template DNA. qPCR was performed using the Applied Biosystems 7500 Sequence Detection System (ABI, Massachusetts, USA). The PCR program consisted of a preliminary step of 1 min at 95 °C followed by 40 cycles at 95 °C for 15 s and 60 °C for 34 s. A no template control for each primer pair was included in each run. The results were analysed using the Applied Biosystems 7500 software and Microsoft Office Excel based on the CT values observed. For each treatment, one representative set of results was presented as the mean 2 −∆∆CT value ± SEM. The relative amounts of pathogen DNA and tea DNA were determined by qPCR employing specific primers (Table 1). Statistical analysis The lesion sizes and qPCR data were analysed with an analysis of variance (ANOVA) using the SPSS 18 software (IBM, New York, USA) and the least significant difference (LSD) test. All treatments were repeated independently three times. Reported values were presented as the mean ± standard error of three repeats, and a P value < 0.05 was considered statistically significant according to LSD test.
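A minimal sketch of the 2^−ΔΔCT computation described above, normalizing a fungal target (GAPDH or ITS) to the plant reference Cs18SrDNA1 and then to the un-inoculated control. The Ct values and the treatment of technical replicates below are placeholders, not data from the study.

```python
import numpy as np

def relative_fungal_biomass(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt: the fungal target (GAPDH or ITS) is normalized to the plant
    reference (Cs18SrDNA1), then to an un-inoculated control sample."""
    d_ct = np.asarray(ct_target) - np.asarray(ct_ref)                  # dCt per replicate
    d_ct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl))
    return 2.0 ** -(d_ct - d_ct_ctrl)                                  # fold change vs control

# Placeholder Ct values for three technical replicates (not measurements from the study).
infected = relative_fungal_biomass([22.1, 22.3, 22.0], [16.5, 16.6, 16.4],
                                   [33.0, 33.4, 33.2], [16.4, 16.5, 16.6])
print(infected.mean(), infected.std(ddof=1) / np.sqrt(len(infected)))  # mean and SEM
```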
6,790
2020-02-17T00:00:00.000
[ "Environmental Science", "Biology" ]
SPOTONE: Hot Spots on Protein Complexes with Extremely Randomized Trees via Sequence-Only Features Protein Hot-Spots (HS) are experimentally determined amino acids, key to small ligand binding and tend to be structural landmarks on protein–protein interactions. As such, they were extensively approached by structure-based Machine Learning (ML) prediction methods. However, the availability of a much larger array of protein sequences in comparison to determined tree-dimensional structures indicates that a sequence-based HS predictor has the potential to be more useful for the scientific community. Herein, we present SPOTONE, a new ML predictor able to accurately classify protein HS via sequence-only features. This algorithm shows accuracy, AUROC, precision, recall and F1-score of 0.82, 0.83, 0.91, 0.82 and 0.85, respectively, on an independent testing set. The algorithm is deployed within a free-to-use webserver, only requiring the user to submit a FASTA file with one or more protein sequences. Introduction Hot-Spots (HS) can be defined as amino acid residues that upon alanine mutation generate a change in binding free energy (∆∆G binding ) higher than 2.0 kcal mol −1 , in opposition to Null-Spots (NS), which are unable to meet this threshold. Although the threshold of 2.0 kcal mol −1 can vary in the definition of HS, a representative amount of studies on the subject typically use this cut-off [1][2][3][4][5][6]. HS are key elements in Protein-Protein Interactions (PPIs) and, as such, fundamental for a variety of biochemical functions. The disruption of these interactions can alter entire pathways and is of interest to therapeutic approaches [1,7]. These residues are also known to be important for protein dimerization [8] and ligand binding [9]. Indeed, HS tend to be associated with the binding of small ligands, hence becoming ideal subjects of study on target proteins for drug design approaches [9][10][11]. Databases of experimental determined HS and NS can be found in the literature: ASEdb [12], BID [13], PIN [14] and SKEMPI [15]. More recently, SKEMPI 2.0 was released, making available a larger amount of experimental information. However, most of the new information does not include mutations to alanine (and the corresponding change in free binding energy), which is the material under scope in the present work [16]. These databases can be used to deploy Machine-Learning (ML) algorithms that take both the positive (HS) and negative (NS) information and construct a binary classifier that should be able to predict, upon previously unforeseen amino acid residues in a protein, its HS/NS status. Although ML is not limited to binary classification, on this problem and given the available data format, binary classification was the most explored approach until now. Several algorithms have been proposed for HS computational predictions, using different ML approaches, features and datasets [17][18][19][20][21][22][23][24][25]. Recently (2017), SPOTON [22], using information on both the protein sequence and structure, achieved results of 0.95 accuracy on an independent testing set, making it the best performing HS predictor at the time. Most of the high-performing HS predictors incorporate structural information. Although yielding clearly robust results, it hinders the possibilities of a broader deployment, since there are still fewer proteins for which a three-dimensional (3D) structure is available in online repositories [26] compared to the determined and available protein sequences [27]. 
It is known that sequence-based predictors tend to perform more poorly, in comparison with the ones engulfing structural information. For example, Nguyen et al. (2013) [19] were able to achieve an accuracy of 0.79 and a precision of 0.75 using sequence-based frequency-derived features. More recently, Hu et al. (2017) [20] achieved an F1-score of 0.80 using only sequence-based features while Liu et al. (2018) [21] achieved an F1-score of 0.86 using sequence-based features and amino acid relative Solvent Accessible Surface Area (SASA). The problem of HS computational determination is usually riddled with class imbalance, as there are commonly more experimentally determine residues as NS than HS due to the nature of PPIs. Conversely, the size of the dataset is usually not large enough to dilute this discrepancy. As such, problems emerge on the dataset training, but, more importantly, on the analysis of the results. We developed SPOTONE (hot SPOTs ON protein complexes with Extremely randomized trees), a HS predictor that only makes use of protein sequence-based features, all of which were calculated with an in-house Python pipeline. To avoid protein-centered overfitting, features concerning the whole protein were not applied to the classification problem. This allowed us to avoid the predictor from learning HS/NS only on a specific subset of proteins and be able to correctly classify even for unforeseen subtypes of biological machineries. Furthermore, we deployed a rigorous train-test split that ensured equality among classes, not only in the training and testing datasets, but also regarding the amino acid types. The resulting platform and predictor are available at: http://moreiralab/resources/spotone. Results The results presented herein were attained following a ML pipeline, depicted in Figure 1, which lays the overall steps involved in dataset preparation and prediction model training and refinement. The detailed version of each step is further explored in the Material and Methods Section. Dataset We began by analyzing our dataset, the same previously mined and cleaned for SPOTON [22], composed by 534 amino acid residues, of which 127 are HS and 407 are NS, from 53 protein-protein complexes. Figure 2A shows the class distribution by amino acid type. Clearly, TYR, one of the most common HS in nature, is an outlier. Secondly, it should be noted that MET and CYS have no registered HS. Finally, it should also be noted that, due to the nature of the method used for HS experimental determination, there are no ALA residues in either the HS or NS class (as already explained). Figure 2B shows the split of the protein primary sequences into four equally long quartiles, which allowed us to analyze the HS/NS distribution along these ordered sections. It should be noted that, in the first quartile of the protein, the number of HS is at its highest value, although the number of NS is not equally as high. In the last quartile of the protein sequences, the number of overall registered HS/NS is the lowest; however, the proportion in which they stand favors the existence of HS rather than NS, in comparison with the remaining quartiles. The comparison with the literature-based features can be consulted at the landing page of our website. These features include secondary structure propensity, pKa associated values, number of atoms of each type and standard area and mass associated values. Their analyses can show tendencies of these features that correlate to their usefulness to the ML deployment. 
Firstly, the 534 amino acids were split into experimentally determined HS (127) and NS (407). Secondly, 60% of the entries of both classes were randomly picked for the train dataset while the remaining 40% were not used for the training phase (20% for test and 20% for an independent test). All datasets were matched with their corresponding 173 features. The training data were used to train the models, which were tested on the test set to yield HS/NS predictions. The predictions were then used to update probability thresholds and generate the final model, which basically consists of the trained model with subsequent HS probability correction. The final model was then applied to an independent test, which did not influence any step of the process, in order to be evaluated. More details on the used method can be found in the Materials and Methods Section. Machine-Learning Algorithms Tables A1 and A2 in the Appendix A list the full results attained for the various algorithms and methods. Table A1 shows that the in-house built features subset displayed one of the highest performance metrics in comparison with any of the other features alone. It can be noticed that PSSM led to a slight improvement, but the small difference of performance does not compensate the larger amount of time needed for this feature calculation. The introduction of iFeatures, concerning the whole protein, did not increase significantly the performance and introduced concerns related to protein-centered overfitting, and as such was discarded of further studies. The extremely randomized trees took the lead in most performance metrics, and it is clearly more robust in what concerns the identification of HS, as denoted by the high recall score. It should be noted that neither grid search parameter tuning nor prediction probability tuning according to amino acid type performance was used before method selection to keep the independent test unbiased (further explained in the Material and Methods Section). As such, all values presented in Table 1 concern default settings. This allowed the selection of extremely randomized trees algorithm for parameter tuning, as well as subsequent required alterations. To avoid the adaptation introduced and displayed in Table A3 leading to the generation of false positives, we set half of the testing set aside, comprising 20% of the whole dataset. Table 2, which lists the performance metrics of the parameter-tuned adapted model for both the training and the testing set, shows a significant increase in the testing performance, while the training scores remain unchanged. This trend was further validated by deploying the model in the independent testing set. Table 2. Performance metrics on the same training and testing sets after updating the prediction thresholds, and evaluated using the metrics accuracy (Acc), AUROC, precision (Prec), recall (Rec) and F1-score (F1). It should be noted that the "class_weight" parameter, available on the deployment of the extremely randomized trees used was particularly relevant in tackling class imbalance, since, by setting it to "balanced_subsample", it generates and updates class weights based on the samples. A full comparison with state-of-the-art predictions is shown in Table 3. Apart from SPOTON [22], two values for each performance metric are listed: on the left is the value assessed with the dataset used on SPOTONE and on the right are the values presented in the corresponding scientific papers for each method. 
These values were attained from the pipeline used in SPOTON [22]; since the dataset is the same, the performance comparison also stands equal. In the case of the sequence-based methods that are not SPOTONE, we were not able to deploy our dataset as the webservers indicated were not active or available; this applies to the methods of Nguyen Discussion This work presents a significant improvement in HS prediction at the interface of protein-protein complexes. However, more than the high performing metrics, the robustness of this model emerges from a thorough treatment and splitting of the dataset, as well as from the exclusion of whole protein sequence features, leaving only residue specific sequence-based features. Figures A1-A3 display the performance of SPOTONE upon being applied to three different complexes (PDB ids: 1a4y, 1jck and 3sak), with insights on all the residues experimentally determined for these complexes and comparison of this information to our HS/NS SPOTONE prediction. These three examples clearly show how well the predictor works on a point-by-point example. Our final accuracy (0.82), recall (0.82) and precision (0.91) highlight the existence of a very low number of falsely predicted HS as well as NS. Its closeness in performance to the best structural based predictor is complemented with the high versatility of using only sequence-based features prediction, which allows a much wider application in a variety of biological problems. Finally, all the work is available in a free-to-use platform that allows the user to input one or more protein sequences in FASTA format (Box 1) and attain a detailed HS/NS prediction with corresponding graphical interface. The platform is available at http://moreiralab.com/resources/spotone. Box 1. Example FASTA file, with the different proteins' chains separated by paragraphs and clear identifiers initiated with ">", separated from the single letter amino acid code chain with a paragraph. This needs to be stored in a ".fasta" file to be submitted to SPOTONE. Materials and Methods The dataset used here was retrieved from our previous method, SPOTON [22], and is comprised of 534 amino acid residues (127 positive-HS and 407 negative-NS). This dataset was constructed of data merged from the experimental databases ASEdb [12], BID [13], PINT [14] and SKEMPI [15], and as such comprises all literature available experimental data coming from alanine scanning mutagenesis. We also highlight that sequence redundancy was already eliminated in our previous work. To address this particular problem, we did not simply split the 534 samples into training and testing sets. Firstly, we split all the samples into two datasets containing either HS or NS. Of these datasets, we extracted 20 different subsets from each (corresponding to the 20 possible amino acids). We randomly split these 40 sets (20 HS subsets and 20 NS subsets) in a 60:40 ratio, using "train_test_split" from scikit-learn [28]. Finally, we stitched the tables corresponding to the training set and the testing set back together. Our process was devised to ensure that HS and NS were equally represented for each residue in both the training set and the testing set. Unfortunately, ALA entries were completely absent from the dataset (due to the experimental detection method typically used in wet labs) and CYS and MET only had NS entries (as these residues have a lower/null incidence as key in PPIs). 
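A minimal sketch of the per-amino-acid 60:40 split just described. The column names, the random seed, and the handling of residue types with too few entries (which the text resolves for ALA, CYS and MET immediately below) are assumptions about the underlying tables, not details confirmed by the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def split_by_residue(df, test_size=0.4, seed=42):
    """Split HS and NS entries 60:40 separately for every amino acid type,
    then stitch the pieces back together (column names are hypothetical)."""
    train_parts, test_parts = [], []
    for _, group in df.groupby(["is_hotspot", "residue_type"]):
        if len(group) < 2:                       # too few entries to split
            train_parts.append(group)            # keep them in the training set
            continue
        tr, te = train_test_split(group, test_size=test_size, random_state=seed)
        train_parts.append(tr)
        test_parts.append(te)
    return pd.concat(train_parts), pd.concat(test_parts)

# train_df, test_df = split_by_residue(dataset)   # dataset: 534 labelled residues
# test_df, independent_df = train_test_split(test_df, test_size=0.5, random_state=42)
```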
For the latter two cases, we included them in the training set, as it would not be possible to assay their presence in the testing set. Following this procedure, we ended up with a training set containing 312 residues and a testing set containing 222 residues. We randomly split the final testing set in two, with 111 residues each; half the testing set was used to fine-tune probability thresholds (see Prediction Probability Tuning), while the other half was set aside for fully independent test analysis, only having been used after selecting the ML model and performing all parameter tuning. Features The following section reports the calculation of 173 features with an in-house Python pipeline and literature-based information on amino acid characteristics. All the extracted features can be calculated simply using the input sequence of a FASTA file. It should be noted that we only used sequence-based features and, furthermore, we did not add any sequence feature about the protein as a "whole", which might have, due to the size of the dataset, promoted overfitting on a protein level. As shown in Tables A1 and A2, pre-constructed whole-sequence based features and Position-Specific Scoring Matrix (PSSM) were also tested. For the first, we used iFeature [29] and attained 14.056 whole sequence-based features, for each of the chains. For PSSM, we used an in-house psiblast [30] deployment to extract 42 position conservation features. One-hot Encoding (20 Features) The first twenty features extracted for each amino acid residue were simply a one-hot encoded representation of the amino acid; thus, for each amino acid, nineteen columns were filled with "0", and only one (with the corresponding value), was filled with "1". Relative Position Feature (1 Feature) In Figure 2B, we display the abundance of NS/HS on the protein sequence quartiles. The quartiles were defined by splitting the proteins' length by four and analyzing the residues present in each of the sections. As such, we used the numbering 1-4 (representing its relative position in the sequence) as a feature that indicates the quartile in which each amino acid is present. Literature-Based Features (19 Features) Several amino acid properties are constantly determined, updated and made available online. We downloaded 19 amino acid properties from the BioMagResBank [31] and associated each of them with each of the amino acids; the features and corresponding values per amino acid used are listed in Tables A4 and A5. Please note that this database is regularly updated to improve the reliability of the experimental data. The statistical distribution of these properties regarding their HS/NS on the dataset used are available in form of violin, scatter and boxplots on the landing page (http://www.moreira.com/resources/spotone). Window-Based Features (133 Features) Window-based features were described with a "sliding windows" that stopped on the target residue and considered the residues that stand close to it, sequence wise. We considered window sizes of 2, 5, 7, 10, 25, 50 and 75 amino acid residues, and, for each target residue, averaged the values corresponding to the features of in the Literature-Based Features Section on the residues comprised in the windows. Thus, if we multiply the number of raw features (19) by the number of windows (7), we added 133 features. 
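A minimal sketch of the window-based features: for each residue, the literature-based property values are averaged over a window of neighbouring positions in the sequence. The exact window definition (here, w residues on either side, clipped at the chain ends) and the property-table format are assumptions rather than details confirmed by the text.

```python
import numpy as np

WINDOWS = (2, 5, 7, 10, 25, 50, 75)

def window_features(sequence, prop_table, windows=WINDOWS):
    """For each residue, average each literature-based property over windows of
    neighbouring residues.  `prop_table` maps one-letter residue codes to a
    vector of property values (assumed to be provided elsewhere)."""
    per_residue = np.array([prop_table[aa] for aa in sequence])     # (L, n_props)
    n = len(sequence)
    feats = []
    for w in windows:
        averaged = np.array([per_residue[max(0, i - w): i + w + 1].mean(axis=0)
                             for i in range(n)])
        feats.append(averaged)
    return np.hstack(feats)          # shape (L, n_props * len(windows)), e.g. 19 * 7 = 133
```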
Machine-Learning Models Deployment We exploited different algorithms: Neural Networks ("MLPClassifier") [32], Random Forest ("RandomForestClassifier") [33], AdaBoost ("AdaBoostClassifier"), Support Vector Machine ("SVC") [34] and Extremely Randomized Trees ("ExtraTreesClassifier") [35]. All of the algorithms were used from their scikit-learn [28] deployment. The extremely randomized trees algorithm, similar to a random forest, is based on decision trees. From the training set, the algorithm picks attributes at random and generates subsets; by training these on the decision trees that comprise the model, an ensemble model is built by majority vote. However, one of the main differences to other algorithms is that it chooses node cut-points (the bifurcation points' thresholds in a decision tree) fully at random; another significant difference is that the full training set is used, instead of a bootstrap replica, for each of the decision trees that comprise the ensemble model. This additional randomization is ideal in small datasets, in which overfitting is more likely to occur on the training set without a proper test evaluation of robustness. This method has proven to have successful results in solving other biological based problems [36,37]. After running all the methods in default scikit-learn [28] settings, we fine-tuned some parameters of the extremely randomized trees [35] with a grid search ("GridSearchCV", scikit-learn [28]), and the following parameters were updated: "n_estimators": 500; "bootstrap": True; and class_weight: "balanced_subsample". The full set of parameters can be consulted in Table A6, the parameters not referred were kept as default. Grid search was performed with 10-fold cross-validation. Model Evaluation To evaluate the models, we subjected both the training and the testing set to confusion matrix analysis. This table relates the actual and the predicted instances (sample) and compares them by their binary status of Negative (N) or Positive (P) in the prediction to their actual class of True (T) or False (F). It further relates these in four different possible combination states: True Negative (TN) is when the prediction is N and the actual is F; True Positive (TP) is when the prediction is P and the actual is T; False Negative (FN) is when the prediction is N and the actual is T; and False Positive (FP) is when the prediction is P and the actual is F. Prediction Probability Tuning We performed further inspection of the HS/NS prediction by amino acid, in addition to the whole dataset, as can be seen in the "original" rows in Table A3. This inspection led us to notice that the HS/NS ratio had a significant toll in model performance. For example, TYR had a robust prediction of HS/NS; however, residues which had not such a balanced HS/NS ratio performed more poorly. Although this is a classification problem, most classification methods calculate class probability before yielding the predicted class, which is determined according to the higher probable class. As such, we examined the probability associated to the positive class (HS). Upon inspection of classification probabilities of the actual residues, it was noticed that, although not classified as HS, most of these amino acids still had a higher probability of HS classification than NS. The adaptation value displayed in Table A3 is the increase in probability of the HS class, added post-training, that allows higher HS probability amino acids to reach the HS class (above 50%). 
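Before returning to the probability adjustment discussed above and continued below, the extremely randomized trees deployment can be sketched as follows. The reported settings (n_estimators = 500, bootstrap = True, class_weight = "balanced_subsample", 10-fold grid search) are used, while the remaining grid values and the scoring metric are placeholders.

```python
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV

# Grid values other than those reported in the text are placeholders.
base = ExtraTreesClassifier(class_weight="balanced_subsample", random_state=42)
grid = GridSearchCV(
    estimator=base,
    param_grid={
        "n_estimators": [100, 250, 500],
        "bootstrap": [True, False],
        "max_features": ["sqrt", "log2"],
    },
    scoring="f1",     # assumed scoring metric
    cv=10,            # 10-fold cross-validation, as in the text
    n_jobs=-1,
)
# grid.fit(X_train, y_train)      # X_train: 312 residues x 173 features (assumed layout)
# model = grid.best_estimator_    # reported best: n_estimators=500, bootstrap=True
# hs_probability = model.predict_proba(X_test)[:, 1]   # probability of the HS class
```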
Prediction Probability Tuning
We further inspected the HS/NS prediction per amino acid type, in addition to the whole dataset, as can be seen in the "original" rows in Table A3. This inspection led us to notice that the HS/NS ratio took a significant toll on model performance. For example, TYR had a robust HS/NS prediction; however, amino acid types without such a balanced HS/NS ratio performed more poorly. Although this is a classification problem, most classification methods calculate class probabilities before yielding the predicted class, which is determined as the more probable one. As such, we examined the probability associated with the positive class (HS). Upon inspection of the classification probabilities of the actual HS residues, we noticed that, although not classified as HS, most of these residues still had a comparatively high probability of HS classification. The adaptation value displayed in Table A3 is the increase in the probability of the HS class, added post-training, that allows residues with a high HS probability to reach the HS class (above 50%). This value was implemented under the condition that it should not generate FP while increasing the number of TP. As such, when, for a given amino acid type, the maximum false-negative HS probability was higher than the maximum true-negative HS probability, the HS probability for that amino acid type was updated (Equation (7)). CYS, MET and ALA are not displayed in Table A3 due to their absence from the testing set.
Webserver Implementation
The webserver was fully implemented in Python. Plotly [38] was used for dynamic graphical representations; scikit-learn [28] was used to perform user submission treatment, analysis and prediction; and in-house Python scripts were used to perform all feature extraction and intermediate steps. Flask was used for the overall server set-up and visual layout construction [39]. The output of each run includes a dynamic heatmap displaying the probability of HS for each amino acid in the one or more chains submitted by the user. The full table with the classification probabilities, as well as the binary classes before and after class probability tuning, is also available for the user to download. A snapshot of the webserver output is displayed in Figure 3.
Conclusions
SPOTONE is a thorough prediction algorithm that tackles HS classification with a problem-tailored protocol. The pre-processing and ML steps can serve as a framework for further protein-based structural biology problems, as they innovate in several respects: (1) by highlighting the importance of protein-level overfitting versus amino acid based features; (2) by providing an answer with a set of simple, replicable, in-house features that make use of freely available information and amino acid position; (3) by considering the evaluation of the prediction capability per amino acid type instead of simply the target features at hand; (4) by attributing specific weights to amino acid types as a way to underline that these are not only features but also subsample spaces of the dataset; (5) by introducing a viable sequence-based HS predictor; and (6) by providing an intuitive and biologically relevant data interpretation tool (HS probability maps). Furthermore, SPOTONE as a webserver (http://moreiralab.com/resources/spotone) is easily usable by non-proficient researchers, with an intuitive framework.
Acknowledgments: A. J. Preto acknowledges José G. Almeida for thorough discussions on the proper approach to the problem at hand that yielded fruitful protocol improvements.
Conflicts of Interest: The authors declare no conflict of interest.
Data and Code Availability: All data and code used to perform the described experiments are available at https://github.com/MoreiraLAB/spotone.
Table A1. Performance metrics (training and testing datasets) for the three studied subsets: with only the in-house features (one-hot encoding, relative position, literature-based and window-based features), using only PSSM features, and the joint dataset with both in-house and PSSM features. (Columns: Dataset, Classifier Name, Subset, Accuracy, AUC, Precision, Recall, F1-Score; values not reproduced here.)
(A) Depiction of the complex between a T-Cell receptor beta chain and the SEC3 superantigen: PDB ID 1jck. Brighter red colors were attributed to residues with a higher probability of being classified as HS. (B,C) Close-ups of all interfacial residues for which there is an experimental ∆∆G binding value, and as such a HS/NS classification. Green boxes represent correctly predicted residues, whereas red boxes represent incorrectly classified residues.
Figure A3. (A) Depiction of chain C of the complex PDB ID 3sak. Brighter red colors were attributed to residues with a higher probability of being classified as HS. (B,C) Close-ups of all interfacial residues for which there is an experimental ∆∆G binding value, and as such a HS/NS classification. Green boxes represent correctly predicted residues, whereas red boxes represent incorrectly classified residues.
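Returning to the per-amino-acid probability adjustment described in the Prediction Probability Tuning section above, the rule can be sketched as follows. This is an illustrative reading of the stated condition (raise the HS probability of a residue type only when its maximum false-negative HS probability exceeds its maximum true-negative HS probability, so that TP can be gained without creating FP); the exact form of Equation (7) is not reproduced in this excerpt, so the size of the shift used below is an assumption.

```python
import numpy as np


def adaptation_values(residue_types, y_true, y_pred, p_hs):
    """Per-amino-acid increase in HS probability, chosen so that no FP is created.

    residue_types : one-letter amino acid codes, one per residue
    y_true, y_pred: actual and predicted classes (1 = HS, 0 = NS)
    p_hs          : predicted probability of the HS class
    """
    residue_types = np.asarray(residue_types)
    y_true, y_pred, p_hs = map(np.asarray, (y_true, y_pred, p_hs))
    adaptation = {}
    for aa in np.unique(residue_types):
        mask = residue_types == aa
        fn = mask & (y_true == 1) & (y_pred == 0)   # missed hot spots
        tn = mask & (y_true == 0) & (y_pred == 0)   # correctly predicted NS
        if fn.any() and (not tn.any() or p_hs[fn].max() > p_hs[tn].max()):
            # Illustrative choice: shift just enough to push the strongest false
            # negative above the 0.5 decision threshold; all TN stay below it.
            adaptation[aa] = max(0.0, 0.5 - p_hs[fn].max()) + 1e-6
        else:
            adaptation[aa] = 0.0
    return adaptation


def tuned_prediction(residue_types, p_hs, adaptation, threshold=0.5):
    """Apply the per-residue-type adaptation and re-threshold the HS class."""
    residue_types = np.asarray(residue_types)
    shifted = np.asarray(p_hs) + np.array([adaptation.get(aa, 0.0) for aa in residue_types])
    return (shifted > threshold).astype(int)
```

The adaptation would be estimated on the threshold-tuning half of the testing set and then applied unchanged to the fully independent half, mirroring the split described earlier.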
5,360.6
2020-10-01T00:00:00.000
[ "Computer Science", "Biology" ]
Curvature effects on the structural, electronic and optical properties of isolated single-walled carbon nanotubes within a symmetry-adapted non-orthogonal tight-binding model The effects of curvature on the structure, electronic and optical properties of isolated single-walled carbon nanotubes are studied within a symmetry-adapted non-orthogonal tight-binding model using 2s and 2p electrons of carbon. The symmetry-adapted scheme allows reducing the matrix eigenvalue problem for the electrons to diagonalization of 8×8 matrices for any nanotube type. Due to this simplification, the electronic band structure of nanotubes with a very large number of atoms in the unit cell can be calculated. Using this model, the structure of 187 small- and moderate-radius nanotubes is optimized. It is found that the deviations of the optimized structure from the non-optimized one are large for tube radii smaller than 5 Å. The band structure and the dielectric function of 101 small- and moderate-radius nanotubes are calculated. The optical transition energies for these nanotubes are derived from the dielectric function and plotted versus tube radius. It is shown that the structural optimization introduces small changes to the transition energies obtained within the non-orthogonal tight-binding model. The transition energies for the optimized structure within this model agree well with the available ab initio data for a few nanotube types. On the other hand, the results for the former deviate widely from those used for nanotube characterization in π-band tight-binding model especially for small-radius tubes. The derived transition energies can be used for the assignment of nanotube absorption spectra and for the selection of nanotube types for which the Raman scattering is resonant. Introduction The discovery of the carbon nanotubes in 1991 [1] and the speculations about their amazing electronic properties [2]- [4] directed much attention to their experimental and theoretical study. In the simplest case, a nanotube consists of a single graphitic layer and is termed a single-walled carbon nanotube (or, for brevity, a nanotube). A nanotube can be viewed as a long strip of a graphene sheet rolled up into a seamless cylindrical surface and can be characterized uniquely by a pair of non-negative integer numbers (L 1 , L 2 ). Nanotubes with diameters equal to or greater than that of C 60 are either metallic (zero-gap semiconducting) of the armchair type (L 1 = L 2 ), or small-or moderate-gap semiconductors (L 1 = L 2 ). In [2] it is shown that all armchair tubes are metallic in the presence of curvature and stable against a spontaneous symmetry breaking to far below room temperature. Hamada et al [3] used the graphene sheet model introduced in [2] together with all-valence tight-binding (TB) calculations for zigzag nanotubes (L 1 = 0, L 2 = 0) to show that nanotubes other than armchair should be either small-or moderategap semiconductors. Saito et al [4] applied the graphene sheet model in π-band tight-binding (π-TB) calculations to predict that nanotubes with L 1 − L 2 which is equal to a multiple of 3 are metallic instead of small-gap ones. Extensive band structure calculations for nanotubes were carried out using an all-valence TB approach and a first-principles all-electron density-functionaltheory approach within the local density approximation (LDA) [5] based on a symmetry-adapted scheme [6]. 
Detailed plane-wave ab initio pseudopotential LDA calculations of small-radius insulating nanotubes [7] showed that strongly modified low-lying non-degenerate conduction band states are introduced into the band gap due to σ * -π * rehybridization. As a result, the LDA gaps of some tubes are lowered by more than 50% and the tube (6, 0) previously predicted to be semiconducting within the all-valence TB models is shown to be metallic. Similar effects were observed in the electronic properties of carbon nanotubes with polygonized cross sections calculated within a plane-wave ab initio pseudopotential LDA approach [8]. Recently, in an extensive ab initio LDA study of nanotubes with radii between 5 and 7.5 Å, a shift of the electron eigenenergies up to ∼0.1 eV relative to the results of the π-TB model was predicted [9]. The optical properties of nanotubes have been treated exclusively within π-TB models within the gradient approximation for the matrix elements of the linear momentum. The selection rules for allowed dipole transitions were first discussed by Ajiki and Ando [10] in the study of the low-energy optical absorption due to interband transitions as a probe of the Aharonov-Bohm effect. π-TB calculations of the plasmons and optical properties of carbon nanotube systems were presented by two groups [11,12]. The polarized optical conductivity was calculated for a number of nanotubes with radii between 4 and 8 Å within an all-valence TB model [13] based on a symmetry-adapted scheme which is essentially the one introduced in [5]. Ab initio calculations of the dielectric function were carried out for a (5, 7) nanotube [14] and for three small-radius nanotubes: (3,3), (5,0) and (4,2) [15,16]. Among the various computed quantities, the optical transition energies are of great importance for the nanotube characterization since they can be of help for the assignment of the optical absorption spectra of nanotube samples. The predictions of the π-TB models have been widely used for these purposes [17]. However, the π-TB models cannot reproduce satisfactorily the electronic structure and optical properties of small-radius nanotubes. Recently, a π-TB model with a chirality-and diameter-dependent nearest-neighbour hopping integral was used to relate well-resolved features in the UV-VIS-NIR spectra of individual nanotubes to electronic excitations in specific tube types [18]. The precise computation of the optical properties of nanotubes requires the implementation of more realistic approaches. The ab initio calculations are hindered by the large number of atoms in the unit cell of most nanotubes. An alternative approach can be to use a well-tuned non-orthogonal TB model (see e.g. [3]- [5,13,19]) based on a symmetry-adapted scheme that will allow one to handle nanotubes with a large number of carbon atoms in the unit cell (see e.g. [5,13,20,21]). Here, results of structural optimization and calculation of the electronic band structure and dielectric function of a large number of nanotubes, carried out within a symmetry-adapted nonorthogonal tight-binding model, are presented. First, the main relations between the structural parameters of nanotubes are introduced in section 2. The symmetry-adapted non-orthogonal tight-binding model is presented in section 3. The optimized nanotube structure of all nanotubes, the obtained electronic band structure and the dielectric function for three small-radius nanotubes, as well as the transition energies for all armchair and zigzag nanotubes are given in section 4. 
The conclusions are presented in section 5. The nanotube structure The ideal single-walled carbon nanotube can be viewed as obtained by the rolling up of an infinite strip of a graphene sheet into a seamless cylinder [3]- [5]. The seamlessness of the tube means coincidence of lattice points connected on the sheet by a lattice vector L 1 a 1 + L 2 a 2 (a 1 and a 2 are the primitive translation vectors of the sheet, L 1 and L 2 are integer numbers, L 1 L 2 0). This ideal nanotube can be specified uniquely by the pair of indices (L 1 , L 2 ). We recall that a two-atom unit cell can be mapped onto the entire graphene sheet by the use of two primitive translation vectors. Similarly, a two-atom unit cell can be mapped onto the entire tube by use of two different screw operators (see figure 1) [6]. By definition, a screw operator {S i | t i } (i = 1, 2) executes a rotation of the position vector of an atom at an angle ϕ i about the tube axis with a rotation matrix S i and a translation of the position vector at a vector t i along the tube axis. Thus the equilibrium position vector x(lk) of the kth atom in the lth cell (l = (l 1 , l 2 )) can be obtained from the position vectors of the atoms in the zeroth unit cell x(k) ≡ x(0k) (k = 1, 2) as We adopt the abbreviated notation S 1 (l) = S l 1 1 S l 2 2 and t(l) = l 1 t 1 + l 2 t 2 and rewrite equation (1) in the form x(lk) = {S(l)| t(l)}x(k) = S(l)x(k) + t(l). (2) In a similar way, one of the atoms in the two-atom unit cell can be mapped unto the other atom by use of a screw operation defined by an angle ϕ and a translation vector t . The primitive rotation angles and the primitive translations of the two types of screw operations can be found from the translational periodicity and rotational boundary conditions Here T is the primitive translation vector of the nanotube. N 1 and N 2 are integer numbers defining T on the graphene sheet and are given by the relations The integer number d is equal to the highest common divisor d of L 1 and L 2 if L 1 − L 2 is not a multiple of 3d or d is equal to 3d if L 1 − L 2 is a multiple of 3d . Using equations (3)-(6), one obtains Here, the total number of atomic pairs in the unit cell, N c , is given by The position vectors of the atoms of the tube can be written as x(nlk) = x(lk) + nT, where the integer number n labels the translational unit cells and l labels the two-atom unit cells in each translational unit cell. A nanotube can be characterized alternatively by its radius R and chiral angle (or wrapping angle) θ which is the angle between the tube circumference and the nearest zigzag of C-C bonds, 0 • θ < 30 • [3]. For the 'rolled-up' structure these quantities are given by where a C-C is the C-C bond length in graphene. The 'rolled-up' structure is useful when the nanotube structure cannot be optimized, as is the case with some tight-binding models using fixed parameters. However, in other tight-binding models with explicit dependence of the parameters on the interatomic separations and in all ab initio models of the electronic structure one should optimize the nanotube structure. In the simplest case, only the bond lengths and valence angles for the two atoms in the two-atom unit cell are varied in the optimization procedure preserving the translational and the screw symmetry of the tube. Thus, R, T , ϕ , and t can be considered as independent structural parameters. For the optimized structure the above relations between R, T and θ, and L 1 , L 2 will generally no longer hold. 
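Since the explicit formulas for the "rolled-up" parameters are not reproduced in this excerpt, the sketch below uses the standard graphene-wrapping relations for the radius, chiral angle, translation period and number of atomic pairs N_c of an ideal (L1, L2) tube; the integer d follows the definition given in the text, and the C-C bond length is the usual graphene value, assumed here for illustration.

```python
import math

A_CC = 1.42  # graphene C-C bond length in angstrom (assumed value)


def rolled_up_parameters(L1, L2, a_cc=A_CC):
    """Standard 'rolled-up' structural parameters of an ideal (L1, L2) nanotube."""
    a = math.sqrt(3.0) * a_cc                          # graphene lattice constant
    ch = a * math.sqrt(L1 * L1 + L1 * L2 + L2 * L2)    # circumference |L1 a1 + L2 a2|
    radius = ch / (2.0 * math.pi)
    theta = math.degrees(math.atan2(math.sqrt(3.0) * L2, 2 * L1 + L2))  # chiral angle

    d0 = math.gcd(L1, L2)
    d = 3 * d0 if (L1 - L2) % (3 * d0) == 0 else d0    # the integer d of the text
    T = math.sqrt(3.0) * ch / d                        # primitive translation period
    n_c = 2 * (L1 * L1 + L1 * L2 + L2 * L2) // d       # number of two-atom unit cells
    return radius, theta, T, n_c


if __name__ == "__main__":
    for tube in [(3, 3), (5, 0), (4, 2), (10, 10)]:
        R, theta, T, n_c = rolled_up_parameters(*tube)
        print(tube, f"R = {R:.2f} A, theta = {theta:.1f} deg, T = {T:.2f} A, N_c = {n_c}")
```

For (5, 0), (3, 3) and (4, 2) this reproduces the non-optimized radii of about 1.96, 2.03 and 2.07 Å quoted later in the results; armchair tubes come out with a chiral angle of 30 degrees in this convention.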
It is worth noting that, if the translational symmetry condition is not imposed, the chiral tubes can loose their translational symmetry upon optimization. The error due to imposing this condition (see equation (28)) is expected to be small because of the usually large number of atoms in the unit cell of most chiral tubes. The symmetry-adapted non-orthogonal tight-binding model The electronic band structure of a periodic structure is usually obtained by solving the oneelectron Schrödinger equation where m is the electron mass, V(r) the effective periodic potential, ψ k (r) and E k are the oneelectron wavefunction and energy depending on the wavevector k. This equation can be solved by representing ψ k (r) as a linear combination of basis functions ϕ kr (r) In the tight-binding approach, the ϕs are constructed from atomic orbitals centred at the atoms. Let us denote by χ r (R(l) − r) the rth atomic orbital centred at an atom with position vector R(l) in the lth unit cell. Bloch's condition for the basis functions ϕ is satisfied for the following linear combination of χs: where N is the number of unit cells in the system. In the case of graphene, the lattice parameters are equal (a = b) and therefore R(l) = l 1 a 1 + l 2 a 2 = la. Then, one can introduce a dimensionless wavevector k = (k 1 , k 2 ) and rewrite equation (19) as After substitution of equation (20) in equation (17), the electronic problem for graphene is transformed into a matrix eigenvalue problem. In the case of nanotubes, one can still use the equations above. However, the number of atoms in the unit cell of some nanotubes can be very large, leading to a large-size matrix equation for the electronic problem. We notice, however, that any nanotube has a screw symmetry, which allows one to use only a two-atom unit cell for the electronic problem [6]. To implement explicitly the screw symmetry, we start with symmetrized wavefunctions, which satisfy a modified Bloch's condition under screw operations with any l. Wavefunctions with such a property have the form where k = (k 1 , k 2 ) is an yet undefined two-component wavevector of the nanotube and T rr (l) are appropriate rotation matrices rotating a given atomic-type orbital to the same orientation with respect to the nanotube surface for all atoms. It is straightforward to verify that a screw operation transforms a symmetrized wavefunction into itself up to a Bloch exponent. Substituting equation (21) in equation (17), we obtain where and The quantities H rr (l) and S rr (l) are the matrix elements of the HamiltonianĤ and the overlap matrix elements, respectively. The vectors R(l) and R (l) are position vectors of atoms in the lth two-atom unit cell. The wavevector components k 1 and k 2 can be determined by imposing the rotational boundary and translational periodicity conditions which yields the relations where k is the one-dimensional wavevector of the tube (-π k π) and the integer number l labels the electronic energy levels with a given k (l = 0, 1, . . . , N c − 1). From equations (27) and (28) we obtain k 1 and k 2 The substitution of equations (29) and (30) into equations (21)-(24) yields where l runs over the indices of the two-atom unit cells in the translational unit cell. 
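At each one-dimensional wavevector k and band label l, the symmetry-adapted scheme reduces the band-structure calculation to a generalized eigenvalue problem of fixed size, 8x8 for the 2s and 2p electrons of the two carbon atoms in the unit cell. The sketch below only illustrates that numerical step and the resulting cost advantage; the Hamiltonian and overlap builders are placeholders and carry none of the actual matrix elements of equations (23) and (24).

```python
import numpy as np
from scipy.linalg import eigh


def toy_matrices(k, l, size=8):
    """Placeholder Hermitian H(k, l) and positive-definite S(k, l); no real physics."""
    rng = np.random.default_rng(abs(hash((round(k, 6), l))) % (2**32))
    a = rng.normal(size=(size, size)) + 1j * rng.normal(size=(size, size))
    h = (a + a.conj().T) / 2.0
    s = np.eye(size) + 0.01 * (a + a.conj().T).real / 2.0
    return h, s


def band_energies(k, n_c, size=8):
    """Solve the 8x8 generalized eigenproblem H c = E S c for l = 0, ..., N_c - 1."""
    bands = []
    for l in range(n_c):
        h, s = toy_matrices(k, l, size)
        bands.append(eigh(h, s, eigvals_only=True))
    return np.array(bands)          # shape (N_c, 8)


# Cost argument: N_c diagonalizations of size 8 versus one of size 8 * N_c.
n_c = 20
print("symmetry-adapted cost ~", n_c * 8**3, " direct cost ~", (8 * n_c) ** 3)
print(band_energies(k=0.1 * np.pi, n_c=n_c).shape)
```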
The quantities α(l) and z(l) are given by The set of linear algebraic equations (32) has non-trivial solutions for the coefficients c only for energies E which satisfy the characteristic equation The solutions of equation (37), E klm , are the electronic energy levels; the energy bands are labelled by the composite index lm (m = 1, 2, . . .). The corresponding eigenvectors c klmr are determined from equation (32). The symmetry-adapted approach to the calculation of the electronic band structure of nanotubes has the important advantage of a large reduction in the computational time. Indeed, the calculation of the band energies for a given k needs time scaling as the cube of the size of the two-atom eigenvalue problem (8 3 for 4 electrons per carbon atom) times the number of two-atom unit cells N c . On the other hand, a straightforward calculation of the same band energies will require time scaling as the cube of the size of the 2N c -atom eigenvalue problem, i.e. (8N c ) 3 . The presented symmetry-adapted approach has been applied to the vibrational eigenvalue problem [20,21] and can be used in the calculation of any property of a nanotube within a microscopic model. The total energy of a nanotube (per unit cell) is given by where the first term is the band energy (the summation is over all occupied states) and the second term is the repulsive energy, consisting of repulsive pair potentials φ(r) between pairs of nearest neighbours. The increase of the total energy (per carbon atom) when a graphene strip is folded into a nanotube is the strain (or folding) energy E st : where E Gr is the total energy of graphene per unit cell. For the structural optimization of a nanotube one needs the band and the repulsive contributions to the forces acting on the atoms. The band contribution to the force in α direction on the atom with a position vector R(0) is given by the Hellmann-Feynman theorem The repulsive contribution is the first derivative of the total repulsive energy with respect to the position vector R(0). The imaginary part of the dielectric function in the random-phase approximation is given by [22] where h ω is the photon energy, e the elementary charge and m the electron mass. The sum is over all occupied (v) and unoccupied (c) states. p kl cklv,µ is the matrix element of the component of the momentum operator in the direction µ of the light polarization p kl cklv,µ = dr ψ * kl c (r)p µ ψ klv (r). For z-axis along the tube axis, the quantities f ll ,µ are given by Equations (45) and (46) express the selection rules for allowed dipole optical transitions, namely, optical transitions are only allowed between states with the same l for parallel polarization and between states with l and l differing by 1 for perpendicular polarization (compare with [10]). Further on, from Maxwell's relation ε =ñ 2 (ñ is the complex refractive index), the refractive index n = Reñ and the extinction coefficient κ = Imñ are readily obtained. The relations α = 2ωκ/c (c is the light velocity in vacuum) and R = |(ñ − 1)/(ñ + 1)| 2 allow one to derive the absorption coefficient α and the reflection coefficient for normal incidence R. Let us consider a single pair of valence and conduction bands with maximum and minimum separated by a direct gap E cv corresponding to an allowed optical transition v → c. Assuming that the matrix elements p cv,µ are independent of k, it is straightforward to show that the contribution to ε 2 from these bands is given by Here m * is the reduced effective mass for the two bands. 
Alternatively, for a pair of valence and conduction bands with minimum and maximum separated by energy E cv one obtains In the general case, the graph ε 2 (ω) will consist of two types of spikes close in form to those described by equations (47) and (48). From the derivation of the latter two equations it is clear that the electron density of states (DOS) versus ω will have the same two types of spikes. Results and discussion The parameters of the non-orthogonal tight-binding (nTB) model are taken from a densityfunctional-based study [23]. In the case of graphite, these parameters showed excellent performance in the calculation of the equilibrium lattice parameter and the cohesive energy. The tight-binding electronic structure of graphene corresponds well to the ab initio results for the valence and conduction bands in the energy region (-3, 3) eV (the Fermi energy is set to zero). The calculated energy separation between the π and π * bands at the M point of the Brillouin zone of graphene was 4.9 eV which agrees with the ab initio LDA result of ∼4.4 eV [16] while the π-TB value was ∼5.9 eV [4]. This implies that the optical properties of graphene should be reproduced with the same accuracy up to ∼6 eV. The same reliability region should be valid for carbon nanotubes as well. Here, the structure of 187 nanotubes with radii R in the range from 2 to 15 Å and N c < 400 is optimized within the nTB model. The optimization is carried out under the constraint that all atoms lie on a cylindrical surface and R, T , ϕ and t are considered as independent structural parameters. The total energy and the forces on the atoms of the two-atom unit cell are calculated with a tube-dependent number of k points N k for which these quantities converge. For example, N k = 60 for the tube (3,3) and N k = 30 for the tube (9,9). It was found that N k decreases nearly proportionally to 1/N c . It is clearly seen in figure 2 that, upon optimization, the nanotubes widen laterally and shorten in length, which has as a consequence a decrease of the chiral angle. This effect depends on the nanotube chirality. Armchair tubes have the smallest increase in radius and almost zero shortening and largest decrease in the chiral angle. Zigzag nanotubes have largest increase in the radius and medium shortening (the chiral angle is zero by definition). Chiral nanotubes have a moderate increase in the radius, largest shortening and medium decrease of the chiral angle. The trend of change of the structural parameters corresponds to the ab initio results for several nanotubes in [16,24,25]. For example, we obtain an increase in the radius for tubes (4,4) and (10, 10) of 1.6 and 0.3% compared to 1.2 and 0.2% [24]. For tubes (5, 0), (3,3) and (4, 2) with non-optimized radii of 1.96, 2.03 and 2.07 Å, we obtain optimized radii of 2.05, 2.12 and 2.14 Å that are in fair agreement with the ab initio results 2.04, 2.10 and 2.14 Å [16], and 2.06, 2.12 and 2.17 [25]. The maximum values of the changes in R, θ and T are 0.1 Å, 1 • , 0.4 Å, and the relative changes are 5, 4 and 1%, respectively. The optimized nanotube structure is characterized with non-equal bond lengths and bond angles for atoms in the two-atom unit cell. The differences between the optimized and non-optimized structures decrease with the increase in radius and can be ignored for radii larger than about 5 Å. It is interesting to compare the strain energy calculated in the present model ( figure 3) with the ab initio results [24]- [27]. 
Fitting the strain energy versus radius with E_st = C/R^2 [28], we obtain C = 2.133 eV Å^2 per atom.
Figure 2. Absolute changes of the radius R, chiral angle θ and primitive translation T for all nanotubes with radii between 2 and 15 Å and N_c < 400 upon optimization. It is clearly seen that the nanotubes widen laterally, shorten in length and the chiral angle decreases.
The calculated electronic band structure of three small-radius nanotubes (5, 0), (3, 3) and (4, 2) within the nTB and π-TB models is shown in figure 4. It is seen that the nTB band structure of these tubes deviates considerably from the π-TB one. Similar to the ab initio band structure [15,16], the large curvature of these tubes leads to large σ*-π* rehybridization, which modifies the nTB band structure with respect to that of π-TB [7,15,16]. In particular, nanotube (5, 0) is metallic, contrary to the predictions of π-TB. The crossing of the bands at the Fermi level in nanotube (3, 3) is at k ≈ 0.25π instead of at k = (2/3)π. Nanotube (4, 2) has an indirect band gap of ≈0.83 eV, which is half the direct gap of the π-TB model but a little larger than the ab initio value [15,16]. The calculated imaginary part of the dielectric function of nanotubes (5, 0), (3, 3) and (4, 2) within the nTB model for parallel and perpendicular polarization in the energy range from 0 to 6 eV is shown in figure 5. The form of the peaks follows approximately equations (47) and (48). The peaks in the spectra for parallel polarization originate from minima and maxima of occupied and unoccupied bands with the same quantum number l. For example, peak A1 in figure 5 can be associated with an optical transition between a maximum of an occupied band at ∼ −2 eV and a minimum of an unoccupied band at ∼1 eV of tube (3, 3). These minima and maxima give rise to spikes in the electronic density of states of the nanotubes. The peaks in the spectra for perpendicular polarization originate from minima and maxima of occupied and unoccupied bands, as well as from states on parallel parts of occupied and unoccupied bands with quantum numbers l and l ± 1. For example, peaks B1 and B2 come from such transitions near the crossing point of the bands at the Fermi level; peak B3 comes from states near the Brillouin zone boundary; peak B4 is due to transitions between minima and maxima of bands at the zone centre. The calculated transition energies for the three tubes (5, 0) (1.0, 1.2 eV), (3, 3) (2.9 eV) and (4, 2) (1.85, 1.96 eV) correspond well to the ab initio results of 1.2, 2.9 and 1.9 eV [16]. This is indirect evidence for the applicability of the density-functional-theory-based non-orthogonal tight-binding model used here to the optical properties of carbon nanotubes. It should be noted that the depolarization effects are not accounted for in the calculation of ε2 for perpendicular polarization. On the other hand, the small lateral size of the nanotubes can lead to strong depolarization effects and to a significant reduction of the dielectric function [10]. The precise inclusion of the depolarization effects is expected to result in corrections to the calculated dielectric function for perpendicular polarization, mainly in the peak height. The importance of knowing the dielectric function for both parallel and perpendicular polarizations has been underlined recently in a cross-polarized resonant-Raman study of nanotubes [29].
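The relations quoted in the model section for obtaining the optical constants from the complex dielectric function (ε = ñ², n = Re ñ, κ = Im ñ, α = 2ωκ/c and R = |(ñ − 1)/(ñ + 1)|²) are straightforward to apply once ε1 and ε2 are tabulated versus photon energy. A minimal sketch, with a made-up single-oscillator ε(ω) standing in for the computed nanotube spectra:

```python
import numpy as np

HBAR_EVS = 6.582119569e-16   # hbar in eV*s
C_CM_S = 2.99792458e10       # speed of light in cm/s


def optical_constants(photon_ev, eps1, eps2):
    """Refractive index, extinction, absorption and normal-incidence reflection."""
    n_tilde = np.sqrt(eps1 + 1j * eps2)          # complex refractive index
    n, kappa = n_tilde.real, n_tilde.imag
    omega = photon_ev / HBAR_EVS                 # angular frequency in 1/s
    alpha = 2.0 * omega * kappa / C_CM_S         # absorption coefficient in 1/cm
    refl = np.abs((n_tilde - 1.0) / (n_tilde + 1.0)) ** 2
    return n, kappa, alpha, refl


# Made-up dielectric function with a single Lorentzian peak near 2.9 eV, loosely
# mimicking a transition energy quoted above for tube (3, 3); purely illustrative.
e = np.linspace(0.1, 6.0, 600)
eps = 1.0 + 3.0 / (2.9**2 - e**2 - 1j * 0.1 * e)
n, kappa, alpha, refl = optical_constants(e, eps.real, eps.imag)
print(f"peak absorption ~ {alpha.max():.3e} cm^-1 at {e[alpha.argmax()]:.2f} eV")
```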
The electronic band structure and the imaginary part of the dielectric function for parallel light polarization are calculated within the nTB model for a large number of small-radius nanotubes. These quantities can be illustrated by means of the band gap and the optical transition energies. The band gaps are equal to the lowest transition energies for parallel polarization for almost all semiconducting tubes. Exceptions are some very-small-radius tubes where the rehybridization effects lead to indirect band gaps (e.g. tubes (4, 2), (5, 1), etc.). The nTB band gaps for the non-optimized (nTB1) and optimized (nTB2) structures of all zigzag tubes with 2 Å < R < 15 Å are given in figure 6 and are compared with π-TB results. First of all, small band gaps appear in all zigzag tubes that are predicted to be metallic by the π-TB model. It is seen in figure 6 that, in the case of zigzag tubes (6 + 3n, 0), the curvature-induced gaps for the optimized structure are larger than for the non-optimized one. The band gaps derived here for optimized zigzag tubes (6, 0), (9, 0), (12, 0) and (15, 0) are 0.45, 0.127, 0.046 and 0.025 eV, respectively. For comparison, the gaps of all-valence TB models with parameters fitted to experimental data are 0.20 and 0.04 eV [3] and 0.05 and 0.07 eV [7] for tubes (6, 0) and (9, 0), and 0.18 and 0.08 eV for tubes (6, 0) and (9, 0) [19]. The LDA gaps are (tube (6, 0) is found to be metallic): 0.17 eV for tube (9, 0) [7], and 0.093, 0.078 and 0.028 eV for tubes (9, 0), (12, 0) and (15, 0) [30]. The gaps measured by scanning tunnelling spectroscopy for tubes (9, 0), (12, 0) and (15, 0) are 0.080, 0.042 and 0.029 eV [31]. It is clear that the gaps calculated here correspond well to the LDA and the experimental values, as well as to those of other TB models. The band gaps of semiconducting zigzag tubes (5 + 3n, 0) and (7 + 3n, 0) are smaller than those of the π-TB model. Upon optimization within the nTB model, the band gaps are modified, with the changes being largest (≈0.5 eV) in the limit of very small radii. Therefore, although the effect of structural optimization is smaller than that of σ*-π* rehybridization, the former is not always negligible and structural optimization is necessary for higher reliability of the derived optical transition energies. Finally, the optical transition energies for all 101 nanotubes with 2 Å < R < 8 Å and N_c < 400 are derived from the dielectric function calculated for the optimized nanotube structure within the nTB model. It is clear from figure 7, where the transition energies are presented in comparison with the energies of the π-TB model, that the predictions of these two models deviate significantly for small tube radii. It is seen that the curvature-induced rehybridization effects are largest for small radii and decrease with increasing tube radius. For very small radii of ≈2 Å, the π-TB results overestimate the nTB transition energies by up to ≈0.5 eV. For moderate radii, the π-TB results are upshifted by about ≈0.1 eV. In ab initio LDA calculations, similar shifts of ≈0.1 eV were obtained for nanotubes with radii in the range 5 Å < R < 7.5 Å [9]. The transition energies determine the conditions for resonant Raman scattering in nanotubes. The calculations here show that the effects of the nanotube curvature on these energies have to be taken into account even for tubes with moderate radii.
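As an aside to the strain-energy discussion above, the E_st = C/R² fit can be reproduced with a few lines of least-squares fitting. The radius-energy pairs below are made-up placeholders generated from a nearby C value purely for illustration, not the computed nTB data; only the functional form and the idea of extracting C per atom come from the text.

```python
import numpy as np
from scipy.optimize import curve_fit


def strain_energy(radius, c):
    """Classical elastic form of the folding energy per atom, E_st = C / R**2."""
    return c / radius**2


# Hypothetical (radius [angstrom], strain energy per atom [eV]) pairs for the fit,
# generated from C = 2.1 eV*A^2 with a little noise.
rng = np.random.default_rng(1)
radii = np.linspace(2.0, 15.0, 25)
e_st = 2.1 / radii**2 * (1.0 + 0.02 * rng.normal(size=radii.size))

(c_fit,), cov = curve_fit(strain_energy, radii, e_st)
print(f"C = {c_fit:.3f} eV A^2 per atom")
```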
Conclusions The optimized structure, the electronic band structure and the dielectric function of a large number of small-and moderate-radius single-walled carbon nanotubes are studied within a nonorthogonal tight-binding model. The model is based on a symmetry-adapted scheme, which allows for significant reduction of the size of the matrix electronic eigenvalue problem. It is shown that the calculated electronic band structure of three small-radius nanotubes agrees well with ab initio simulations up to several eV above the Fermi energy and exhibits large differences with the π-TB results. Secondly, the dielectric function of many nanotubes is calculated within the random-phase approximation for energies up to 6 eV. The derived transition energies differ from the π-TB energies due to curvature effects. These differences are large for small radii and decrease with increase in tube radius. The obtained transition energies versus nanotube radius can be used for the determination of the conditions for resonant Raman scattering from nanotubes.
6,896.4
2004-02-01T00:00:00.000
[ "Materials Science", "Physics" ]